More docs on API docs

I’ve tabulated my entire collection of docs on API docs for your perusing pleasure in my API documentation bibliography. The documents for which I could find links to online copies have links. The rest have restricted access. Fortunately (and somewhat surprisingly) only 14 (12.5%) of the 112 articles I’ve found so far are restricted.

The articles are sorted by year published and title, so the most recent publications appear at the top. I’ll check in every once in a while to see what’s new in the field and update the list as necessary.

I’ve reviewed each of these to varying degrees of detail. Some are, admittedly, very detailed and read like a functional software specification (i.e. as dry as the Atacama), so in those, I read just enough to pull out the classification details.

I didn’t plan this (nor was I expecting it), but the author affiliations of these articles break down as follows:

  • 47.3% of the articles were written by academic authors
  • 42.0% of the articles were written by industry authors
  • 8.9% of the articles were written by a mix (collaboration?) of industry and academic authors

Continue reading “More docs on API docs”

How to read survey data

As it gets closer to our (American) mid-term elections, we’re about to be inundated with surveys and polls. But, even between elections, surveys are everywhere, for better or worse.

To help filter the signal from the noise, here is the list of tips I’ve collected over the years for critically reading reports based on survey data.

If you’re a reader of survey data, use these tips to help you interpret survey data you see in the future.

If you’re publishing survey data, be sure to consider these as well, especially if your readers have read this post.

To critically read survey data, you need to know:

  1. Who was surveyed and how
  2. What they were asked
  3. How the results are stated

Let’s look at each of these a bit more…

Continue reading “How to read survey data”

More articles on API documentation

I’ve just collected some more articles for my bibliography of API documentation-related articles and the trend I saw earlier this year hasn’t changed much. In all fairness, eight months is probably not enough time to see a change given the pace of academic publishing. I now have 114 articles in my API documentation bibliography.

114!

Searching for “API documentation” produces a lot of hits on actual API documentation (good news: there’s a lot of API documentation out there!). Searching for “writing API documentation” produces more articles relevant to what I’m looking for. I’ve also merged my academic and non-academic API documentation bibliographic data, so I can compare and contrast them.

The merged lists have these characteristics (with a quick sketch of the tally after the list):

114 articles! And I know I haven’t found them all yet.

  • 71% (81/114) of the articles are from CS-oriented sources
  • 29% (33/114) are from TC-oriented sources
  • 81% of the CS-oriented articles are from edited publications (books, journals)
  • 27% of the TC-oriented articles are from edited publications
  • 27% (31/114) of the API documentation articles were published in 2017
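
If you want to sanity-check those shares, they’re just simple ratios over the 114-article list. Here’s a minimal Python sketch of the tally; the category labels are hypothetical stand-ins for whatever columns your own bibliography spreadsheet uses, and only the counts cited above are included:

    # Recompute the headline shares from the merged bibliography.
    # The counts are the ones cited above; the labels are hypothetical.
    TOTAL = 114

    counts = {
        "CS-oriented sources": 81,
        "TC-oriented sources": 33,
        "Published in 2017": 31,
    }

    for label, n in counts.items():
        print(f"{label}: {n}/{TOTAL} = {n / TOTAL:.0%}")
    # CS-oriented sources: 81/114 = 71%
    # TC-oriented sources: 33/114 = 29%
    # Published in 2017: 31/114 = 27%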

So, what does this mean?

Continue reading “More articles on API documentation”

New articles on API documentation

These are some of the new articles I found while browsing Google Scholar on the subject of API documentation since my last update on the topic. I’ve found some 20 new articles since January, which is exciting! Here’s a review of just the ones I’ve had a chance to skim and write up, so far.

This recent trove of documents makes me happy and sad, as did the papers I reviewed in an earlier post. These articles contain lots of great research into API documentation, who uses it, and how it’s used. All of them are available for download, and I encourage you to download them if you’re interested in API documentation.

At the same time, I’m still disappointed that there has been absolutely zero research published on the topic from the technical communication community this year (was I really the only one who was doing that lately?!) and little to no reference to anything tech-comm related in these papers. If you know of some API research published in a tech-comm venue (and that’s not already in my list), please let me know. At this point, all this is just a warning that some of my disappointment might seep through into my article reviews.

Many of these articles (three of the four) had both academic and industry authors, which suggests that industry-academia partnerships aren’t that unusual in API documentation research—if you’re in computer science. In tech comm, as Tom Johnson’s recent survey laments, not so much.

Some of these articles cite earlier research in developer user studies and also contribute to that body of work. None of them, however, cite what I would call writing or reading research. That wouldn’t be so troubling if it were just these articles, but I can’t think of an article on the topic of API documentation published in a computer science venue that talks about actually reading and using the content. At the same time, there now seems to be a recognizable genre in API documentation literature that Head et al. describe as “finding anti-patterns in documentation” (see reviews below).

So, here’s a short review of four of the articles I read this morning.

Continue reading “New articles on API documentation”

The first piClinic articles have gone live!

Life’s a beach!

Although I’ve been in the field conducting research for the past month (in places such as the one depicted in the photo), I still managed to publish and “present” several research papers that have to do with the piClinic Console. More are still in the pipeline, so stay tuned…

In Using Independent Studies to Enhance Usability Assessment Skills in a Generalist Program, co-authored with Dr. Pam Estes Brewer, Associate Professor in my department, we talk about how we used the development of the piClinic Console as an independent-study project for one of her usability research students. Dr. Brewer presented this paper July 23 at the IEEE PROCOMM conference in Toronto. The short story is that the project provided an excellent challenge for her student, and her student provided vital usability research data that informed the design’s iterations throughout the year. The paper also describes some of the other projects we’ve used to help develop future usability researchers.

Enriching Technical Communication Education: Collaborating Across Disciplines and Cultures to Develop the piClinic Console was just presented in Milwaukee, WI at the 36th ACM International Conference on the Design of Communication. Instead of a personal appearance, I sent them this video to present in my stead. The paper details the design process and how it was applied to our technical communication curriculum; one example is the usability research independent-study project described in the preceding paper. Other tech comm lessons the project has produced include some visual UI design, the production of the promotional video that appears on piclinic.org, and several projects for the computer engineering department. The video, on the other hand, provides some of the back story behind the project.

For more interesting articles, see the complete list of my publications.

Yes, YOU can write docs as cool as Twilio’s

After attending Write the Docs 2018 (Portland edition) and watching a YouTube video, I’ve got it figured out: how you (i.e. anyone) can create documentation as cool as Twilio’s–a consistent entry in lists of the best API documentation examples.

All you need to do is follow these six steps:

  1. Think, plan, and work iteratively.
  2. Infuse user research and analytics into your entire writing process.
  3. Treat your documentation as a product.
  4. Do what’s best for your documentation customer.
  5. Create and support a writing process that enables contributions from outside of the writing team.
  6. Build the CMS that supports all of the above.

That’s it!

You’ll need to understand that it took them a few years to get to where they are, today. And, it sounds like they’ll keep moving the bar.

But, don’t be discouraged. The sooner you start, the sooner those three to five years will be behind you.

A little background

Twilio’s Kat King opened Write the Docs 2018 in Portland this May, starting the conference off with a strong focus on user testing and research. That always gets my attention at a tech-writing conference. The next session, by Jen Lambourne, continued the theme by describing how she applied user research to documentation. I’ve been going to tech writing conferences for quite a while and I can’t recall going to one with such a user-research focus.

I’ll make references to these videos in what follows, in case you want to go straight to the source (something I always encourage). The practices they describe, however, are straight out of user-centered design. You can also see some of my favorite books on the topic for more background.

What’s noteworthy here is that the Twilio team has applied them successfully to their documentation–proving, if nothing else, that documentation works like any other product (and perhaps works best when treated as one).

So, here’s how the steps work:

Continue reading “Yes, YOU can write docs as cool as Twilio’s”

Tech comm education and the job market

One of the many informative experiences I had at this year’s WriteTheDocs 2018 conference was my chat with employers/recruiters at the Job Fair. My primary goal was to learn what they looked for in their technical communication interns and entry-level recruits, so I could bring that back to school. Having only recently left industry for academia, I wasn’t surprised by what I learned, but it was still a wake-up call.

Recruiters want people (even their interns) who have experience, and they want that experience to be current and relevant.

I agree, that’s a bit of a “dog bites man” story. Of course they do. All other things being equal, having current and relevant experience is almost always better than not. I suspect that’s because of the assumption that, with it, the person will become a productive team member sooner rather than later. What I think is often overlooked is how other idiosyncratic aspects of the specific job might have a greater influence on the time to productivity, such as the onboarding support (discussed in Sarah Day’s talk at this year’s conference), but that’s not something that I, as an educator, have much control over (unless you’re hiring me to consult on the process, of course).

What an intern needs

One of the things I have to keep in mind when reviewing my notes is that I was at a software documentation conference and the companies I talked to were software companies with open-source software products to document. In that context, it was natural that they were looking for writers with experience in their product domains: open-source tool experience (Git, GitHub, Markdown, etc.). They also looked for a passion for documentation—as evidenced by experience with such tools (e.g. documenting an open-source project on GitHub). I don’t know how many of these observations transfer to looking for writers in other industries.

Continue reading “Tech comm education and the job market”

Write the Docs 2018

Had an amazing experience at the 2018 WriteTheDocs conference in Portland, OR, last week. This was the first time I’ve been able to attend the conference, although I’ve followed it for years. While you can watch the formal talks from this and previous conferences on YouTube, it really is nothing like attending in person. What you don’t get from the excellent YouTube videos, but is almost tangible in person during the conference, is the sense of camaraderie, support, and acceptance among the participants.

The talks were a collection of first-person testimonials and experience reports by experienced, practicing (well, except for me, perhaps) technical writers and managers. They all had a practical focus and tangible takeaways the audience could put to work on the job.

Bottom line, if you are a technical writer and can go to only one conference a year, I recommend this one quite highly.

The downside to the conference is there are too many places to be at once. In addition to the formal talks in the ballroom, a parallel “unconference” and job fair took place in a smaller room below the ballroom. To complicate matters further, there were all the people I knew from the WriteTheDocs Slack to meet in person.

Continue reading “Write the Docs 2018”

Collecting feedback about your documentation

KC-135R Engine Instruments showing various indications during testing
Lots o’ data

In my thread on user interactions with documentation, I suggested that you want your measurement instrument, usually some form of survey question(s), to influence the experience you’re trying to measure as little as possible. From a measurement standpoint, that’s nothing new: you never want the act of measuring to influence the thing you’re measuring, lest you contaminate the results.

When it comes to feedback (in this case, documentation or experience feedback), a recent tweet by Nate Silver (@NateSilver538) described how the use and perceived use of the measurement system had contaminated the data collected by Uber/Lyft (and countless other services). He tweeted, “Given that an equilibrium has emerged where any rating lower than 5 stars means your Uber/Lyft driver was bad, they should probably just replace the 5-star scale with a simple thumbs-up/thumbs-down.”
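
To make that equilibrium concrete: if anything below 5 stars effectively reads as “bad,” the 5-star scale only carries one bit of information, so collapsing it costs you nothing. Here’s a minimal, entirely hypothetical Python sketch of that collapse applied to documentation-feedback ratings (the sample ratings are made up):

    # Hypothetical illustration of the point in the tweet: a 5-star scale
    # where anything under 5 means "bad" conveys only a binary signal.
    def collapse_to_thumbs(star_rating: int) -> str:
        """Map a 1-5 star rating to the binary signal it actually conveys."""
        return "thumbs-up" if star_rating == 5 else "thumbs-down"

    ratings = [5, 4, 5, 3, 5]  # made-up documentation-feedback ratings
    print([collapse_to_thumbs(r) for r in ratings])
    # ['thumbs-up', 'thumbs-down', 'thumbs-up', 'thumbs-down', 'thumbs-up']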

Continue reading “Collecting feedback about your documentation”

From getting started to getting finished

As a technical writer who occasionally does some development on the side, I appreciate a fast Time to First Hello World (TTFHW)—the time it takes to create and run a simple program using a new (to me) API. The time to get started, in other words. The Nest API documentation I recently reviewed reminded me of a much less frequently mentioned notion: the time to get finished.

If you’re not familiar with TTFHW, it’s basically how long it takes a new user to:

  1. Find your product
  2. Resonate with its value proposition
  3. Test it out to verify it provides them with a viable solution

The metric is operationalized as the time it takes someone to write a simple program that demonstrates step 3. From a measurement standpoint, this is a reasonable goal in that:

  1. It’s tangible and relatively unambiguous in its meaning (the reader was successful in accomplishing the goal).
  2. It can be measured accurately if your help documentation provides a way to link the documentation to the use of the API, through a user-specific key or an interactive development environment (see the sketch after this list).
  3. It provides a tangible engagement metric that shows your product is attracting and engaging potential customers.
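
To show how that linkage might work, here’s a minimal sketch that computes TTFHW as the time between a user’s first documentation visit and their first successful API call. The event and field names are hypothetical stand-ins for whatever your documentation analytics and API logs actually record; the only assumption is that both can be joined on a user-specific key:

    from datetime import datetime

    # Hypothetical doc-site and API telemetry, joined on a user-specific key.
    doc_events = [
        {"api_key": "user-123", "event": "docs_first_visit",
         "time": datetime(2018, 6, 1, 9, 0)},
    ]
    api_events = [
        {"api_key": "user-123", "event": "first_successful_call",
         "time": datetime(2018, 6, 1, 9, 42)},
    ]

    def ttfhw(api_key, doc_events, api_events):
        """Time from first docs visit to first successful API call for one user."""
        start = min(e["time"] for e in doc_events
                    if e["api_key"] == api_key and e["event"] == "docs_first_visit")
        done = min(e["time"] for e in api_events
                   if e["api_key"] == api_key and e["event"] == "first_successful_call")
        return done - start

    print(ttfhw("user-123", doc_events, api_events))  # 0:42:00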

While it’s good to use that as a metric for engagement and potential adoption, it’s important to not treat it as the only metric that counts because it doesn’t necessarily represent retention or other aspects where the money is made or lost.

Continue reading “From getting started to getting finished”