More docs on API docs

I’ve tabulated my entire collection of docs on API docs for your perusing pleasure in my API documentation bibliography. The documents for which I could find online copies are linked; the rest have restricted access. Fortunately (and somewhat surprisingly), only 14 (12.5%) of the 112 articles I’ve found so far are restricted.

The articles are sorted by year published and title, so the most recent publications appear at the top. I’ll check in every once in a while to see what’s new in the field and update the list as necessary.

I’ve reviewed each of these to varying degrees of detail. Some are, admittedly, very detailed and read like a functional software specification (i.e. as dry as the Atacama), so for those, I read just enough to pull out the classification details.

I didn’t plan this (nor was I expecting it), but the author affiliations of these articles break down as follows:

  • 47.3% of the articles were written by academic authors
  • 42.0% of the articles were written by industry authors
  • 8.9% of the articles were written by a mix (collaboration?) of industry and academic authors

Continue reading “More docs on API docs”

How to read survey data

As we get closer to our (American) mid-term elections, we’re about to be inundated with surveys and polls. But even between elections, surveys are everywhere, for better or worse.

To help separate the signal from the noise, here is a list of tips I’ve collected over the years for critically reading reports based on survey data.

If you’re a reader of survey data, use these tips to help you interpret survey data you see in the future.

If you’re publishing survey data, be sure to consider these as well, especially if your readers have read this post.

To critically read survey data, you need to know:

  1. Who was surveyed and how
  2. What they were asked
  3. How the results are stated

Let’s look at each of these a bit more…
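For instance, one quick back-of-the-envelope check related to points 1 and 3 is the margin of error implied by the sample size. Here’s a minimal sketch in Python, assuming a simple random sample (which real polls rarely are), of why the sample size behind a headline percentage matters as much as the percentage itself:

    import math

    def margin_of_error(p, n, z=1.96):
        # Approximate 95% margin of error for a reported proportion p
        # from a simple random sample of size n (normal approximation).
        return z * math.sqrt(p * (1 - p) / n)

    # Example: "52% of respondents agree," reported from a sample of 400 people.
    moe = margin_of_error(0.52, 400)
    print(f"52% +/- {moe * 100:.1f} percentage points")  # about +/- 4.9 points

In other words, “52% agree” from 400 respondents is really “52%, give or take about 5 points,” which is worth keeping in mind whenever the reported difference is smaller than the give-or-take.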

Continue reading “How to read survey data”

More articles on API documentation

I’ve just collected some more articles for my bibliography of API documentation-related articles, and the trend I saw earlier this year hasn’t changed much. In all fairness, eight months is probably not enough time to see a change, given the pace of academic publishing. I now have 114 articles in my list of API documentation-related topics.

114!

Searching for “API documentation” produces a lot of hits on actual API documentation (good news: there’s a lot of API documentation out there!). Searching for “writing API documentation” produces more articles relevant to what I’m looking for. I’ve also merged my academic and non-academic API documentation bibliographic data so that I can compare and contrast them.

The merged list, now at 114 articles (and I know I haven’t found them all yet), has these characteristics (a sketch of how to tally them follows the list):

  • 71% (81/114) of the articles are from CS-oriented sources
  • 29% (33/114) are from TC-oriented sources
  • 81% of the CS-oriented articles are from edited publications (books, journals)
  • 27% of the TC-oriented articles are from edited publications
  • 27% (31/114) of the API documentation articles were published in 2017
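If you’re curious how numbers like these fall out of the merged data (or want to check my arithmetic), here’s a minimal sketch of the kind of tally script I mean. The file name and the source_type, venue_type, and year columns are hypothetical; they’re just one way the merged bibliography could be organized:

    import csv
    from collections import Counter

    def summarize(path):
        # Tally a merged bibliography by source type, venue type, and year.
        # Assumes (hypothetical) CSV columns: source_type, venue_type, year.
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))

        total = len(rows)
        by_source = Counter(r["source_type"] for r in rows)   # e.g., "CS" or "TC"
        edited = Counter(r["source_type"] for r in rows
                         if r["venue_type"] == "edited")       # books, journals
        in_2017 = sum(1 for r in rows if r["year"] == "2017")

        for src, count in sorted(by_source.items()):
            print(f"{src}: {count}/{total} ({100 * count / total:.0f}%), "
                  f"{100 * edited[src] / count:.0f}% from edited publications")
        print(f"Published in 2017: {in_2017}/{total} ({100 * in_2017 / total:.0f}%)")

    summarize("api_doc_bibliography.csv")  # hypothetical file name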

So, what does this mean?

Continue reading “More articles on API documentation”

New articles on API documentation

These are some of the new articles on API documentation that I found while browsing Google Scholar since my last update on the topic. I found about 20 new articles since January, which is exciting! Here’s a review of just the ones I’ve had a chance to skim and write up so far.

This recent trove of documents makes me happy and sad, as did the papers I reviewed in an earlier post. These articles contain lots of great research into API documentation, who uses it, and how it’s used. All of them are available for download, and I encourage you to download them if you’re interested in API documentation.

At the same time, I’m still disappointed that there has been absolutely zero research published on the topic by the technical communication community this year (was I really the only one doing that lately?!) and that there is little to no reference to anything tech-comm related in these papers. If you know of some API documentation research published in a tech-comm venue (and that’s not already in my list), please let me know. For now, this is just a warning that some of my disappointment might seep through into my article reviews.

Many of these articles (3/4) had both academic and industry authors, which suggests that industry-academia partnerships aren’t that unusual in API documentation research, at least if you’re in computer science. In tech comm, as Tom Johnson’s recent survey laments, not so much.

Some of these articles cite earlier research in developer user studies and also contribute to that body of work. None of them, however, cite what I would call writing or reading research. That wouldn’t be so troubling if it were just these articles, but I can’t think of an article on API documentation published in a computer science venue that talks about actually reading and using the content. At the same time, there now seems to be a recognizable genre in the API documentation literature that Head et al. describe as “finding anti-patterns in documentation” (see reviews below).

So, here’s a short review of four of the articles I read this morning.

Continue reading “New articles on API documentation”

The first piClinic articles have gone live!

Life’s a beach!

Although I’ve been in the field conducting research for the past month (in places such as the one depicted in the photo), I still managed to publish and “present” several research papers related to the piClinic Console. More are in the pipeline, so stay tuned…

In Using Independent Studies to Enhance Usability Assessment Skills in a Generalist Program, co-authored with Dr. Pam Estes Brewer, an Associate Professor in my department, we talk about how we used the development of the piClinic Console as an independent-study project for one of her usability research students. Dr. Brewer presented the paper on July 23 at the IEEE ProComm conference in Toronto. The short story is that the project provided an excellent challenge for her student, and her student provided vital usability research data that informed the design’s iterations throughout the year. The paper also describes some of the other projects we’ve used to help develop future usability researchers.

Enriching Technical Communication Education: Collaborating Across Disciplines and Cultures to Develop the piClinic Console was just presented in Milwaukee, WI, at the 36th ACM International Conference on the Design of Communication. Instead of a personal appearance, I sent this video to present in my stead. The paper details the design process and how it was applied to our technical communication curriculum; the usability research independent study described in the preceding paper is one example. Other tech comm lessons the project has produced include some visual UI design, the production of the promotional video that appears on piclinic.org, and several projects for the computer engineering department. The video, on the other hand, provides some of the back story behind the project.

For more interesting articles, see the complete list of my publications.

What’s a piClinic Console?

piClinic Console prototype (a monitor, keyboard, and mouse)

So, I’m taking a break from technical writing-related thoughts and posts for a while. For the next two months, I’ll be focusing on the development of what I suppose you could call my side project of the past few years. While I’ve been working on this for four years or so, it’s only now starting to gain some momentum.

The piClinic Console.

The project’s website, http://piclinic.org, says,

“The piClinic is an open-source, patient-record automation solution designed for limited-resource healthcare clinics around the world. piClinic systems fill the gap between a paper-based patient-record system and a complete Electronic Health Record (EHR) system at a very low cost per system. The piClinic is built on the digital principles to provide an accessible, sustainable, and low-cost solution.”

No, seriously, what the heck is a piClinic Console?

Here’s the story.

Continue reading “What’s a piClinic Console?”

Unqualified best practices are just slogans

I had the pleasure of joining Tom Johnson in another podcast, and one of the topics we touched on was that of so-called best practices. Today, I stumbled across this post in a thread about high-tech job interviews:

On a personal note, it was actually a series of such experiences that convinced me to take my current job in academia.

One of the replies linked to this post: Best practices considered harmfull [sic], which summed it up as “Work out what your best practice is, work out how you can improve yourself.”

Unfortunately, by the time I got to this point in my feed, my blog reflex had been triggered, and here we are.

If we have to find our own best practices, what’s the point of having best practices?

Good question.

Continue reading “Unqualified best practices are just slogans”

Putting some wind beneath my wings

This week, my flying experiences spanned a broad spectrum. For the newcomers: I’m licensed to fly single- and multi-engine airplanes, but lately I’ve been flying airplanes with only one fan to keep the pilot cool (i.e. single-engine airplanes). This week, however, I had some new experiences.

Boeing 737-200 Simulator

A couple of days ago, I visited the Delta Flight Museum at Atlanta’s Hartsfield-Jackson International Airport to fly their Boeing 737-200 flight simulator. At over 100,000 pounds of simulated takeoff weight, the 737-200 is the second-biggest aircraft I’ve flown (simulated or otherwise). In college, I flew the Boeing 747-200 simulator with the flying club, and the 747-200 weighs roughly eight times as much as the 737-200.

Continue reading “Putting some wind beneath my wings”

Yes, YOU can write docs as cool as Twilio’s

After attending Write the Docs 2018 (Portland edition) and watching a YouTube video, I’ve got it figured out: how you (i.e. anyone) can create documentation as cool as Twilio’s, a consistent entry in lists of the best API documentation examples.

All you need to do is follow these six steps:

  1. Think, plan, and work iteratively.
  2. Infuse user research and analytics into your entire writing process.
  3. Treat your documentation as a product.
  4. Do what’s best for your documentation customer.
  5. Create and support a writing process that enables contributions from outside of the writing team.
  6. Build the CMS that supports all of the above.

That’s it!

You’ll need to understand that it took them a few years to get to where they are today. And it sounds like they’ll keep raising the bar.

But, don’t be discouraged. The sooner you start, the sooner those three to five years will be behind you.

A little background

Twilio’s Kat King opened Write the Docs 2018 in Portland this May, starting the conference off with a strong focus on user testing and research. That always gets my attention at a tech-writing conference. The next session, by Jen Lambourne, continued the theme by describing how she applied user research to documentation. I’ve been going to tech-writing conferences for quite a while, and I can’t recall one with such a user-research focus.

I’ll make references to these videos in what follows, in case you want to go straight to the source (something I always encourage). The practices they describe, however, come straight out of user-centered design. You can also see some of my favorite books on the topic for more background.

What’s noteworthy here is that the Twilio team has applied them successfully to their documentation–proving, if nothing else, that documentation works like any other product (and perhaps works best when treated as one).

So, here’s how the steps work:

Continue reading “Yes, YOU can write docs as cool as Twilio’s”

Tech comm education and the job market

One of the many informative experiences I had at this year’s Write the Docs 2018 conference was my chat with employers and recruiters at the Job Fair. My primary goal was to learn what they looked for in their technical communication interns and entry-level recruits, so I could bring that back to school. Having only recently left industry for academia, I wasn’t surprised by what I learned, but it was still a wake-up call.

Recruiters want people (even their interns) who have experience, and they want current and relevant experience.

I agree, that’s a bit of a “dog bites man” story. Of course they do. All other things being equal, having current and relevant experience is almost always better than not, presumably because, with it, a person will become a productive team member sooner rather than later. What I think is often overlooked is how other idiosyncratic aspects of the specific job might have a greater influence on the time to productivity, such as the onboarding support (discussed in Sarah Day’s talk at this year’s conference), but that’s not something that I, as an educator, have much control over (unless you’re hiring me to consult on the process, of course).

What an intern needs

One of the things I have to keep in mind when reviewing my notes is that I was at a software documentation conference, and the companies I talked to were software companies with open-source software products to document. In that context, it was natural that they were looking for writers with experience in their product domains: open-source tool experience (Git, GitHub, Markdown, etc.). They also looked for a passion for documentation, as evidenced by experience with those tools (e.g. documenting an open-source project on GitHub). I don’t know how many of these observations transfer to hiring writers in other industries.

Continue reading “Tech comm education and the job market”