Tips for conducting documentation research on the cheap

In my previous post, I presented some experiences with testing and the resulting epiphanies. In this post, I talk more about the process I applied.

The process is simple, yet that’s what makes it difficult. The key to success is to take it slow.

The question

Start with something simple (and then simplify it). Your first questions will invariably be too big to answer all at once, so think, “baby steps.”

Instead of asking, “How can we improve our documents?” I asked, “What do users think of our table of contents (ToC)?” Most users don’t care about how we can improve our docs, unless the docs are annoyingly bad, so they don’t give it much thought. They do use the ToC, as we found out, but not in a way that we could have counted.

The sample

Recruit whoever you can get to sit with you. Try to ask people who are close to your target audience, if you can, but anyone who is not you or in your group is better than you at helping you learn things that will answer your question.

The process

Listen with a curious mind. After coming up with an answerable question, this is the next hardest thing to do—especially if people are reviewing something that you had a hand in writing or making.

Your participants will invariably misinterpret things and miss the “obvious.” You’ll need to suffer through this without [too much] prompting or cringing. Just remind yourself those moments are where the learning and discovery happen (after the injuries to egos and knees heal, anyway).

When a participant asks for help, such as, “Where’s the button or link to do ‘X’?” a trick I learned from more experienced usability testers is to respond, “Where do you think it should be?” That way you learn something about the user experience, rather than just finishing the task without learning anything. If they’re still stumped, you can help them along, but only after you’ve learned something. Remember, you’re there to learn.

Continue reading “Tips for conducting documentation research on the cheap”

Documentation research requires more curiosity than money

Sure, money helps, but success doesn’t always correlate with dollars spent.

Here are a couple of examples that come to mind from my experience.

piClinic research

My favorite research success story (perhaps because it turned out well) occurred while I was researching the piClinic project. While on a medical mission to a rural clinic in Honduras, I saw a mountain of paper patient records that held a lot of seemingly valuable information that could never be tapped. Clearly (to me), computerizing those records would improve things. Based on my first-hand experience, I felt that automating record storage would make patient records easier to store and retrieve.

It would, and later, it did.

But…

When I later actually sat down and interviewed the target users and watched what they did during the day and, more importantly, during the month, I learned that what I thought was their biggest obstacle, storage and retrieval, was not really a problem for them.

It turned out that the real time-consumer in their process was reporting the data to the regional health offices from these documents. Each month, each clinic would spend 2-3 days doing nothing but tabulating the activity of the clinic in their reports—something I hadn’t seen for myself in my earlier, more limited, experiences.

My assumption that storage was the problem to solve died during that research. So, I pivoted the design of the piClinic app to focus on reporting (as well as the storage and retrieval necessary to support that) to reduce their monthly reporting time from days to minutes.

Continue reading “Documentation research requires more curiosity than money”

How to read survey data

As it gets closer to our (American) mid-term elections, we’re about to be inundated with surveys and polls. But, even between elections, surveys are everywhere, for better or worse.

To help filter the signal from the noise, here is a list of tips I’ve collected over the years for critically reading reports based on survey data.

If you’re a reader of survey data, use these tips to help you interpret the surveys you see in the future.

If you’re publishing survey data, be sure to consider these as well, especially if your readers have read this post.

To critically read survey data, you need to know:

  1. Who was surveyed and how
  2. What they were asked
  3. How the results are stated

Let’s look at each of these a bit more…
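To make the third point concrete, most survey results should come with a margin of error that depends on the sample size. As a rough sketch (illustrative numbers and function name are mine, using the standard normal approximation; none of this is from the original post), here’s how a proportion’s 95% margin of error is typically computed:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# A headline result of "62% agree" from 400 respondents:
moe = margin_of_error(0.62, 400)
print(f"62% \u00b1 {moe * 100:.1f} points")  # 62% ± 4.8 points
```

So “62% agree” from 400 respondents really means “somewhere around 57–67%,” which is why knowing who was surveyed, and how many, matters when you read how the results are stated.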

Continue reading “How to read survey data”

The first piClinic articles have gone live!

Life’s a beach!

Although I’ve been in the field conducting research for the past month (in places, such as depicted in the photo), I still managed to publish and “present” several research papers that have to do with the piClinic Console. More are still in the pipeline, so stay tuned…

In Using Independent Studies to Enhance Usability Assessment Skills in a Generalist Program, co-authored with Dr. Pam Estes Brewer, Associate Professor in my department, we talk about how we used the development of the piClinic Console as an independent-study project for one of her usability research students. Dr. Brewer presented this paper July 23 at the IEEE PROCOMM conference in Toronto. The short story is the project provided an excellent challenge for her student and her student provided vital usability research data that informed the design’s iterations throughout the year.  The paper also describes some of the other projects we’ve used to help develop future usability researchers.

Enriching Technical Communication Education: Collaborating Across Disciplines and Cultures to Develop the piClinic Console, was just presented in Milwaukee, WI at the 36th ACM International Conference on the Design of Communication. Instead of a personal appearance, I sent them this video to present in my stead. The paper details the design process and how it was applied to our technical communication curriculum, for example, as the usability research independent-study project described in the preceding paper. Other tech comm lessons the project has produced include some visual UI design, the production of the promotional video that appears on piclinic.org, and several projects for the computer engineering department. The video, on the other hand, provides some of the backstory behind the project.

For more interesting articles, see the complete list of my publications.

Yes, YOU can write docs as cool as Twilio’s

After attending Write the Docs 2018 (Portland edition) and watching a YouTube video, I’ve got it figured out: how you (i.e., anyone) can create documentation as cool as Twilio’s, a consistent entry in lists of the best API documentation examples.

All you need to do is follow these six steps:

  1. Think, plan, and work iteratively.
  2. Infuse user research and analytics into your entire writing process.
  3. Treat your documentation as a product.
  4. Do what’s best for your documentation customer.
  5. Create and support a writing process that enables contributions from outside of the writing team.
  6. Build the CMS that supports all of the above.

That’s it!

You’ll need to understand that it took them a few years to get to where they are today. And, it sounds like they’ll keep raising the bar.

But, don’t be discouraged. The sooner you start, the sooner those three to five years will be behind you.

A little background

Twilio’s Kat King opened Write the Docs 2018 in Portland this May, starting the conference off with a strong focus on user testing and research. That always gets my attention at a tech-writing conference. The next session, by Jen Lambourne, continued the theme by describing how she applied user research to documentation. I’ve been going to tech-writing conferences for quite a while, and I can’t recall one with such a user-research focus.

I’ll reference these videos in what follows, in case you want to go straight to the source (something I always encourage). The practices they describe, however, come straight out of user-centered design. You can also see some of my favorite books on the topic for more background.

What’s noteworthy here is that the Twilio team has applied them successfully to their documentation–proving, if nothing else, that documentation works like any other product (and perhaps works best when treated as one).

So, here’s how the steps work:

Continue reading “Yes, YOU can write docs as cool as Twilio’s”

Best practice…for you?

Last week, I saw a post on LinkedIn about a “new” finding (from 2012) that “New research shows correlation between difficult to read fonts and content recall.” First, kudos for not confusing correlation and causation (although the study was experimental and did demonstrate a causal relationship), but the source article is an example of inappropriate generalization. To the point of this post, it also underscores the context-sensitive nature of content and why similar advice and best practices should be tested in each specific context.

Hard to read, easy to recall?

The LinkedIn post refers to an article in the March 2012 issue of the Harvard Business Review. The HBR article starts out overgeneralizing by summarizing the finding of a small experiment as, “People recall what they’ve read better when it’s printed in smaller, less legible type.” This research was also picked up by Malcolm Gladwell’s David and Goliath, which has the effect of making it almost as true as the law of gravity.

Towards the end of the HBR article, the researcher tries to rein in the overgeneralizations by saying (emphasis mine), “Much of our research was done at a high-performing high school…It’s not clear how generalizable our findings are to low-performing schools or unmotivated students.” …or perhaps to people who are not even students? Again, kudos for trying. Further complicating the finding stated in the HBR article, the study’s findings have not been reliably replicated in subsequent studies, other populations, or larger groups. I’m not discounting the researcher’s efforts; in fact, I agree with his observation that the conclusions don’t seem to be generalizable beyond the experiment’s scope.

Context is a high-order bit

All this reinforces the notion that, when studying content and communication, context is a high-order bit1. As a high-order bit, ignoring it can have profound implications for the results. Any “best practice” or otherwise generalized advice should not be considered without its contexts: the context in which it was derived and the context into which it will be applied.

This also reinforces the need to design content for testing–and to then test and analyze it.



1. In binary numbers, the high-order bit influences the value more than all of the other lower-order bits put together; in a 4-bit number, for example, the high bit contributes 8, more than the 4 + 2 + 1 = 7 of the remaining bits combined.

In-flight reading

The collection of in-flight reading material found in the seat back of a recent Alaska Airlines flight

I’m working on a paper for the HCII 2015 conference and thought of the reading material I saw on a recent airline flight. The paper discusses ways to identify the goals of different types of information-focused web content and how to measure how well the content is accomplishing those goals, so now I see this everywhere I look.

This is what occurred to me while I was staring at this collection of literature for several hours.

The Alaska Airlines magazine

Its goal is to provide the reader with entertainment, er, I mean, provide them with advertising, using entertainment as the bait. So how would you measure success? Interaction with the content, maybe? Coupon codes entered, products ordered from the web links, and other interactions traceable to the magazine. Pretty straightforward. Content can be compared to the corresponding advertisements and reader feedback, and the publisher can decide what’s working and what needs work.

The airsick bag

This isn’t really literature, but it’s a good case of “if it’s not easy to use, it’s going to get messy.” I don’t think any amount of documentation could fix a poorly designed airsick bag.

The emergency procedures brochure

This is everyone’s favorite reading material, right? Its goal is to provide important information in a way that’s easy to understand (quickly, in the worst-case scenario). This is a document that Alaska Airlines (and its passengers) hope to never need, but when it’s needed, its value will be immeasurable. How do you track that goal? User feedback? Probably not. Survivor stories? Let’s hope not! Maybe usability testing?

The WiFi and the “Meals & Snacks” advertisements

Again, this is purely promotional material whose effectiveness can be tracked by sales. Like the magazine, this is not unfamiliar territory.

What’s this got to do with me?

As a writer of help content, I relate to the emergency procedures brochure. Sometimes I don’t think anyone reads my content, and, frequently, Google Analytics tends to agree with me. But I know that, in some cases, when those few people need to read it, it’s going to be the most important thing for them to read (if only for that moment). If I’ve done my job right, what I’ve written will save them time and money. I’ll never know that from looking at Google Analytics, but a usability test (even an informal or discount test) will tell me if a topic will be ready to save the day (or the moment) when it’s called upon to do so.

Back to the paper.

User research to the rescue!

As Hannibal of the A-Team would say, “I love it when a plan comes together!”

I responded to this post on Quora from someone asking for advice on how to improve their website. There was already lots of good first-person advice when I read it, so I suggested that they go to their target demographic and ask them what they thought.

Have you tried usability testing it with some people from your target market? E.g. go to a local university with this [home page image] on a tablet, or even just a photo on a clipboard, and ask people a few questions:

a) What does this page inspire you to do? Tell me more about that. (after each question)
b) Where would you click?
c) What would you expect to find after you clicked?
d) Would you sign up for this? Why/why not?
e) What does “verified” mean to you?
f) Thank you! Here’s a gift card for a coffee.

If you started after breakfast, I would imagine you’d have a much better sense of what you need to improve by lunch time.

(Read Quote of Bob Watson’s answer to What things can I improve on this home page? on Quora)

Later, I read this comment on the post.

Bob, I actually did usability testing today as you advised and got some real feedback from students. It really helped in understanding what would resonate with students.

I also want to do A/B testing to see which messaging converts better. Do you happen to know any resource (website)?

Thanks a lot for your feedback

Happy to help!
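On the A/B testing question raised in the comment above: one common way to check whether one message variant converts better than another is a two-proportion z-test. Here is a minimal sketch (the numbers and the function name are mine, for illustration; they are not from the post):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic comparing two conversion rates (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 48 sign-ups out of 500 views; Variant B: 70 out of 500
z = two_proportion_z(48, 500, 70, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at ~95% confidence
```

Many analytics and experimentation tools run this kind of test for you; the point is to collect enough traffic per variant that the observed difference clears the noise before declaring a winner.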