Tips for conducting documentation research on the cheap

In my previous post, I presented some experiences with testing and the resulting epiphanies. In this post, I talk more about the process I applied.

The process is simple, yet that’s what makes it difficult. The key to success is to take it slow.

The question

Start with something simple (and then simplify it). Your first questions will invariably be too big to answer all at once, so think, “baby steps.”

Instead of asking, “How can we improve our documents?” I asked, “What do users think of our table of contents (ToC)?” Most users don’t care about how we can improve our docs, unless they’re annoyingly bad, so they don’t give it much thought. They do use the ToC, as we found out, but not in a way that we could have counted.

The sample

Whoever you can get to sit with you. Try to ask people who are close to your target audience, if you can, but anyone who is not you, or on your team, is better than you at helping you learn things that will answer your question.

The process

Listen with a curious mind. After coming up with an answerable question, this is the next hardest thing to do—especially if people are reviewing something that you had a hand in writing or making.

Your participants will invariably misinterpret things and miss the “obvious.” You’ll need to suffer through this without [too much] prompting or cringing. Just remind yourself those moments are where the learning and discovery happen (after the injuries to egos and knees heal, anyway).

When a participant asks for help, such as, “Where’s the button or link to do ‘X’?” a trick I learned from more experienced usability testers is to reply, “Where do you think it should be?” That way, you learn something about the user experience rather than just finishing the task without learning anything. If they’re still stumped, you can help them along, but only after you’ve learned something. Remember, you’re there to learn.

Continue reading “Tips for conducting documentation research on the cheap”

Documentation research requires more curiosity than money

Sure, money helps, but success doesn’t always correlate with dollars spent.

Here are a couple of examples that come to mind from my experience.

piClinic research

My favorite research success story (perhaps because it turned out well) occurred while I was researching the piClinic project. While on a medical mission to a rural clinic in Honduras, I saw a mountain of paper patient records with a lot of seemingly valuable information in them that could never be tapped. Clearly (to me), computerizing those records would improve things. I felt that, based on my first-hand experience, automating record storage would make it easier to store and retrieve the patient records.

It would, and later, it did.

But…

When I later actually sat down and interviewed the target users and watched what they did during the day and, more importantly, during the month, I learned that what I thought was their biggest obstacle, storage and retrieval, was not really a problem for them.

It turned out that the real time sink in their process was reporting the data from these documents to the regional health offices. Each month, each clinic would spend 2-3 days doing nothing but tabulating the clinic’s activity for its reports—something I hadn’t seen for myself in my earlier, more limited, experiences.

My assumption that storage was the problem to solve died during that research. So, I pivoted the design of the piClinic app to focus on reporting (as well as the storage and retrieval necessary to support that) to reduce their monthly reporting time from days to minutes.

Continue reading “Documentation research requires more curiosity than money”

I love it when things just work

[Image: Bob Watson piloting a light plane on a sunny day as it approaches the runway to land]

The image is a still frame from a video I pulled out of my archive to edit, and an example of things just working–I’m on final approach to a silky touchdown at Orcas Island airport.

In user experience parlance, they call that customer delight. I recently had some experiences as a customer that delighted me. It was amazing!

I hope that my readers get to experience similar delight when they read my docs. Let’s unpack these recent delights to see how they might help improve my writing.

The experiences

It really started with a recent disappointing purchase experience, but first some back story.

About 20 years ago, I used to edit videos, among other things. Back then, computers took a lot of tuning (i.e., money) to meet the processing demands of video editing and effects. After several software and hardware iterations, I finally had the industry-standard software running on a computer that could keep up with the challenge of video editing.

With that, I could finally focus on the creative and productive side of editing without having to fuss with the computer all the time. It’s not that I minded fussing with the computer–after all, that’s what I had been doing all along to get to this state of functionality and reliability. Rather, I didn’t like fussing with it when I had other things that I wanted to accomplish.

It was truly a delight to be able to focus on the creative and productive aspects of the job. Having reliable tools made it possible to achieve flow. If you’ve ever achieved that state, you know what I mean. If not, read Finding Flow: The Psychology of Engagement with Everyday Life by Mihaly Csikszentmihalyi.

Fast forward to this past week.

I finally upgraded my very-consumer-y video editor (Pinnacle Studio) to edit some home videos. I’d used an earlier version a few years back and I recall it having a pretty low learning curve for what I wanted to do. But my version was getting stale, and they were having a sale, so…

I paid my money, got my download, and was ready for the delight to begin!

Not so fast. There would be no delight today.

Continue reading “I love it when things just work”

The documentation cliff

For the past couple of months, I’ve been refactoring the piClinic Console software to get it ready for this summer’s field tests. Along the way, I encountered something I’d seen before but never really named until recently.

The documentation cliff.

A documentation cliff is where you get used to a certain level of documentation quality and support as you embark on your customer journey with a new API and then, somewhere along the way, you realize that level of support has disappeared. And there you are, like Wile E. Coyote, floating in midair, looking back at the cliff and looking down at where you are about to fall in the next instant.

Just kidding. What really happens is that you realize that your earlier plans and schedule have just flown out the window and you need to refactor the remainder of your development plan. At the very least, it means you’re going to have some uncomfortable conversations with stakeholders. In the worst-case scenario, you might need to re-evaluate the product design (and then have some uncomfortable conversations).

Most recently, this happened to me while I was using Postman to build unit tests for the piClinic Console software. I don’t want this to sound like I don’t like Postman–quite the contrary. I love it. But that just makes the fall from the cliff hurt that much more.

How I got to the cliff

In my case, the tool was easy to get started with, the examples and tutorials were great, and the online information was helpful–all the things that make for a very productive onboarding experience. So, I onboarded myself and integrated the product into my testing. In fact, I made it the centerpiece of my testing.
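
To make “unit tests in Postman” concrete, here’s a minimal sketch of the kind of test script you attach to a request in Postman’s scripting tab. The endpoint and the patientId field are hypothetical placeholders, not the actual piClinic Console API; only the pm.* calls are Postman’s real scripting interface.

    // Postman test script (JavaScript), attached to a GET request
    // against a hypothetical endpoint such as {{baseUrl}}/patient/123.
    pm.test("Response status is 200 OK", function () {
        pm.response.to.have.status(200);
    });

    pm.test("Response returns the requested record", function () {
        var body = pm.response.json();
        // 'patientId' is a hypothetical field name, not the real piClinic schema.
        pm.expect(body.patientId).to.eql(123);
    });

Chain scripts like these across a collection, run them with the collection runner, and it’s easy to see how a tool like this becomes the centerpiece of a test suite–and why a drop-off in its documentation hurts so much later on.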

Continue reading “The documentation cliff”

Recent good, could-be-better, and ugly documentation experiences

During the “break” between semesters, I try to catch up on the tasks I’ve deferred during the past semester and get ahead of the tasks I know will come up during the coming semester. In the process, I’ve had some encounters with documentation that I’ve sorted into these categories:

  • GOOD: documentation doing what it should—making me (the customer) successful.
  • COULD-BE-BETTER: documentation that is well-intentioned, but needs a tweak or two.
  • UGLY: documentation that gives me nightmares.

Here goes.

Good documentation

These are the stories of documentation that made me successful. Being successful makes me happy. Good documentation should help make the reader successful.

Ford Motor Company: F-150 owner’s manual

The windshield wipers on my almost-3-year-old truck are the ones that it came with from the factory—almost 3 years ago. Well past their expiration (and effectiveness) date. But, while I’ve changed car engines and gearboxes before, I hate changing wipers. They always have some clever (and obscure) trick to getting the old ones off and the new ones on. I especially hate it when they go flying off the car in a rainstorm. So, for all these reasons, I’ve procrastinated on changing them for far too long (as driving in recent wet weather has reminded me)—the replacements have actually been in my garage since…I actually can’t remember.

Continue reading “Recent good, could-be-better, and ugly documentation experiences”

Unqualified best practices are just slogans

I had the pleasure of joining Tom Johnson on another podcast, and one of the topics we touched on was that of so-called best practices. Today, I stumbled across this post in a thread about high-tech job interviews:

On a personal note, it was actually a series of such experiences that convinced me to take my current job in academia.

One of the replies linked to this post: “Best practices considered harmfull” [sic], which summed it up as “Work out what your best practice is, work out how you can improve yourself.”

Unfortunately, by the time I got to this point in my feed, my blog reflex had been triggered, and here we are.

If we have to find our own best practices, what’s the point of having best practices?

Good question.

Continue reading “Unqualified best practices are just slogans”

Best practice…for you?

Last week, I saw a post on LinkedIn about a “new” finding (from 2012) that “New research shows correlation between difficult to read fonts and content recall.” First, kudos for not confusing correlation and causation (although the study was experimental and did demonstrate a causal relationship), but the source article is an example of inappropriate generalization. To the point of this post, it also underscores the context-sensitive nature of content and how similar advice and best practices should be tested in each specific context.

Hard to read, easy to recall?

The LinkedIn post refers to an article in the March 2012 issue of the Harvard Business Review. The HBR article starts out overgeneralizing by summarizing the finding of a small experiment as, “People recall what they’ve read better when it’s printed in smaller, less legible type.” This research was also picked up by Malcolm Gladwell’s David and Goliath, which has the effect of making it almost as true as the law of gravity.

Towards the end of the HBR article, the researcher tries to rein in the overgeneralizations by saying (emphasis mine), “Much of our research was done at a high-performing high school…It’s not clear how generalizable our findings are to low-performing schools or unmotivated students.” …or perhaps people who are not even students? Again, kudos for trying. Further complicating the finding stated by the HBR article is that the study’s findings have not been reliably replicated in subsequent studies, with other populations, or with larger groups. I’m not discounting the researcher’s efforts; in fact, I agree with his observation that the conclusions don’t seem to be generalizable beyond the experiment’s scope.

Context is a high-order bit

All this reinforces the notion that when studying content and communication, context is a high-order bit[1]. Because it is a high-order bit, ignoring it can have profound effects on the results. Any “best practice” or otherwise generalized advice should not be considered without including its contexts: the context in which it was derived and the context into which it will be applied.

This also reinforces the need to design content for testing–and to then test and analyze it.



1. In binary numbers, the high-order bit influences the result more than all of the lower-order bits put together. For example, in the 4-bit value 1000 (decimal 8), the high-order bit alone outweighs 0111 (decimal 7), the sum of every lower-order bit.

Studies show…

In the quest to do more with less, one method I’ve seen used to get the job done more quickly is to rely on best practices and studies. It’s not that referring to best practices or studies is bad, but they should be the starting point for decisions, not the final word. Why?

Your context is unique

This was made obvious in my dissertation study, in which the effect of applying (or not applying) best practices depended on what was being measured (i.e., what mattered). In the API reference topics I studied, using headings and design elements suggested by the prevailing best practices made no difference in reading performance, but it made a significant difference in how readers perceived the topics.

Those results applied to the context of my experiment and they might apply to other, similar contexts, but you’d have to test them to know for sure. Does it matter? You tell me. That, too, depends on the context.

A study showed…

A report on the effect that design variations had on a news site’s home page came out recently, showing that a modern interface had better engagement than a more traditional, image+text interface. However, readers of the latter interface had better comprehension of the articles presented.

Since it relates design, comprehension, and engagement, I thought it was quite interesting. I skimmed the actual study, which seemed reasonable. I’m preparing myself, however, for what the provocative nature of the headline in the blog article is likely to produce: the time when it will be used in the inevitable “studies show…” argument. It has all the components of great “studies show” ammo: it refers to modern design (timely), has mixed results (so you can quote the result that suits the argument), and has a catchy headline (so you don’t need to read the article or the report).

Remember, your context is unique

Starting from other studies and “best practices” is great. But, because your context is invariably unique, you’ll still need to test and validate.

When it comes to best practices and applying other studies, trust but verify.