Documentation research requires more curiosity than money

Sure, money helps, but success doesn’t always correlate with dollars spent.

Here are a couple of examples that come to mind from my experience.

piClinic research

My favorite research success story (perhaps because it turned out well) occurred while I was researching the piClinic project. While on a medical mission to a rural clinic in Honduras, I saw a mountain of paper patient records full of seemingly valuable information that could never be tapped. Clearly (to me), computerizing those records would improve things: based on my first-hand experience, I felt that automating record storage would make the patient records much easier to store and retrieve.

It would, and later, it did.

But…

When I later sat down to interview the target users and watch what they did during the day and, more importantly, over the month, I learned that what I thought was their biggest obstacle, storage and retrieval, wasn't really a problem for them.

It turned out that the real time sink in their process was reporting the data from these documents to the regional health offices. Each month, each clinic would spend two to three days doing nothing but tabulating the clinic's activity for those reports, something I hadn't seen for myself in my earlier, more limited, experiences.

My assumption that storage was the problem to solve died during that research. So, I pivoted the design of the piClinic app to focus on reporting (as well as the storage and retrieval necessary to support that) to reduce their monthly reporting time from days to minutes.
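
To make concrete why that pivot could collapse days of tallying into minutes: once visits are captured as structured records, the monthly report reduces to a simple aggregation. Here's a minimal sketch of the idea in TypeScript (the fields, categories, and dates are hypothetical, and this is not the actual piClinic code):

```typescript
// Hypothetical illustration only; not the actual piClinic implementation.
// Once patient visits are stored as structured records, the monthly report
// becomes a quick aggregation instead of days of hand tallying.

interface Visit {
  date: string;      // ISO date of the visit, e.g. "2018-03-14"
  diagnosis: string; // reporting category used by the regional health office
}

// Count visits per reporting category for a given month ("YYYY-MM").
function monthlyReport(visits: Visit[], month: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const visit of visits) {
    if (visit.date.startsWith(month)) {
      counts.set(visit.diagnosis, (counts.get(visit.diagnosis) ?? 0) + 1);
    }
  }
  return counts;
}

// Example: one month's report, produced in milliseconds rather than days.
const report = monthlyReport(
  [
    { date: "2018-03-02", diagnosis: "respiratory infection" },
    { date: "2018-03-14", diagnosis: "prenatal care" },
    { date: "2018-03-21", diagnosis: "respiratory infection" },
  ],
  "2018-03"
);
console.log(report); // Map(2) { 'respiratory infection' => 2, 'prenatal care' => 1 }
```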

Had I not researched the clinic's use cases from a different perspective, I would have missed that important detail and would be telling a different story.

Cost to learn this (including airfare and lodging): about $1,500. Value: the success of the project.

The invisible table of contents

In tech writing, I’ve had similar epiphanies from working with users. My favorite was around the utility of the table of contents (ToC) in our doc set. We writers were debating its value and wondering whether we should instrument the ToC links to track their use. Our implicit assumption was that “clicks = use.” I was on board with this until I talked to a new user in an informal usability test; that is, I sat with him and watched for a while as he used the documentation.

About 10 minutes into the session, I noticed that he never clicked on the ToC. Instead, he used search and embedded links to navigate through the topics. I asked him to tell me about the ToC and whether it was useful. His response surprised me. He told me that he used the ToC as a reference to help him understand the relationships between the topics, but never really used it to navigate.

In this case, that aspect of the ToC’s utility couldn’t be measured in “clicks.” Based on this, we saved ourselves some frustration and gave up on instrumenting the ToC: it would have been a lot of work to add that ability, and it wouldn’t have measured what was important to the users.
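
For context, the instrumentation we were debating was nothing more exotic than a click handler on each ToC link. A rough sketch of the idea in TypeScript (the selector and analytics endpoint are invented for illustration) shows why it would have missed what mattered:

```typescript
// Hypothetical sketch of the ToC click tracking we considered (and dropped).
// The "#toc a" selector and the /analytics/toc-click endpoint are made up
// for illustration; they don't refer to a real doc site.
document.querySelectorAll<HTMLAnchorElement>("#toc a").forEach((link) => {
  link.addEventListener("click", () => {
    // Fire-and-forget beacon so the navigation isn't slowed down.
    navigator.sendBeacon(
      "/analytics/toc-click",
      JSON.stringify({ href: link.href, when: Date.now() })
    );
  });
});
// The catch: this counts only navigation clicks. A reader who scans the ToC
// to understand how the topics relate, but never clicks it, is invisible here.
```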

Cost to learn this: 60 minutes of time with a new engineer. Value: the time we saved by not building the ability to do something that, at best, wouldn’t have helped us and, at worst, might have led us astray.

What these have in common

While these were two different situations with two different audiences, the research method applied in both was low-cost and accessible: listening to, and watching, the users.

A good test is worth a thousand expert opinions

Wernher von Braun

How can you run a good test on the cheap?

It really boils down to being able to abandon your assumptions and presumptions when presented with information and observations to the contrary, and to actively seeking out that information and those observations.

But it’s not statistically significant!

In the piClinic example, I interviewed about 10 people in five different clinics. In the table of contents example, we interviewed only one person. Were these representative samples of the entire population? Not statistically, no. Did it matter? Not really.

These few samples showed us a view of the problem that we hadn’t seen before. It took only one or two examples to reveal an angle we hadn’t considered, and it didn’t matter that they weren’t statistically representative. From the perspective those examples provided, we could direct the research in a new way, and we validated the new assumptions as we went to make sure we weren’t going astray in another direction.

It’s not about proving that your assumptions are correct.

It’s about seeing whether your assumptions hold up to the customer’s scrutiny. Trust me, setting out only to prove that your assumptions are correct is a slippery slope that can be hard to avoid. You always have to be ready to kill your darling [idea]s in service to your reader or customer.

You’ll also need to work testing into your process—its value accumulates with repetition. Asking the right questions is difficult, so be ready to practice it.
