Tips for conducting documentation research on the cheap

In my previous post, I presented some experiences with testing and the resulting epiphanies. In this post, I talk more about the process I applied.

The process is simple, yet that’s what makes it difficult. The key to success is to take it slow.

The question

Start with something simple (and then simplify it). Your first questions will invariably be too big to answer all at once, so think, “baby steps.”

Instead of asking, “How can we improve our documents?” I asked, “What do users think of our table of contents (ToC)?” Most users don’t care about how we can improve our docs, unless they’re annoyingly bad, so they don’t give it much thought. They do use the ToC, as we found out, just not in a way that we could count.

The sample

Whoever you can get to sit with you. Try to recruit people who are close to your target audience if you can, but when it comes to learning things that will help you answer your question, anyone who isn’t you or in your group is better than you.

The process

Listen with a curious mind. After coming up with an answerable question, this is the next hardest thing to do—especially if people are reviewing something that you had a hand in writing or making.

Your participants will invariably misinterpret things and miss the “obvious.” You’ll need to suffer through this without [too much] prompting or cringing. Just remind yourself those moments are where the learning and discovery happen (after the injuries to egos and knees heal, anyway).

When the participant asks for help, such as, “Where’s the button or link to do ‘X’?”, a trick I learned from more experienced usability testers is to ask them, “Where do you think it should be?” That way, you learn something about the user experience rather than just finishing the task without learning anything. If they’re still stumped, you can help them along, but only after you’ve learned something. Remember, you’re there to learn.

Continue reading “Tips for conducting documentation research on the cheap”

Documentation research requires more curiosity than money

Sure, money helps, but success doesn’t always correlate with dollars spent.

Here are a couple of examples that come to mind from my experience.

piClinic research

My favorite research success story (perhaps because it turned out well) occurred while I was researching the piClinic project. While on a medical mission to a rural clinic in Honduras, I saw a mountain of paper patient records with a lot of seemingly valuable information in them that could never be tapped. Clearly (to me) computerizing those records would improve things. I felt that, based on my first-hand experience, automating record storage would make it easier to store and retrieve the patient records.

It would, and later, it did.

But…

When I later actually sat down and interviewed the target users and watched what they did during the day and, more importantly, during the month, I learned that what I thought was their biggest obstacle, storage and retrieval, was not really a problem for them.

It turned out that the real time-consumer in their process was reporting the data from these documents to the regional health offices. Each month, each clinic would spend 2-3 days doing nothing but tabulating the activity of the clinic in their reports—something I hadn’t seen for myself in my earlier, more limited, experiences.

My assumption that storage was the problem to solve died during that research. So, I pivoted the design of the piClinic app to focus on reporting (as well as the storage and retrieval necessary to support that) to reduce their monthly reporting time from days to minutes.

Continue reading “Documentation research requires more curiosity than money”

Proving and defending the value of technical writing, again

A red compact car with no tires or wheels propped up on bricks.

A couple of weeks ago, I responded to this post on LinkedIn in which Nick, the original poster, asked, as so many technical writers before him:

Does anyone have data from their industry, demonstrating why it’s important to have good documentation? I’m struggling to convince (some) product managers why we need to invest in this.
Thanks in advance!

Nick received lots of well-intentioned suggestions that could provide data and reason to support a response to the product manager. And then, I replied:

That’s not how documentation works.
Good documentation is what customers expect. Not having good docs, however, will cost you.
Maybe say, “let’s take the docs offline for a week and see what happens?” At the end of the week, you’ll have the data you need.

While my reply contains a dash of snark, it’s really the only way I could think of at the moment to shock the discussion back to something productive.

This type of “prove your worth to me” question isn’t really looking for data. It’s usually more to establish some sort of dominance or just to pick a fight (however politely). In the worst-case scenario, they’re looking for positions (other than theirs) to cut.

I find this question to be annoying, not just because I’ve been hearing this for decades, but because it presumes that documentation doesn’t have any worth until you prove it. The same question could be asked of the product manager: What data is there to demonstrate why we need good product management?

So, can we please move past the “why are you even here?” challenge? Can we assume, for the moment at least, that we’re all professionals and we’re all here to deliver the best value to the customer for the company?

Continue reading “Proving and defending the value of technical writing, again”

If it’s not statistically significant, is it useful?

A compressed view of traffic in downtown Seattle with cars, buses, and pedestrians from 1975

In all the product documentation projects I’ve worked on, a good feedback response rate to our help content has been about 3-4 binary (yes/no) feedback responses per 10,000 page views. That’s 0.03% to 0.04% of page views. A typical response rate has often been more like half of that. Written feedback has typically been about 1/10 of that. A frequent complaint about such data is that it’s not statistically significant or that it’s not representative.
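To make those rates concrete, here is a quick, back-of-the-envelope calculation in Python. The counts are illustrative, chosen only to match the rates described above, not actual data from any project:

```python
# Back-of-the-envelope feedback rates (illustrative numbers, not real data).
page_views = 10_000

binary_responses = 3                        # yes/no clicks, at the "good" end of the range
written_responses = binary_responses / 10   # written feedback is roughly 1/10 of that

print(f"Binary response rate:  {binary_responses / page_views:.2%}")   # 0.03%
print(f"Written response rate: {written_responses / page_views:.3%}")  # 0.003%
```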

That might be true, but is it useful for decision making?

Time for a short story

Imagine someone standing on a busy street corner. They’re waiting for the light to change to cross the street. It’s taking forever and they’re losing patience. They decide to cross. The person next to them sees that they’re about to cross, taps them on the shoulder, and says, “the light’s still red and the traffic hasn’t stopped.” Our impatient pedestrian points out, “that’s just one person’s opinion,” and charges into the crossing traffic.

Our pedestrian was right. There were hundreds of other people who said nothing. Why would anyone listen to just that one voice? If this information were so important, wouldn’t others, perhaps even a representative sample of the population, have said something?

Not necessarily. The rest of the crowd probably didn’t give it any thought. They had other things on their mind at the time and, if they had given it any thought at all, they likely didn’t think anyone would even consider the idea of crossing against the traffic. The crossing traffic was obvious to everyone but our impatient pedestrian.

Our poor pedestrian was lucky that even one person thought to tell them about the traffic. Was that one piece of information representative of the population? We can’t know that from this story. Could it have been useful? Clearly.

Such is the case when you’re looking at sparse customer feedback, such as you likely get from your product documentation or support site.

A self-selected sample of 0.03% is likely to be quite biased and not representative of all the readers (the population).

What you should consider, however, is: does it matter if the data is representative of the population? Representative or not, it’s still data—it’s literally the voice of the customer.

Let’s take a closer look at it before we dismiss it.

Understanding the limits of your data

Let’s consider what that one person at the corner or that 0.03% of the page views tell us.

  • They don’t tell us what the population thinks. Because such sparse data isn’t statistically representative, we can’t generalize from it to make assumptions about the entire population.
  • They do tell us what they think. We might not know what the population thinks, but we know what that 0.03% thinks.

The key to working with data is to not go beyond its limits. We know that this sparse data tells us what 0.03% of the readers thought, so what can we do with that?

Continue reading “If it’s not statistically significant, is it useful?”

You’ve tamed your analytics! Now what?

In my last post, I talked about “How you can make sense of your site analytics.” But once you make sense of them, what can you do with them?

Let’s say that you’ve applied that method and you can now tell the information from the noise. What’s next?

The goal of the method presented in the last post is mostly to separate the information from the noise so you can make information-based decisions as opposed to noise-based decisions.
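If you don’t have that post handy, here is a minimal sketch of what such a separation might look like in practice. It is not the exact method from that post; it substitutes a simple rolling-median baseline and flags days that deviate strongly from it, and the function name and threshold are my own illustration:

```python
# A rough sketch of separating "information" from "noise" in daily page views.
# This stands in for the method from the earlier post, which may differ.
import pandas as pd

def flag_interesting_days(page_views: pd.Series,
                          window: int = 28,
                          threshold: float = 2.0) -> pd.DataFrame:
    """page_views: daily page-view counts indexed by date."""
    # The slow-moving baseline: roughly what the site does while you ignore it.
    baseline = page_views.rolling(window, center=True, min_periods=7).median()
    # Typical day-to-day scatter around that baseline.
    spread = (page_views - baseline).rolling(window, center=True, min_periods=7).std()
    deviation = (page_views - baseline) / spread
    return pd.DataFrame({
        "views": page_views,
        "baseline": baseline,
        "deviation": deviation,
        "interesting": deviation.abs() > threshold,  # candidate "information" days
    })

# Hypothetical usage:
# daily = pd.read_csv("page_views.csv", parse_dates=["date"], index_col="date")["views"]
# report = flag_interesting_days(daily)
# print(report[report["interesting"]])
```

Days that clear the threshold are the ones worth a closer look; everything else is the background hum of visitors who show up for their own reasons.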

There are a couple of things you’re ready to do.

  • Reduce the noise
  • Improve the signal

They’re not mutually exclusive, but you might find it easier to pick one at a time to work on.

Let’s talk about the noise, first.

Why is it noisy?

Recall this graph of my site’s 2020 page views.

Graph of DocsByDesign.com website traffic for 2020 showing a lot of variation.
DocsByDesign.com website traffic for 2020

During 2020, I only made one post about how I migrated my site to a self-hosted AWS server. Not a particularly compelling article, but it’s what I had to say at the time—and apparently all I really had to say for 2020.

Based on that, this is a graph of the traffic my site sees during a year when I ignore it. It’s a graph of the people who visit my site for whatever reason—and therein lies the noise. People, or at least the people who visited my site in 2020, visited for all kinds of reasons, none of which had anything to do with my tending to the site.

Let’s see if we can guess who these visitors might be. Here’s a table of my site’s ten most visited pages during 2020.

Continue reading “You’ve tamed your analytics! Now what?”

How you can make sense of your site analytics

If you’ve watched any of your website’s analytics, such as page views or unique visitors, you’ve probably seen something like this chart and wondered, what does that even mean?

Graph of DocsByDesign.com website traffic for 2020 showing a lot of variation.
DocsByDesign.com website traffic for 2020

I know that I have, and I studied this kind of stuff for my Ph.D. All this wiggly-squiggly! What’s going on?

I’ve seen this type of graph just about any time I’ve plotted website data for just about any developer doc site I’ve worked on, and I’ve wondered (and had management ask me), does this show anything we should be concerned about? For the longest time, I’ve always answered with a shrug of some sort.

But now, I think there might be a way to make sense of this data.

Continue reading “How you can make sense of your site analytics”

Documentation metrics, again?!

A digital voltmeter being prepared to measure a web page.
If only this was all you needed to measure a web site.

To get back into the blogging habit, I thought I’d rummage through some of my earlier posts to see if there might be something I could recycle. (How does the saying go? Good designers copy, great designers steal?) Because the topic of documentation metrics came up recently at work, I thought I’d start there and see if I had said anything about that before.

It turns out, I’d written a post or two on the subject, so let the theft, er, recycling commence!

In “The answer is Google Analytics—what was the question?” I talk about how the web interaction model that Google Analytics (GA) is optimized for comes up short for a lot of user assistance content. True confession: GA is wired up to this site, and in “Google Analytics just makes me sad,” I followed up with a summary of how that works for me. Hint: the title sort of gives that away.

The premise of conflicting models came from a paper that my Ph.D. advisor and I wrote about the challenges of collecting useful data about non-funnel-oriented web content. I posted a blogified version of the published paper in the series on Readers’ goals. Basically, if the reader’s reason for reading an article is to accomplish a goal that’s outside of the documentation, it’ll be difficult to measure, from inside the documentation, how the documentation helped with that goal.

I was on a roll four years ago, because I continued with Measuring your technical content – Part 1, Part 2, and Part 3. Part 1 talks about asking the questions you want your analytics and analysis to answer and the instrumentation that can help make that happen. Parts 2 & 3 bring this exercise into sharper focus by trying it out on getting-started topics and tutorials, respectively. The thread wraps up by addressing some of the challenges that you might encounter along the way in Measuring your technical content – What about…?

Continue reading “Documentation metrics, again?!”

How to read survey data

As we get closer to our (American) mid-term elections, we’re about to be inundated with surveys and polls. But, even between elections, surveys are everywhere, for better or worse.

To help filter the signals from the noise, here is a list of tips I’ve collected over the years for critically reading reports based on survey data.

If you’re a reader of survey data, use these tips to help you interpret survey data you see in the future.

If you’re publishing survey data, be sure to consider these as well, especially if your readers have read this post.

To critically read survey data, you need to know:

  1. Who was surveyed and how
  2. What they were asked
  3. How the results are stated

Let’s look at each of these a bit more…

Continue reading “How to read survey data”

Collecting feedback about your documentation

KC-135R Engine Instruments showing various indications during testing
Lots o’ data

In my thread on user interactions with documentation, I suggested that you want your measurement instrument, usually some form of survey question(s), to influence the experience you’re trying to measure as little as possible. From a measurement standpoint, that’s nothing new. You never want the act of measuring to influence what you’re measuring, because that contaminates the measurement.

In the case of measuring feedback (here, documentation or experience feedback), a recent tweet by Nate Silver (@NateSilver538) described how the use and perceived use of the measurement system had contaminated the data collected by Uber/Lyft (and countless other services). He tweeted, “Given that an equilibrium has emerged where any rating lower than 5 stars means your Uber/Lyft driver was bad, they should probably just replace the 5-star scale with a simple thumbs-up/thumbs-down.”
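To illustrate the tweet’s point (this is my own hypothetical example, not anything from the original post), here is what collapsing made-up 5-star ratings into the binary signal they effectively carry might look like:

```python
# Hypothetical illustration: if anything under 5 stars effectively means "bad,"
# the 5-star scale carries no more information than a thumbs-up/thumbs-down.
from collections import Counter

star_ratings = [5, 5, 4, 5, 3, 5, 5, 1, 5, 4]  # made-up ratings, not real data

thumbs = Counter("up" if rating == 5 else "down" for rating in star_ratings)
print(thumbs)  # Counter({'up': 6, 'down': 4})
```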

Continue reading “Collecting feedback about your documentation”

The right tool to measure your content’s performance

Over the past few days, I read a couple of articles on content metrics from the blogosphere: one had promise but ultimately indulged in some analytic sleight-of-hand, while the other actually made me smile, and its focus on a solid methodology gave me hope.

Why is a solid methodology important? It’s the basis of your reputation and credibility. It’s the difference between knowing and guessing. These two articles illustrate both sides of that difference.

First, the one that had a sound approach, but some flawed measurement methods.

How metrics help us measure Help Center effectiveness

This article has promise, but trips and falls before the finish line. To its credit, it recommends:

  1. Setting goals and asking questions
  2. Collecting data
  3. Reviewing the data
  4. Reviewing goals and going back to #2
  5. Lather, rinse, repeat (as the shampoo suggests)

As a general outline, this is as good as they come, but the devil is in the details. If you’re in a hurry, just skip to the end or you can…

Continue reading “The right tool to measure your content’s performance”