In all the product documentation projects I’ve worked on, a good response rate for feedback on our help content has been about 3-4 binary (yes/no) responses per 10,000 page views. That’s 0.03% to 0.04% of page views. A typical response rate has often been closer to half of that, and written feedback has typically been about 1/10 of that. A frequent complaint about such data is that it’s not statistically significant or that it’s not representative.
That might be true, but is it useful for decision making?
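For scale, here’s that arithmetic spelled out as a quick sketch. The numbers are the illustrative rates above, not measurements from any particular project:

```python
# Rough feedback-rate arithmetic using the rates described above.
page_views = 10_000

binary_responses = 4          # roughly 3-4 yes/no responses per 10,000 views
print(f"Binary feedback rate: {binary_responses / page_views:.2%}")    # 0.04%

written_responses = binary_responses / 10   # written feedback: ~1/10 of binary
print(f"Written feedback rate: {written_responses / page_views:.3%}")  # 0.004%
```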
Time for a short story
Imagine someone standing on a busy street corner. They’re waiting for the light to change to cross the street. It’s taking forever and they’re losing patience. They decide to cross. The person next to them sees that they’re about to cross, taps them on the shoulder, and says, “The light’s still red and the traffic hasn’t stopped.” Our impatient pedestrian points out, “That’s just one person’s opinion,” and charges into the crossing traffic.
Our pedestrian was right. There were hundreds of other people who said nothing. Why would anyone listen to just that one voice? If this information were so important, wouldn’t others, perhaps even a representative sample of the population, have said something?
Not necessarily. The rest of the crowd probably didn’t give it any thought. They had other things on their mind at the time and, if they had given it any thought at all, they likely didn’t think anyone would even consider the idea of crossing against the traffic. The crossing traffic was obvious to everyone but our impatient pedestrian.
Our poor pedestrian was lucky that even one person thought to tell them about the traffic. Was that one piece of information representative of the population? We can’t know that from this story. Could it have been useful? Clearly.
Such is the case when you’re looking at sparse customer feedback, such as you likely get from your product documentation or support site.
A self-selected sample of 0.03% is likely to be quite biased and not representative of all the readers (the population).
What you should consider, however, is: does it matter if the data is representative of the population? Representative or not, it’s still data—it’s literally the voice of the customer.
Let’s take a closer look at it before we dismiss it.
Understanding the limits of your data
Let’s consider what that one person at the corner, or that 0.03% of page views, can tell us.
- They don’t tell us what the population thinks. Because the sample isn’t statistically representative, we can’t generalize from such sparse data to the entire population.
- They do tell us what they think. We might not know what the population thinks, but we do know what that 0.03% thinks.
The key to working with data is to not go beyond its limits. We know that this sparse data tells us what 0.03% of the readers thought, so what can we do with that?
What can sparse data tell us?
A few scattered feedback responses might not tell us that the content is working or how great it is, but they can definitely tell us when something is broken.
Usability studies generally sample an even smaller share of the population (typically 5-7 people, not percent) and, somehow, they’re very useful to product development. They’re useful because they’re good at identifying what’s broken. A usability study with 5-7 participants can’t tell you whether the product will be a hit with the market, but it can identify the problems most people are likely to run into if those problems are left in the product. Those are exactly the problems you want to find before the product gets to the customer.
Back to our sparse customer feedback. Can that feedback tell us that the content is working? Not statistically, no. Can it tell us that something is broken? Definitely.
Putting sparse customer feedback to work for you
If we accept that what this sparse data tells us most reliably is where the problems are, we can focus on what the data does best: telling us where to look further. From there, it’s up to us to investigate.
Investigating sparse feedback could follow a process such as the following (sketched as code after the list):
- Do you see the reported issue? Can you replicate it?
- Can you understand why it’s a problem?
- Under what assumptions or conditions does the reported problem occur?
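To make that process concrete, here’s a minimal sketch of those three questions as a triage record. The class, field names, and sample feedback are all hypothetical; this is one way to track an investigation, not a prescribed tool:

```python
# A hypothetical triage record for one piece of feedback.
from dataclasses import dataclass, field

@dataclass
class FeedbackTriage:
    feedback_text: str
    replicated: bool = False     # Do you see the reported issue yourself?
    understood: bool = False     # Can you explain why it's a problem?
    conditions: list[str] = field(default_factory=list)  # When does it occur?

    def ready_for_decision(self) -> bool:
        """The investigation is done when you can replicate the issue,
        explain it, and describe the conditions that trigger it."""
        return self.replicated and self.understood and bool(self.conditions)

# Hypothetical example: a reader reports a broken code sample.
item = FeedbackTriage("The sample in step 3 fails on my machine")
item.replicated = True
item.understood = True
item.conditions = ["Windows paths with backslashes"]
print(item.ready_for_decision())  # True: time to decide whether to fix it
```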
Once you understand the observation in the feedback, you need to decide whether it’s a problem you want to fix. That’s a separate decision. It depends on factors outside of the reported problem and considers aspects such as the following (a rough scoring sketch appears after the list):
- What’s the impact of this problem? How many people could it affect? How expensive is it to the customer?
- What’s the cost of fixing it? Is it an easy fix? Is it part of a systemic issue? How expensive is it to you?
- What are the competing priorities to consider? What’s the opportunity cost of fixing this? What’s the opportunity cost of not fixing it?
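If it helps to weigh those factors side by side, here’s a deliberately simple scoring sketch. The 1-5 scales and the formula are invented for illustration; real prioritization also has to fold in opportunity cost and competing work:

```python
# A hypothetical fix-priority score: impact and customer cost vs. fix cost.
# The 1-5 scales and the formula are invented for illustration only.
def fix_priority(impact: int, customer_cost: int, fix_cost: int) -> float:
    """Higher scores suggest fixing sooner. All inputs are on a 1-5 scale."""
    return (impact * customer_cost) / fix_cost

# A broken code sample that blocks many readers and is cheap to fix:
print(fix_priority(impact=5, customer_cost=4, fix_cost=1))  # 20.0
# A cosmetic typo on a rarely visited page:
print(fix_priority(impact=1, customer_cost=1, fix_cost=2))  # 0.5
```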
The elephant in the room
This is fine, you say, but if the documentation feedback doesn’t tell me what the population thinks, how do I know if my documentation is doing what it should for the audience? After all, that’s why I’m collecting feedback!
The bad news is that you’re not likely to learn whether the product documentation is working, for you or for the customer, from feedback on the product documentation. Most product documentation exists to support the customer experience of the product. Feedback on the documentation might be influenced by what people don’t like about the product, but it won’t tell you what they like about the product (or the documentation) with much reliability.
As a consumer, I use the product help to get me out of trouble. If I’m using the product and not referring to help, the product is working and I’m probably satisfied. If I encounter a problem with the product, the faster I find the solution, the better my experience—with the product.
As a technical writer, my job is to write product documentation that will be there for customers when they need it and that will help get them back to being delighted with the product. The sooner the documentation I write gets customers back to enjoying the product, the better their experience, and the less likely they are to provide feedback on that documentation.
That’s a good thing.