Measuring the value of technical writing

This topic has been discussed in technical writing circles for decades. It’s not so much a discussion as it is folklore: good advice that’s difficult to apply in practice, and it has been around for a long time. A 1995 article by Ginny Redish, “Adding value as a professional technical communicator,”[1] lists these ways to measure value:

  1. Outcome measures
  2. Ratings of customer satisfaction
  3. Projections (estimates) of value added
  4. General perceptions of the value of technical communicators’ work

I like how she starts with measuring the value added, because if you can’t measure it, how do you know you have actually delivered it? Further, if you can’t measure it, how can you show improvement: that (a) whatever it is, it’s now better, and (b) it’s better because of a decision you implemented? That’s a trick question. You can’t. So, how can you measure this?

In her 1995 article, Redish goes on to identify some of the more tangible (and measurable) ways technical writing can add value to an organization. She lists such things as:

  1. Cost savings
  2. Cost avoidance
  3. Adding value by attracting users

Her article ends by returning to the importance of getting credit for your efforts.
So. Mystery solved, right?

Unfortunately, no.

In the 20+ years since, stakeholders’ expectations for technical documentation have remained largely unchanged, because technical writers keep pitching the same value proposition. In the recent Infographic: Measuring the Value of Technical Communication, the top four values of technical communication (as reported by the technical communicators who responded to the survey) are:

  1. Increased customer satisfaction
  2. Reduced support costs
  3. Reduced training and development costs
  4. Reduced overall cost of information development

Yet, from that same survey, this was the information those same technical writers said they would most like to be able to report:

  1. Increased customer satisfaction
  2. Reduced support costs
  3. Reduced training and development costs
  4. Reduced overall cost of information development

Wait! What? After saying these are the values of technical communication for years, how is it possible that we still can’t report them? That needs to change.

Time to start measuring

Where do we start? The best place to start is where you’re standing, which, for technical communication, I think is a pretty good spot. Unfortunately, we’ve also set ourselves up for it to be an uncomfortable spot to stand in.

Here’s the problem. Let’s say that, overall, in the 20+ years since the paper I cited earlier, technical writers as a group have been doing a good job of delivering the top four values listed above. I’d go so far as to say they have done an amazing job (but I’m just a little biased).

Technical writers, on the whole, however, have failed miserably at tracking and reporting that value, but I’ll get to that in a minute. The net result is that there hasn’t been much of a need (a.k.a. a pain point) creating demand for them to measure their impact (there’s only a tiny bit of irony in there being too little ROI in measuring ROI). As problems go, this one isn’t that bad, but its downside is twofold. First, in a data-driven world, no data is easily equated with no value (not a good place to be). Second, what if my assertion is true and technical writers have been delivering on their value proposition? In that case, if we measured the value now to establish a benchmark, I think it would be a challenge to improve on it.

Whether we’ve been delivering our value or not, it’ll be good to know for sure and even better to be able to demonstrate that value to stakeholders. In either case, it’s very important to manage expectations and use the correct tools to collect valid data.

I’m assuming that most technical writing cost/benefit relationships are somewhat geometric, if not asymptotic: each additional unit of effort buys less improvement than the last. Your data could vary (which is why you need to collect it to find out).

[Figure: A sample cost/benefit graph for documentation, showing the geometric relationship between the cost to produce documentation and its resulting impact on satisfaction.]
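
To make that assumed shape concrete, here’s a minimal sketch in Python of a diminishing-returns model like the one in the graph. The curve and every number in it are invented for illustration; your own data would supply the real shape.

    import math

    def satisfaction(doc_spend, floor=40.0, ceiling=95.0, rate=0.04):
        """Illustrative diminishing-returns model: satisfaction rises quickly
        at low documentation spend and flattens out near a ceiling.
        Every number here is made up for the sake of the example."""
        return ceiling - (ceiling - floor) * math.exp(-rate * doc_spend)

    # Each additional unit of spend buys less improvement than the last.
    for spend in (0, 10, 25, 50, 100, 200):
        print(f"spend={spend:>3}  predicted satisfaction={satisfaction(spend):.1f}")

The exact numbers don’t matter; the point is that the marginal benefit of each additional dollar shrinks as you approach the ceiling, which is what makes “increase X” a risky promise near the top of the curve.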

At some point, however, it will no longer be cost effective to keep pushing to improve a given metric; instead, you’ll want to sustain it. That in no way means you should stop monitoring your process! It just means that the effects of further change will become less visible, and it will be more productive to invest in improving other metrics. Further, and more valuable to ongoing operations, even if (or especially if) everything is running smoothly and efficiently, monitoring these values will let you know when they change, and knowing when a metric changes tells you it’s time to dig into it.
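
If you are in sustain mode, even a very simple monitoring rule can tell you when a metric drifts. Here’s a hedged sketch in Python, assuming you already collect some periodic reading such as a monthly satisfaction score; the window and tolerance values are arbitrary placeholders, not recommendations.

    def metric_drifted(history, window=6, tolerance=2.0):
        """Return True when the latest reading strays from the recent baseline.

        history   -- periodic metric readings (e.g., monthly satisfaction scores)
        window    -- how many earlier readings form the baseline average
        tolerance -- how far the latest reading may move before we flag it
        """
        if len(history) <= window:
            return False  # not enough data yet to establish a baseline
        baseline = sum(history[-window - 1:-1]) / window
        return abs(history[-1] - baseline) > tolerance

    # Example with invented monthly scores; the last month dips noticeably.
    scores = [82, 83, 81, 84, 83, 82, 76]
    if metric_drifted(scores):
        print("The metric moved; time to dig into what changed.")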

What are the questions and where are the answers?

Because I think technical writers are already delivering on the top four values (and others), my problem with the stated value proposition, from a measurement standpoint, is with its wording more than its intent.

For example, “Increased customer satisfaction” would be a worthwhile goal if your customer satisfaction were in the dumps, but what if it’s already very good? How much could you increase it, and how much would you be willing to spend to increase it, especially if you’re also managing the second goal of “Reduced support costs”? It doesn’t take an MBA to see that, at some point, those goals begin to work against each other. If you’re at or near that point when you start, you’ve set your stakeholders up for disappointment.

For ideas on collecting reader satisfaction and feedback, take a look at Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites.
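
As one lightweight example (my illustration, not something prescribed in that article), a per-page “Was this page helpful?” widget already yields a score you can benchmark and re-check over time. Here’s a minimal sketch in Python with invented page names and vote data.

    from collections import defaultdict

    # Invented feedback events: (page, reader clicked "helpful")
    events = [
        ("install-guide", True), ("install-guide", True), ("install-guide", False),
        ("api-reference", False), ("api-reference", True), ("api-reference", False),
    ]

    votes = defaultdict(lambda: {"helpful": 0, "total": 0})
    for page, was_helpful in events:
        votes[page]["total"] += 1
        if was_helpful:
            votes[page]["helpful"] += 1

    # A per-page satisfaction score you can track alongside support costs.
    for page, tally in sorted(votes.items()):
        print(f"{page}: {tally['helpful'] / tally['total']:.0%} helpful "
              f"({tally['total']} votes)")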

The same logic applies to the other two of the top four values. You could easily reduce training costs by cutting back on training, but you might also reduce quality and satisfaction. Are those still important? You could also reduce the cost of information development to zero by eliminating the department (I’ve seen that method used before). Again, if you’re not also tracking customer-centered metrics, these are entirely possible outcomes.

To accommodate both the low and the high parts of the curve in the illustration, I’d rephrase the top four values as follows:

  • Increased customer satisfaction
    to: Achieving and maintaining high customer satisfaction.
  • Reduced support costs
    to: Achieving and maintaining low customer support costs.
  • Reduced training and development costs
    to: Maximizing customer skill and utility with the product as economically as possible.

As for the last point, Reduced overall cost of information development: while it sounds like a worthwhile goal, I’d avoid it. First, the first three points capture its spirit. Second, it reinforces the notion that technical writing is a service or cost center, one that takes value instead of adding value. When you’re a cost center, the only way to improve (or be recognized as improving) is by reducing costs, invariably YOUR costs. Is that what the company really wants, or does it really want the value you add (but doesn’t realize it because you haven’t made it visible)? While there are some parts of a business where a cost-center view is appropriate, being seen as adding value is invariably the better option. The attraction of a cost focus is that cost is easy to quantify, but as they say in statistics, not everything that can be counted counts, and not everything that counts can be counted [easily].

In conclusion

Technical writers add value, but they need to make a better case to stakeholders than they have in the past. Further, technical writing tools need to make it easier for writers to measure (and demonstrate) the value of their work.

[1] Redish, J. G. (1995). Adding value as a professional technical communicator. Technical Communication, 42(1), 26–39.
