How to read survey data

As we get closer to our (American) midterm elections, we’re about to be inundated with surveys and polls. But even between elections, surveys are everywhere, for better or worse.

To help filter the signal from the noise, here is a list of tips I’ve collected over the years for critically reading reports based on survey data.

If you’re a reader of survey data, use these tips to help you interpret the surveys you see in the future.

If you’re publishing survey data, be sure to consider these as well, especially if your readers have read this post.

To critically read survey data, you need to know:

  1. Who was surveyed and how
  2. What they were asked
  3. How the results are stated

Let’s look at each of these a bit more…

Continue reading “How to read survey data”

Collecting feedback about your documentation

[Image: KC-135R engine instruments showing various indications during testing. Lots o’ data.]

In my thread on user interactions with documentation, I suggested that you want your measurement instrument, usually some form of survey question(s), to influence the experience you’re trying to measure as little as possible. From a measurement standpoint, that’s nothing new. You never want your measurements to influence the thing you’re measuring, because doing so contaminates the measurement.

In the case of documentation or experience feedback, a recent tweet by Nate Silver (@NateSilver538) described how the use, and perceived use, of the measurement system had contaminated the data collected by Uber/Lyft (and countless other services). He tweeted, “Given that an equilibrium has emerged where any rating lower than 5 stars means your Uber/Lyft driver was bad, they should probably just replace the 5-star scale with a simple thumbs-up/thumbs-down.”
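To make that equilibrium concrete, here’s a minimal sketch (in TypeScript, with made-up ratings and function names, not anything from Uber, Lyft, or Nate’s analysis): once anything below 5 stars means “bad,” the scale is already carrying only one bit of information, so you might as well report it as thumbs-up/thumbs-down.

```typescript
// Hypothetical sketch: collapse a 5-star rating into the binary signal it has
// effectively become under the "anything below 5 means bad" equilibrium.
type StarRating = 1 | 2 | 3 | 4 | 5;
type Thumb = "up" | "down";

function toThumb(rating: StarRating): Thumb {
  return rating === 5 ? "up" : "down";
}

// Made-up feedback sample; real data would come from your docs feedback widget.
const ratings: StarRating[] = [5, 5, 4, 5, 5, 2, 5, 5, 5, 1];

const upCount = ratings.filter((r) => toThumb(r) === "up").length;
console.log(`Thumbs-up share: ${((upCount / ratings.length) * 100).toFixed(0)}%`);
```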

Continue reading “Collecting feedback about your documentation”

The right tool to measure your content’s performance

Over the past few days, I read a couple of articles on content metrics from the blogosphere: one had promise but ultimately indulged in some analytic sleight of hand, while the other actually made me smile; its focus on a solid methodology gave me hope.

Why is a solid methodology important? It’s the basis of your reputation and credibility. It’s the difference between knowing and guessing. These two articles offer an example of each.

First, the one that had a sound approach, but some flawed measurement methods.

How metrics help us measure Help Center effectiveness

This article has promise, but trips and falls before the finish line. To its credit, it recommends:

  1. Setting goals and asking questions
  2. Collecting data
  3. Reviewing the data
  4. Reviewing goals and going back to #2
  5. Lather, rinse, repeat (as the shampoo suggests)

As a general outline, this is as good as they come, but the devil is in the details. If you’re in a hurry, just skip to the end or you can…

Continue reading “The right tool to measure your content’s performance”

Why I’m passionate about content metrics

…and why return on investment (ROI) is not the best metric to use for measuring content value.

I’ve been ruminating on the topic of content value (more than I usually do) since Tom Johnson published his essay on technical communication value last month. I’ve studied measurement, experimentation, and statistical process control in applications other than writing, and I have also seen them repeatedly and painfully applied to writing. The results have been, almost without exception, anywhere from disappointing to destructive. Until this morning, I hadn’t been able to articulate why. Thankfully, a post on Medium by Alan Cooper provided the shift in perspective I needed to bring everything into focus. I’m passionate about this because it’s the new view of tech comm that’s needed for the 21st century.

Writing is a process, but it’s not a manufacturing process

Alan Cooper’s post, titled “ROI Does Not Apply,” describes how “ROI is an industrial age term, applicable to companies that manufacture things in factories.” Writing is not a manufacturing process. Granted, there are some manufacturing-like elements of the writing process as you get closer to publishing the content—and honestly, making that part of the process as commodity-like as possible has some benefits. But writers add the most value to the content where the process is complicated and non-linear: towards the beginning, where concepts, notions, and ideas are brought together to become a serial string of words. That’s where ROI and other productivity measures are inappropriate—they don’t measure what’s important to adding value.
Continue reading “Why I’m passionate about content metrics”

Measuring Value for WriteTheDocs Boulder

[Link to the presentation given to WriteTheDocs-Boulder]

I had the pleasure of joining the Boulder/Denver WriteTheDocumentarians at their meetup in Denver last month. I presented a short talk on what I’ve learned about measuring the value of the content typically produced by technical writing, which started an enlightening conversation with the group.

I’ve linked the slides and provided a brief narrative to go with them here. Unfortunately, you had to be there to enjoy the conversation that followed, which is a good reason not to miss these events!

Measuring content value is a process, not a destination

For some, the idea of measuring the value of technical writing requires a shift in thinking. Measuring the value that content provides is just one step in the process of setting and evaluating content goals (see also Design Thinking). Without getting too philosophical, the first realization to make is that

You can’t measure value until you define what is valuable.

Value, however, can take many shapes, and different people in your organization will likely define value differently, especially when it comes to content.
Continue reading “Measuring Value for WriteTheDocs Boulder”

Measuring your technical content – What about…?

If you’ve been following the preceding posts on measuring content, you’ve seen the use cases and customer journey paths become less funnel-shaped. This is about the point where whataboutism starts to occur.

In the post on measuring Tutorials, for example, I assert that “the customer’s goal in reading a tutorial is to accomplish something outside of the web,” which makes their success difficult to detect and measure from within the topic. While the definition of a tutorial might make that seem like a pretty clear goal, that doesn’t make it immune to whataboutism.

Whataboutism can enter the discussion at this point in the form of “What about the people who come to the tutorial topic looking for a code sample to copy and paste? They don’t want to learn anything.” Or, “What about the executive who looks at the tutorial to see if it addresses a particular issue they care about?” Or, what about… You get the idea. From what I’ve seen, it’s easiest for whataboutism to enter the discussion when the goals are broad and vague and the data supporting the goals and their subsequent measurement are scarce.

(Does that sound like a content project to you?)

So, what can you do about the “what about…” cases?

Continue reading “Measuring your technical content – What about…?”

Measuring your technical content – Part 3

In my recent posts, Measuring your technical content – Part 1 and Measuring your technical content – Part 2, I described some content goals and how those might be defined and measured for Introduction and Getting Started topics. In Part 1, the interaction was funnel-shaped, while in Part 2, the funnel metaphor applied to only one of the two customer journeys through the page.

In this post, I talk about Tutorials and How-To topics and how the funnel metaphor becomes less appropriate as customers’ goals move beyond the web experience.

It’s time to get creative (scientifically, of course).
Continue reading “Measuring your technical content – Part 3”

Measuring your technical content – Part 2

In my previous post, Measuring your technical content – Part 1, I described some content goals and how those might be defined and measured for an Introduction topic. In this post, I look at Getting Started topics.

Getting Started topics

If Introduction topics bring people in by telling them how your product can help them, Getting Started topics are where you show them how that works. Readers who come here from the Introduction topic will want to see some credible evidence that backs up its claims, and these topics are where you can demonstrate that.

Technical readers will also use this as the entry point into the technology, so there are at least two customer journey paths intersecting here.

  • One path comes to a conclusion here, moving from the Introduction page to see the value and then to the Getting Started topic to see how it works.
  • Another path starts at the Getting Started page (already understanding the value proposition of the product) and moves deeper into the technology to apply it to the reader’s specific case.

Because at least one of the customer journeys through the Getting Started topics is less funnel-shaped than the journeys through the Introduction topics (some are almost inverted funnels), it’s important to start with the goals and the required instrumentation before writing, so that you can design your page to provide the information that the customer needs for their goals as well as the data you’ll need to evaluate the page (your goal).
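As a rough illustration, and only a sketch, here’s what instrumenting those two journeys might look like in TypeScript. The event names, element IDs, and the /analytics/collect endpoint are hypothetical placeholders; in practice you’d use whatever custom-event mechanism your analytics tool provides.

```typescript
// Hypothetical instrumentation sketch for a Getting Started page.
type GettingStartedEvent =
  | "viewed_value_demo"    // journey 1: came from the Introduction, wants proof
  | "copied_first_sample"  // journey 2: technical reader starting hands-on
  | "continued_deeper";    // journey 2: moved on to a deeper topic

function recordEvent(event: GettingStartedEvent): void {
  // sendBeacon keeps working while the reader navigates away, which matters
  // for events that coincide with leaving the page.
  navigator.sendBeacon(
    "/analytics/collect", // placeholder collection endpoint
    JSON.stringify({ page: "getting-started", event, at: Date.now() })
  );
}

// Wire each event to the page element that signals the corresponding goal.
document.querySelector("#value-demo")
  ?.addEventListener("click", () => recordEvent("viewed_value_demo"));
document.querySelector("#copy-sample-button")
  ?.addEventListener("click", () => recordEvent("copied_first_sample"));
document.querySelector("#next-topic-link")
  ?.addEventListener("click", () => recordEvent("continued_deeper"));
```

Which events you actually define depends on the goals you set for the page, which is the point of doing that design work before you write.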

So, in that case, how might you measure such a topic’s success?

Continue reading “Measuring your technical content – Part 2”

Measuring your technical content – Part 1

This started out as a single post, but grew, and grew, and… Well, here’s the first installment.

After the last few posts, it would be easy to get the impression that I don’t like Google Analytics.

Not true.

I just don’t like it when it’s treated as the only tool that can collect data about a web site—especially a non-funnel (informational) web site.

In this collection of posts, I’ll look at my favorite topic, API documentation, and how you might analyze it in the context of the customers’ (readers’) journeys. This analysis doesn’t have to apply only to API documentation, because it’s based on customers’ goals, which are more universal. If you look carefully, you might see a customer goal that matches some of your content.

So let’s start with the basic questions…

Continue reading “Measuring your technical content – Part 1”

Google Analytics just makes me sad

In my last post, I talked about how Google Analytics isn’t very helpful in providing meaningful data about technical or help content. It can’t answer questions like: Did it help the reader? Did they find it interesting? Could they accomplish their task/goal thanks to my content? What do readers really want? You know, simple questions like those.

While a little disappointing, that’s not what makes me sad.

What’s sad is that the charts on the dashboard have all the makings of dysfunctional communication. For example, the dashboard seems to tell me, “You’re not retaining readers over time.” But it can’t, or won’t, tell me why.

Awww, come on, gimme a hint?!

Continue reading “Google Analytics just makes me sad”