Measuring your technical content – Part 3

In my recent posts, Measuring your technical content – Part 1 and Measuring your technical content – Part 2, I described some content goals and how those might be defined and measured for Introduction and Getting Started topics. In Part 1, the interaction was funnel shaped, while in Part 2, the funnel metaphor applied only in one of the two customer journeys through the page.

In this post, I talk about Tutorial and How-To topics and how the funnel metaphor becomes less appropriate as customers’ goals move beyond the web experience.

It’s time to get creative (scientifically, of course).

Measuring Tutorial & How-To topics

Tutorial topics, like Getting Started topics, are task-oriented, but they encompass a wider variety of tasks and have fewer variations in customer interaction paths. The customer’s goal in reading a Tutorial topic is to become able to accomplish a task outside of the topic and, in some cases, entirely outside of the web.

At this point, it’s probably safe to assume the reader is committed to the product, at least to some degree, because they are now willing to spend time learning detailed procedures that solve specific problems. However, because the customer’s success is more likely to occur outside of the web experience, it will be difficult to track through the web site. Also, the business success of this content will be less immediate than that of other topics and, therefore, harder to track.

First, let’s review what success with tutorial topics looks like to the stakeholders.

  • Success factors
    • Customers are successful with a tutorial when:
      • They can apply the task described in the topic to their own app.
      • However, they are not successful merely when they finish reading the topic. Remember, their goal is to apply the tutorial, not just complete it.
    • The business is successful when customers apply the tutorial and:
      • The lesson does not result in a service call.
      • The customer successfully uses this feature (and the company gets paid for it).
  • Success indications
    • When a customer is successful with a tutorial, they will:
      • Be able to do the task in their app.
      • NOT call customer support for help.
    • The business is successful when:
      • Customers use the tutorial’s feature (if that actually helps the business).
      • Customers do not call customer support for help.
  • Measures
    • Customer success is measured by:
      • Satisfaction with using the tutorial’s lesson (more satisfied = better)
      • Related customer support calls (fewer = better)
    • Metrics that indicate business success:
      • Customer satisfaction feedback (more satisfied = better)
      • Related customer support calls (fewer = better)
      • API accesses for online software (more accesses, lower error rate = better)

The customer has likely already bought the product, so unless the successful application of the tutorial topic brings the company more money, the success to the business is likely now only in the form of cost-avoidance. There might be some long-term rewards like repeat business or word-of-mouth referrals, but those are very difficult to trace back to a specific help topic.

Notice that their success isn’t measured by page views or other traditional web metrics. While customers must read the topics to be successful (or do they?), at this point in the customer’s journey, their success now occurs outside of the web experience and in their own apps.

Knowing how customers will interact with your topic can help you identify the best places and methods to collect feedback about their experience. And so enters the creativity mentioned earlier. The best way to find the answer depends on the question. Knowing these questions earlier in the design process can help you work with the product developers to identify ways that the product can support your information requirements. Some examples…

It might help writers improve their topics to know which actual customer successes relate to tutorial topic views, as a way to measure topic effectiveness, but task-completion questions could provide that information more directly (because they require the reader to read and apply the topic).

Another metric that could help writers is where readers drop out of a topic. Tutorials are typically step-by-step, so detecting how much of a topic is read (how many steps) can identify problems with it. This could be instrumented in a page in a variety of ways and tracked as a percentage of the whole page. The challenge in interpreting this type of measure is knowing why the reader left before reaching the end. Knowing, for example, that 80% of readers found the solution they needed after reading only 50% of the topic could be good news, while knowing that 80% of those who left before finishing were not satisfied would have more negative implications.
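One way to instrument this is to fire an analytics event as each step scrolls into view and report the deepest step reached when the reader leaves. Below is a minimal sketch: the `.tutorial-step` class and the `sendAnalyticsEvent()` beacon are hypothetical names, stand-ins for whatever markup and analytics hooks your site already has.

```javascript
// Convert the deepest step a reader reached into a percent of the topic read.
function deepestStepPercent(stepsSeen, totalSteps) {
  if (totalSteps === 0) return 0;
  const deepest = stepsSeen.length ? Math.max(...stepsSeen) : 0;
  return Math.round((deepest / totalSteps) * 100);
}

// Browser-only wiring; guarded so the pure function above works anywhere.
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  // Assumes each tutorial step is an element with class "tutorial-step".
  const steps = Array.from(document.querySelectorAll('.tutorial-step'));
  const seen = new Set();

  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        seen.add(steps.indexOf(entry.target) + 1); // 1-based step number
      }
    }
  });
  steps.forEach((step) => observer.observe(step));

  // When the reader leaves, report how far they got.
  window.addEventListener('pagehide', () => {
    const percent = deepestStepPercent([...seen], steps.length);
    // sendAnalyticsEvent('tutorial_depth', { percent }); // hypothetical beacon
  });
}
```

Aggregated across readers, these per-visit percentages give you the drop-off curve described above; the satisfaction question still has to come from a separate feedback signal.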

If the task describes an API interaction, another action to track is the API’s activity, watching for successes or errors. Relating these actions to a topic read can help identify the sources of (or the locations to fix) those errors. Knowing which customer accounts have read the topic and accessed the API can support that correlation.
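A rough sketch of that correlation: join the accounts that viewed the topic against the API log and compare error rates for readers versus non-readers. The log shapes below are assumptions for illustration; substitute the exports your analytics tool and API gateway actually produce.

```javascript
// Compare API error rates for accounts that read the tutorial vs. those that didn't.
function errorRateByReadership(topicReads, apiCalls) {
  // topicReads: array of account ids that viewed the tutorial topic.
  // apiCalls:   array of { account, ok } records from the API log.
  const readers = new Set(topicReads);
  const buckets = {
    readers: { calls: 0, errors: 0 },
    nonReaders: { calls: 0, errors: 0 },
  };

  for (const call of apiCalls) {
    const bucket = readers.has(call.account) ? buckets.readers : buckets.nonReaders;
    bucket.calls += 1;
    if (!call.ok) bucket.errors += 1;
  }

  const rate = (b) => (b.calls === 0 ? 0 : b.errors / b.calls);
  return { readers: rate(buckets.readers), nonReaders: rate(buckets.nonReaders) };
}

// Example with made-up data: accounts A and B read the topic, C did not.
const rates = errorRateByReadership(
  ['A', 'B'],
  [
    { account: 'A', ok: true },
    { account: 'A', ok: true },
    { account: 'B', ok: false },
    { account: 'B', ok: true },
    { account: 'C', ok: false },
    { account: 'C', ok: false },
  ]
);
// rates.readers  -> 0.25 (1 error in 4 calls)
// rates.nonReaders -> 1  (2 errors in 2 calls)
```

A gap between the two rates doesn’t prove the topic caused the difference, but it’s a useful prompt for where to look next.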

Depending on the topic and the audience, you could sprinkle short feedback prompts about each step in the tutorial to get more detailed (but perhaps, less frequent) feedback on the topic and customers’ success.
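As a sketch of what those prompts might look like, the snippet below inserts a yes/no question after each step and tallies the answers per step. The `.tutorial-step` class and the `recordFeedback()` endpoint are hypothetical; adapt them to your own page structure and feedback pipeline.

```javascript
// Tally per-step yes/no feedback collected from the prompts.
function summarizeFeedback(responses) {
  // responses: array of { step, worked } answers.
  const byStep = {};
  for (const { step, worked } of responses) {
    if (!byStep[step]) byStep[step] = { yes: 0, no: 0 };
    byStep[step][worked ? 'yes' : 'no'] += 1;
  }
  return byStep;
}

// Browser-only wiring; guarded so the pure function above works anywhere.
if (typeof document !== 'undefined') {
  document.querySelectorAll('.tutorial-step').forEach((stepEl, i) => {
    const prompt = document.createElement('p');
    prompt.innerHTML =
      `Did step ${i + 1} work for you? ` +
      `<button data-worked="true">Yes</button> <button data-worked="false">No</button>`;
    prompt.addEventListener('click', (e) => {
      if (e.target.dataset && e.target.dataset.worked !== undefined) {
        // recordFeedback({ step: i + 1, worked: e.target.dataset.worked === 'true' }); // hypothetical
      }
    });
    stepEl.after(prompt);
  });
}
```

Because each prompt is tied to a specific step, the responses point the writer at the exact step that failed rather than at the topic as a whole.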

When you know what you want to know, it’s much easier to find the answer.
