Measuring your technical content – What about…?

If you’ve been following the preceding posts on measuring content, you’ll have seen the use cases and customer journey paths become less and less funnel-shaped. This is about the point where whataboutism starts to occur.

In the post on measuring Tutorials, for example, I assert that “the customer’s goal in reading a tutorial is to accomplish something outside of the web,” which makes their success difficult to detect and measure from within the topic. While the definition of a tutorial might make that seem like a pretty clear goal, that doesn’t make it immune to whataboutism.

Whataboutism can enter the discussion at this point in the form of “What about the people who come to the tutorial topic looking for a code sample to copy and paste? They don’t want to learn anything.” Or, “What about the executive who looks at the tutorial to see if it addresses a particular issue they care about?” Or, what about… you get the idea. From what I’ve seen, whataboutism enters the discussion most easily when the goals are broad and vague and the data supporting those goals and their measurement is scarce.

(Does that sound like a content project to you?)

So, what can you do about the “what about…” cases?

How to know if you should care

Let’s assume that the whatabouts are honest concerns and were not brought up simply to derail the discussion (although this technique should work in either case).

First, decide whether the whatabout is the result of a fuzzy goal or a shortage of data. Unfortunately, most of the whatabouts that I’ve seen are the result of both a fuzzy goal AND a shortage of data, which makes them particularly challenging. But try to determine which of those factors contributes most to the whatabout.

Clarifying fuzzy goals

If it’s the result of an unclear goal, ask yourself whether you really care that some (as yet unknown and possibly non-existent) executive finds a random (to you) tutorial so inspiring that it closes the sale. Is that the audience you want to optimize for? If those are actual goals of a tutorial topic, consider whether meeting them distracts from the primary goal of teaching the tutorial’s subject. If so, you’ll have to prioritize. On the other hand, maybe the executive will be influenced more by the design of the topics than by the actual content. In that case, addressing that goal probably won’t distract you, the writer, by requiring a change to the content; it could, however, affect your site’s designer (or perhaps she already has it under control).

On the other hand, if that’s not a goal for the topic, then it’s a non-issue. You can look at the goals, say, “that could happen, but that’s not a goal for this content type,” and move on to the next item.

Resolving a shortage of data

My assertion in the tutorial case is that tutorial topics are, by definition, written to demonstrate and teach someone to do something somewhere else. My content should, therefore, be able to measure that accomplishment somehow. If there’s a whatabout that doesn’t have any data behind it, you could try to collect the data to determine whether it’s an issue, if that’s worth the effort. If using tutorials as code-sample farms could be a popular use case, and one that we would want to encourage, we could instrument the page to capture those events. We could then track the frequency of copy events and do things to increase that frequency.
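
As a rough sketch of what that instrumentation could look like (assuming the code samples live in `pre` blocks, and using a hypothetical `recordEvent` helper as a stand-in for whatever analytics call your site actually makes):

```typescript
// A minimal sketch of copy-event instrumentation. recordEvent() is a
// hypothetical stand-in for your real analytics call (gtag, a fetch to a
// collection endpoint, etc.).
function recordEvent(name: string, detail: Record<string, string>): void {
  console.log(name, detail); // replace with your analytics call
}

// Listen for copy events on each code sample on the page.
document.querySelectorAll<HTMLPreElement>("pre").forEach((sample, index) => {
  sample.addEventListener("copy", () => {
    recordEvent("code_sample_copied", {
      page: window.location.pathname,
      sample: String(index), // which sample on the page was copied
    });
  });
});
```

Recording the page path and the sample’s position would tell you not just that readers mine the tutorial for code, but which samples they mine.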

Or, we could decide that, even in the best-case scenario, knowing this wouldn’t matter one way or the other. In which case, why bother? Maybe readers mine our content for snippets, or maybe they don’t. It could be that it’s not worth it to care because we have more valuable things to do.

This might sound harsh, but when you focus on a goal, you will include some things (those that align with and advance the goal) and exclude others. Asking, “Do you want to measure it?” is a good filter for deciding what’s important, in that it adds a cost to the topic. Not every feature or question has the same value, and when you have limited resources, you need a way to identify the questions and interests that are more valuable than the others. Also, not every piece of data will provide information on which you can act, but collecting it will still take up time and resources.

A question creates the space in which to put the answer

A professor told me this, implying that until you have a question, you won’t really have any place to put the answer. This is my preferred approach to data collection, and it sits at the opposite end of the spectrum from the “let’s collect everything we can think of and figure it out later” approach. Neither is intrinsically better or worse; they are just different tools for solving different problems. For most analytic and KPI-type data, asking the question first is the best way to collect the data you need to actually answer the question.

My preferred approach is to define the goal of the content and, from that, the instrumentation by which to measure how well the content is performing.

For example, my introduction pages were designed to bring people into the site and convert them to new users: a funnel-shaped flow that Google Analytics could help with. Unfortunately, that’s usually only one page of the site, but it’s an important one! With that interaction in mind, I can design my introduction page to bring the reader into the site and send them on to my goal for the page (whatever that is for the site). With the topic designed around the analytics, and vice versa, I should get some useful data about that page.
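
As an illustration (assuming a standard gtag.js setup; the event name, parameters, and the `#getting-started-link` selector are my own illustrative choices, not anything Google Analytics prescribes), the page’s call to action could report when a reader follows it toward the goal:

```typescript
// Sketch: report when the reader follows the introduction page's
// call to action. Assumes gtag.js is already loaded on the page.
declare function gtag(...args: unknown[]): void;

const cta = document.querySelector<HTMLAnchorElement>("#getting-started-link");
if (cta) {
  cta.addEventListener("click", () => {
    gtag("event", "intro_cta_click", {
      from_page: window.location.pathname,
      to_page: cta.href,
    });
  });
}
```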

While the Getting Started page was part of a funnel-shaped interaction, it was also part of an inverted-funnel-shaped interaction. Knowing this, I can evaluate the analytic data to separate those paths. (For example, traffic in the funnel-shaped interaction would likely arrive from the Introduction page, while traffic in the other interaction would not.)
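
One way to make that separation (a sketch only; the `PageView` shape and `exportedViews` are assumptions about what your analytics export provides) is to partition the page’s traffic by referrer:

```typescript
// Sketch: split Getting Started traffic into the funnel-shaped path
// (arrivals from the Introduction page) and everything else. The PageView
// shape and exportedViews are assumptions about your analytics export.
interface PageView {
  page: string;
  referrer: string;
}

declare const exportedViews: PageView[];

function splitPaths(views: PageView[], introPath: string) {
  const funnel: PageView[] = [];
  const other: PageView[] = [];
  for (const view of views) {
    (view.referrer.endsWith(introPath) ? funnel : other).push(view);
  }
  return { funnel, other };
}

// Compare how much traffic each path contributes.
const { funnel, other } = splitPaths(exportedViews, "/introduction/");
console.log(`funnel-shaped: ${funnel.length}, other paths: ${other.length}`);
```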

The “collect everything and shake something interesting out of it later” approach can provide some forensic review of your content (so it’s not completely without value), and it’s always good for a few stories. However, because it’s not goal-driven and leaves room for a lot of intervening and confounding variables, demonstrating a credible, causal effect (i.e., showing the value of your content) is much harder.

It’s about demonstrating impact

In the end, it’s about delivering value and demonstrating the impact of that value.

Knowing that my tutorial is designed to teach the reader to do something elsewhere, I can design the page and the feedback to work within that framework. With that goal and that data, I can show, scientifically, that my content is (and by reflection, I am) adding value to the customer experience.
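
A minimal sketch of what that feedback could look like (reusing the hypothetical `recordEvent` helper from earlier): end the tutorial with a prompt that asks whether the reader actually accomplished the task.

```typescript
// Sketch: ask the reader whether they accomplished the tutorial's task,
// since the accomplishment itself happens outside of the web. recordEvent()
// is the same hypothetical analytics helper as before.
declare function recordEvent(name: string, detail: Record<string, string>): void;

function addOutcomePrompt(container: HTMLElement): void {
  for (const outcome of ["yes", "no"]) {
    const button = document.createElement("button");
    button.textContent = outcome === "yes" ? "Yes, it worked" : "Not yet";
    button.addEventListener("click", () => {
      recordEvent("tutorial_outcome", {
        page: window.location.pathname,
        outcome, // the closest in-page signal of success elsewhere
      });
    });
    container.appendChild(button);
  }
}
```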

Stay focused!
