Proving and defending the value of technical writing, again

A red compact car with no tires or wheels propped up on bricks.

A couple of weeks ago, I responded to this post on LinkedIn in which Nick, the original poster, asked, as so many technical writers before him:

Does anyone have data from their industry, demonstrating why it’s important to have good documentation? I’m struggling to convince (some) product managers why we need to invest in this.
Thanks in advance!

Nick received lots of well-intentioned suggestions that could provide data and reasoning to support a response to the product manager. And then I replied:

That’s not how documentation works.
Good documentation is what customers expect. Not having good docs, however, will cost you.
Maybe say, “let’s take the docs offline for a week and see what happens?” At the end of the week, you’ll have the data you need.

While my reply contains a dash of snark, it’s really the only way I could think of at the moment to shock the discussion back to something productive.

This type of “prove your worth to me” question isn’t really looking for data. It’s usually asked to establish some sort of dominance or just to pick a fight (however politely). In the worst-case scenario, they’re looking for positions (other than theirs) to cut.

I find this question to be annoying, not just because I’ve been hearing this for decades, but because it presumes that documentation doesn’t have any worth until you prove it. The same question could be asked of the product manager: What data is there to demonstrate why we need good product management?

So, can we please move past the “why are you even here?” challenge? Can we assume, for the moment at least, that we’re all professionals and we’re all here to deliver the best value to the customer for the company?

Continue reading “Proving and defending the value of technical writing, again”

How to not suffer the curse of knowledge

Photo of Rodin's sculpture of The Thinker (Le Penseur)

Wikipedia says that the curse of knowledge “is a cognitive bias that occurs when an individual, who is communicating with other individuals, assumes that they have the background knowledge to understand.”

I’ve suffered that curse on various occasions, but I think I might have a way to reduce its frequency.

Know your audience.

Thank you for visiting.

Just kidding. There’s more.

Knowing your audience is one of the first things we teach technical writers, but that advice doesn’t quite address the nuance required to vaccinate yourself against the curse of knowledge.

Here are a few steps I’ve used.

Step 1. Empathize with your audience

It’s more than just knowing them; it’s understanding them in the context of reading your content. This interaction might be minor in your reader’s experience, but it’s the reason you’re writing technical documentation. It’s extremely helpful to understand your readers in the moments of their life in which they’re reading your documentation.

Know why they’ll be reading your documentation or even just a topic in your documentation. What brings them to that page? What’s their environment like? What pressures are they under? What are their immediate and long-term goals? What would they rather be doing instead of reading your doc?

The reality is that most readers would rather be doing almost anything else but reading technical documentation—so, how can you work with that (besides not writing it)?

Continue reading “How to not suffer the curse of knowledge”

Reporting documentation feedback and keeping it real

Chart showing a high correlation between Comp Sci PHDs and Arcade revenue

In my previous post, If it’s not statistically significant, is it useful? (and in every grad-school statistics class I taught), I talked about staying within the limits of your data. By that, I mean not making statements that misrepresent what the data can support—basically, keeping it real.

Correlation is not causation

Perhaps the most common example of that is using correlation methods and statistics to make statements that imply causation. My favorite site for worst-case examples of correlations that would make for some curious assumptions of causation is Tyler Vigen’s Spurious Correlation site.

Here’s a fun example. This chart shows that the number of computer science doctorates awarded in the U.S. correlates quite highly with the total revenue generated by arcades from 2000 to 2009.

Chart showing a high correlation between Comp Sci PHDs and Arcade revenue
An example of the crazy correlations found at https://www.tylervigen.com/spurious-correlations

Does this chart say that computer science doctorates caused this revenue? No.

It’s possible that computer science Ph.D. students contributed a lot of money to arcades or, perhaps, that arcades were funding computer science Ph.D. students. The problem is that this chart, or more importantly, this type of comparison, can’t tell us whether either one is true. Based on this chart alone, saying that one of these factors is the cause of the other would be exceeding the limits of the data.
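To see how little a high correlation proves, here’s a quick sketch in Python. The numbers are invented for illustration (not Vigen’s actual data); any two series that merely trend upward over the same decade will correlate strongly:

```python
# Two invented series that both trend upward over 2000-2009.
phds = [800, 820, 850, 900, 980, 1050, 1150, 1250, 1300, 1350]          # hypothetical Ph.D. counts
revenue = [1.20, 1.25, 1.30, 1.40, 1.50, 1.55, 1.70, 1.80, 1.85, 1.90]  # hypothetical revenue, $B

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(phds, revenue)
print(f"r = {r:.3f}")  # close to 1.0, yet proves nothing about causation
```

The near-perfect r only says the two series move together; it can’t tell you whether either causes the other, or whether both simply rode the same decade.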

Describe the data honestly

In my previous post, If it’s not statistically significant, is it useful?, I talk about how the sparse customer feedback in that example couldn’t represent the experience of all the people who looked at a page with a feedback prompt. The 0.03% feedback to page view rate and self-selection of who submitted feedback prevent generalization beyond the responses.

Let’s try an example

Imagine we have a site with the following data from the past year.

  • 1,000,000 page views
  • A feedback prompt on each page: “Did you find this page helpful?” with the possible answers (responses) being yes or no.
  • 120 (40%) yes responses
  • 180 (60%) no responses

What can we say about this data?
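Before answering, it helps to put the counts and rates side by side. A minimal sketch in Python, using only the numbers above:

```python
page_views = 1_000_000
yes, no = 120, 180
responses = yes + no  # 300 total responses

response_rate = responses / page_views
print(f"Responses: {responses} ({response_rate:.2%} of page views)")  # 0.03%
print(f"Yes: {yes} ({yes / responses:.0%} of responses)")
print(f"No:  {no} ({no / responses:.0%} of responses)")

# Within the limits of the data: "60% of the 300 responses said no."
# Beyond the limits: "60% of readers found the page unhelpful."
```

The two closing comments mark the line between an honest description and an overgeneralization: the percentages describe the respondents, not the million page views.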

Continue reading “Reporting documentation feedback and keeping it real”

If it’s not statistically significant, is it useful?

A compressed view of traffic in downtown Seattle with cars, buses, and pedestrians from 1975

In all the product documentation projects I’ve worked on, a good feedback response rate to our help content has been about 3-4 binary (yes/no) feedback responses per 10,000 page views. That’s 0.03% to 0.04% of page views. A typical response rate has often been more like half of that. Written feedback has typically been about 1/10 of that. A frequent complaint about such data is that it’s not statistically significant or that it’s not representative.

That might be true, but is it useful for decision making?

Time for a short story

Imagine someone standing on a busy street corner. They’re waiting for the light to change to cross the street. It’s taking forever and they’re losing patience. They decide to cross. The person next to them sees that they’re about to cross, taps them on the shoulder, and says, “the light’s still red and the traffic hasn’t stopped.” Our impatient pedestrian points out, “that’s just one person’s opinion,” and charges into the crossing traffic.

Our pedestrian was right. There were hundreds of other people who said nothing. Why would anyone listen to just that one voice? If this information were so important, wouldn’t others, perhaps even a representative sample of the population, have said something?

Not necessarily. The rest of the crowd probably didn’t give it any thought. They had other things on their mind at the time and, if they had given it any thought at all, they likely didn’t think anyone would even consider the idea of crossing against the traffic. The crossing traffic was obvious to everyone but our impatient pedestrian.

Our poor pedestrian was lucky that even one person thought to tell them about the traffic. Was that one piece of information representative of the population? We can’t know that from this story. Could it have been useful? Clearly.

Such is the case when you’re looking at sparse customer feedback, such as you likely get from your product documentation or support site.

A self-selected sample of 0.03% is likely to be quite biased and not representative of all the readers (the population).

What you should consider, however, is: does it matter if the data is representative of the population? Representative or not, it’s still data—it’s literally the voice of the customer.

Let’s take a closer look at it before we dismiss it.

Understanding the limits of your data

Let’s consider what that one person at the corner, or that 0.03% of the page views, tells us.

  • They don’t tell us what the population thinks. Because such sparse data isn’t statistically representative, we can’t generalize it to make assumptions about the entire population.
  • They do tell us what the respondents think. We might not know what the population thinks, but we know what that 0.03% thinks.

The key to working with data is to not go beyond its limits. We know that this sparse data tells us what 0.03% of the readers thought, so what can we do with that?

Continue reading “If it’s not statistically significant, is it useful?”

You’ve tamed your analytics! Now what?

In my last post, I talked about How you can make sense of your site analytics. But once you make sense of them, what can you do with them?

Let’s say that you’ve applied that method and you can now tell the information from the noise. What’s next?

The goal of the method presented in the last post is mostly to separate the information from the noise so you can make information-based decisions as opposed to noise-based decisions.

There are a couple of things you’re ready to do.

  • Reduce the noise
  • Improve the signal

They’re not mutually exclusive, but you might find it easier to pick one at a time to work on.

Let’s talk about the noise, first.

Why is it noisy?

Recall this graph of my site’s 2020 page views.

Graph of DocsByDesign.com website traffic for 2020 showing a lot of variation.
DocsByDesign.com website traffic for 2020

During 2020, I only made one post about how I migrated my site to a self-hosted AWS server. Not a particularly compelling article, but it’s what I had to say at the time—and apparently all I really had to say for 2020.

Based on that, this is a graph of the traffic my site sees during the year while I ignore it. It’s a graph of the people who visit my site for whatever reason—and therein lies the noise. People, or at least the people who visited my site in 2020, visited for all kinds of reasons—all reasons but my tending to the site.

Let’s see if we can guess who these visitors might be. Here’s a table of my site’s ten most visited pages during 2020.

Continue reading “You’ve tamed your analytics! Now what?”

How you can make sense of your site analytics

If you’ve watched any of your website’s analytics, such as page views or unique visitors, you’ve probably seen something like this chart and wondered, what does that even mean?

Graph of DocsByDesign.com website traffic for 2020 showing a lot of variation.
DocsByDesign.com website traffic for 2020

I know that I have, and I studied this kind of stuff for my Ph.D. All this wiggly-squiggly! What’s going on?

I’ve seen this type of graph just about any time I’ve plotted website data for just about any developer doc site I’ve worked on, and I’ve wondered (and had management ask me), does this show anything we should be concerned about? For the longest time, I’ve always answered with a shrug of some sort.

But now, I think there might be a way to make sense of this data.
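The full post explains the method; as one common way to start taming the wiggle (an assumption on my part, not necessarily the post’s method), a trailing moving average separates the slow trend from the day-to-day noise:

```python
def rolling_mean(values, window=7):
    """Smooth daily counts with a trailing window average."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily page views: a baseline of 50 plus weekday bumps
# and weekend dips that cancel out over a full week.
daily = [50 + (8 if d % 7 < 5 else -20) for d in range(28)]
smooth = rolling_mean(daily)
print(smooth[-1])  # settles on the 50-view baseline once the window spans a week
```

Once the window covers a full week, the weekly cycle averages out and the underlying level shows through; what’s left to explain is whatever the smoothed line still does.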

Continue reading “How you can make sense of your site analytics”

I love it when things just work

Bob Watson piloting a light plane on a sunny day as it approaches the runway to land

The image is a still frame from a video I pulled out of my archive to edit and an example of things just working–I’m on the final approach to a silky touchdown at Orcas Island airport.

In user experience parlance, they call that customer delight. I recently had some experiences as a customer that delighted me. It was amazing!

I hope that my readers get to experience similar delight when they read my docs. Let’s unpack these recent delights to see how they might help improve my writing.

The experiences

It really started with a recent disappointing purchase experience, but first some back story.

About 20 years ago, I used to edit videos, among other things. Back then, computers took a lot of tuning (i.e. money) to meet the processing demands of video editing and effects. After several software and hardware iterations, I finally had a system that had the industry standard software running on a computer that could keep up with the challenge of video editing.

With that, I could finally focus on the creative and productive side of editing without having to fuss with the computer all the time. It’s not that I minded fussing with the computer–after all, that’s what I had been doing all along to get to this state of functionality and reliability. Rather, I don’t like fussing with it when I have other things that I want to accomplish.

It was truly a delight to be able to focus on the creative and productive aspects of the job. Having reliable tools made it possible to achieve flow. If you’ve ever achieved that state, you know what I mean. If not, read Finding Flow: The Psychology Of Engagement With Everyday Life by Mihaly Csikszentmihalyi.

Fast forward to this past week.

I finally upgraded my very-consumer-y video editor (Pinnacle Studio) to edit some home videos. I’d used an earlier version a few years back and I recall it having a pretty low learning curve for what I wanted to do. But my version was getting stale, and they were having a sale, so…

I paid my money, got my download, and was ready for the delight to begin!

Not so fast. There would be no delight today.

Continue reading “I love it when things just work”

Is there any more room for innovation in tech writing?

I was looking through some classic (i.e., old) examples of technical writing and noticed how the format of application programming interface (API) reference topics hasn’t really changed since these examples were published in 1988 and 1992.

Is this because there’s been no innovation in technical writing during the intervening 30-ish years or, perhaps, we’ve found something that works, so why change? Taking that a step further, if what worked 30 years ago still works, is there any more room for innovation in tech writing?

Here are a couple of examples that I found in my library of software documentation. (It’s a library of printed documentation so there’s not much in there from the 21st century.)

MS-DOS Encyclopedia (1988)

Reference topic from the MS-DOS Encyclopedia

The first example of classic documentation is from the Microsoft MS-DOS Encyclopedia (Ray Duncan, Microsoft Press, 1988), a 1,570-page collection of everything you’d need to know about programming for MS-DOS (v3.2) in 1988.

It starts with how MS-DOS was originally developed, continues with conceptual overviews of the different operating system functions, how to create common applications and extensions to the operating system, and various reference topics, such as the interrupt example I included here. It’s a one-stop reference manual that would prepare any MS-DOS programmer or device-driver developer for successful coding experiences.

This 33-year-old encyclopedia presents information in a format that you can still see used today.

  • Overview
  • Conceptual content
  • How-to articles of common tasks
  • Reference topics on various aspects of the product
  • Cross-references where they make sense

The content in the example reference pages that I included also follows a format that is still seen today:

Continue reading “Is there any more room for innovation in tech writing?”

What do you think of our docs?

In most technical writer interviews I’ve had for an organization with public-facing docs, I’ve been asked, “Did you get a chance to read our docs?” which they invariably follow with, “So, what did you think of them?” How should you answer? Here’s what I’ve learned, the hard way.

The answer to the first one should be a confident (and honest), “Yes!” For, hopefully, obvious reasons. When I’ve interviewed candidates, I have to admit that I wonder why anyone would answer no (and some have). I don’t expect a detailed content inventory, but opening a website and flipping through a couple of pages seems like the least a candidate who is interested in writing them could do—if for no other reason than to see what they’d be getting themselves into.

As to the second one, that has always felt like a bit of a trap. For good reason! Here are some possibilities and the traps that lie within.

Their docs are great!

They might be. If so, that’s a good place to start and you should keep going. Saying, “they’re great!” and then smiling politely tells me you either didn’t look at them and don’t want to sound rude or you did but weren’t looking at them with a very critical eye.

What can you say if they really are impressive? Talk about what makes them great and be specific. Some things to talk about include (in no particular order):

  • The visual design: Is it attractive? Is it functional? Does it help you find the information on the page?
  • The content on the pages: Is it easy to read? Is there sufficient white space? Does it have illustrations? Does it have code samples? Can you understand it? …even if you’re not familiar with the topic?
  • The organization and navigation: Are they clear and helpful?
  • The performance: Is the site responsive?

These are some generic qualities that apply to almost any type of documentation. To get more specific, you’ll need to find out more about their audience and documentation goals.

Don’t assume you know what they’re trying to accomplish with their docs! (This is the mistake I used to make. Every time.)

Rather, this is where you turn the conversation back to them and have them describe things like their:

  • product goals
  • documentation goals
  • audience
  • product
  • competition
  • work/task scheduling

These are important aspects of the context in which the documents are produced and used, so you’ll want to know about them before continuing your critique.

The questions you ask can impress your interviewer as much as, if not more than, the answers you give.

Continue reading “What do you think of our docs?”

Reflections as I begin my third time on jury duty

Diagram of an API as a gear with connection points

Today I met with my co-jurors who’ll be judging this year’s DevPortal awards nominees with me in the coming weeks. The entrants in this year’s showcase represent an impressive effort on the part of the nominees, so we have our work cut out for us. This is my third year on the jury and I’m looking forward to this year’s entries.

What struck me as we kicked off this year’s evaluation today was the 15 different award categories this year–up from last year’s eight categories–and how the presentation of APIs to the world has changed over the years. What impresses me every year is the innovation applied to make this presentation effective and engaging.

Pronovix hosts this event and they’ve modeled this year’s 15 categories around the Maturity model for devportals, which describes these three dimensions of developer portal maturity.

  • Developer experience
  • Business alignment
  • Operational maturity

When I judged the entries last year, I approached it from a usability perspective–how easily could the customer do what they needed to do? From that perspective, the maturity model dimensions represent the usability of the site from the perspectives of different stakeholders involved with an API developer’s portal.

From the perspective of ease of use, developer experience represents how easy the site makes it for the reader to get value from the product. Operational maturity represents how easy it is for contributors to add value to the developer portal. Business alignment represents how easy the site makes it for the organization to realize value.

To be successful in today’s crowded API marketplace, a developer portal must serve all three of the stakeholders these dimensions represent. The maturity model dimensions reflect how APIs must do more than just provide access to a service.

Each year, the entrants in this competition get better and the competition gets even more difficult to judge. It’s clear that the entrants are taking notes and applying what they learn.