The documentation cliff

For the past couple of months, I’ve been refactoring the piClinic Console software to get it ready for this summer’s field tests. Along the way, I encountered something I’d seen before but had never really named until recently.

The documentation cliff.

A documentation cliff is where you get used to a certain level of documentation quality and support as you embark on your customer journey with a new API, and then, somewhere along the way, you realize that level of support has disappeared. And there you are, like Wile E. Coyote, floating in mid-air, looking back at the cliff and down at where you’re about to fall in the next instant.

Just kidding. What really happens is that you realize that your earlier plans and schedule have just flown out the window and you need to refactor the remainder of your development plan. At the very least, it means you’re going to have some uncomfortable conversations with stakeholders. In the worst-case scenario, you might need to re-evaluate the product design (and then have some uncomfortable conversations).

Most recently, this happened to me while I was using Postman to build unit tests for the piClinic Console software. I don’t want this to sound like I don’t like Postman–quite the contrary, I love it. But that just makes the fall from the cliff hurt that much more.

How I got to the cliff

In my case, the tool was easy to get started with, the examples and tutorials were great, the online information was helpful–all the things that made a very productive on-boarding experience. So, I on-boarded myself and integrated the product into my testing. In fact, I made it the centerpiece of my testing.

As I refactored more code and wrote more tests, I could sense the cliff approaching. As test cases multiplied and I became more familiar with the tool and its capabilities, I was able to test a wider variety of results and conditions. All of my [albeit limited] experience with the tool gave me the impression that it would do what I wanted (and more), but it was getting harder to find the documentation that would help me realize that potential. Not a showstopper, yet, but I had the sense I’d been here before.

I have countless examples of documentation cliffs where it was easy to get started, but the documentation ran out before I had finished the project. Of course, the project doesn’t stop just because the documentation runs out. It just means the slope of the hill you have to climb to finish the project got a lot steeper.

And, over I go…

Finally, I got to the point where I knew there had to be a way to skip a test before it ran, but nowhere could I find any documentation that addressed that case. At this point, solving that particular problem isn’t the problem. The problem is that the tool is no longer as easy to use as I originally thought it would be, and it has now become much more expensive to use than I anticipated.
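One possible workaround, sketched below, is to register a test only when its precondition holds, using nothing beyond the documented pm.test, pm.expect, and pm.environment calls. The patientId variable name is made up for illustration, and inside Postman the script runs as plain JavaScript; the declaration at the top only keeps the sketch self-contained as TypeScript.

    // Sketch: conditionally register a Postman test instead of "skipping" it.
    // Postman's sandbox provides the pm global at run time.
    declare const pm: any;

    const patientId = pm.environment.get("patientId"); // hypothetical variable

    if (patientId) {
      pm.test("GET /patient returns the requested record", () => {
        pm.expect(pm.response.code).to.eql(200);
      });
    } else {
      // No test is registered, so the run reports one fewer test rather than a failure.
      console.log("Skipping patient tests: no patientId in this environment.");
    }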

Again, and don’t get me wrong, I still like the tool and I’m going to keep using it, but I have to re-evaluate the cost of using it. My original estimate of how much effort it would take to use it for my project needs recalibration to account for the apparent lack of documentation that has now become an obstacle to my progress.

What’s the problem?

The problem is that the documentation supported getting started but not getting finished. It covered the most basic use cases well enough to get going, but it didn’t provide the more specific details necessary to support applications beyond that–at least not in a way that is as visible as the getting-started content.

In this case, using the product as I’d like has gone from “it’s easy to find the answers and apply them” to “I’m going to have to start asking a lot of questions and doing a lot of reverse engineering.” Again, it’s not that this is bad in and of itself, it’s just that it isn’t what the initial experience led me to believe I would be getting into. Bait and switch might be too strong, but it feels a little bit like that.

To use an example from my talk at last year’s Write The Docs PDX conference, it looked like I had found a glove, but it turns out that I also need to knit some fingers.

I’d fallen off the cliff.

What causes the cliff?

It’s hard to say what caused this particular cliff, but these are some situations I’ve seen that have caused them in the past.

Resources

Resources, or a lack thereof, are usually the first suspect. Documentation resources are almost always scarce, so they are applied to the highest-priority (which often correlates strongly with the highest-visibility) topics first. By the time the writers finish those topics, the next crisis lands and they are off on the next set of high-priority topics. And so it goes. From the resource-allocation perspective, this makes sense: if you can’t do everything, at least do the highest-priority things. The problem is that this creates a certain amount of documentation debt that is easy to ignore.

Low visibility

The topics that, by their absence, create a documentation cliff don’t attract a lot of attention for a couple of reasons. First, if they haven’t been written, they don’t attract comments or feedback…or page views. Second, when they have been written, their low page view counts give the impression that few people, if any, read them.

The truth is, these topics are not read a lot. But, when they are read, they are read to solve a blocking problem for the reader, which makes those page views extremely valuable to the customer, even if the page views of any individual page are infrequent.

Reference topics typically fall into this group of low-visibility topics because their analytics are evaluated in a way that isn’t really appropriate to their use case. Unlike landing pages and tutorial topics, which are used individually, reference topics are used as a single topic spread over many pages. If you want to evaluate how reference topics are being used, it is more accurate to aggregate their analytics into logical topic groups, as in the sketch below.
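A minimal sketch of that aggregation, assuming you can export per-page view counts from your analytics tool (the paths and numbers here are invented):

    // Roll per-page view counts up into logical topic groups (here, the first
    // path segment after /docs/), so the reference is judged as one topic.
    const pageViews: Record<string, number> = {
      "/docs/getting-started": 5400,
      "/docs/tutorial/first-request": 3100,
      "/docs/reference/sessions": 140,
      "/docs/reference/patients": 95,
      "/docs/reference/errors": 60,
    };

    const grouped: Record<string, number> = {};
    for (const [path, views] of Object.entries(pageViews)) {
      const group = path.split("/")[2] ?? "other";
      grouped[group] = (grouped[group] ?? 0) + views;
    }

    console.log(grouped);
    // { "getting-started": 5400, "tutorial": 3100, "reference": 295 }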

Short-term metrics

From looking at the content I could find, it seems heavily focused on adoption, almost to the exclusion of all else. Because you get what you count, this leads me to believe that their internal metrics focus more on adoption than on longer-term customer success. Adoption is important, clearly, but it’s not the only metric.

Longer-term metrics might track feature usage, errors, retries, and so on to observe customer successes or failures in the real world. Some of this is almost automatic, and some of it requires instrumentation from the beginning, but it all works together to observe the customers’ progress from awareness to application. The problem with tracking a documentation-cliff event is that it’s hard to see using only web metrics: you can’t see the pages the reader couldn’t find to read, or the ones that don’t exist in the first place.
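As a rough illustration of the kind of instrumentation that has to be in place from the beginning, here’s a sketch of a product-side usage event; the event names and fields are invented for the example:

    // Hypothetical usage event emitted by the product itself, so longer-term
    // success (and failure) can be observed beyond the docs-site analytics.
    interface UsageEvent {
      feature: string;                        // which capability was exercised
      outcome: "success" | "error" | "retry";
      errorCode?: string;                     // populated when outcome is "error"
      timestamp: string;
    }

    function recordEvent(event: UsageEvent): void {
      // A real product would send this to its analytics pipeline;
      // logging stands in for that here.
      console.log(JSON.stringify(event));
    }

    recordEvent({
      feature: "export-report",
      outcome: "error",
      errorCode: "MISSING_DATE_RANGE",
      timestamp: new Date().toISOString(),
    });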

Samples and examples are higher priority (and more fun)

Sample code and example apps are certainly important, but they don’t solve the problems that reference topics do. Sure, they can demonstrate usage (an onboarding and introductory task), but they don’t show options, limitations, interactions, and other details that become visible and important as development progresses.

Honestly, as a writer, the samples and examples are much more fun to write and they make much better portfolio fodder than a collection of reference topics. I get that. (I don’t agree that’s how it should be, but I understand that it is what it is.) But, from the customer’s perspective, the usefulness of on-boarding topics to a single customer drops off quickly, unless, of course, the sample app is exactly what you wanted to release. However, in that case, you might want to rethink your product strategy.

How to avoid sending customers off a cliff

Here’s a checklist of things that can minimize the height of a possible documentation cliff:

  • Know the most common use cases and document them longitudinally.
    • From the Hello World case…
    • To the sample app…
    • All the way down to the related reference topics.
  • Know the parameters customers are likely to change and the variations they are likely to make as they adapt your tutorials and demos to their own app.
    • You don’t want to clutter the demo with all the caveats, but…
    • Make sure the related reference topics clearly explain the options that customers are likely to apply as they adapt the examples.
  • Have a usable API/product.
    • The height of the cliff can be reduced by having a well-designed interface and consistent design paradigm.
    • A consistent interface can support user inferences when a specific answer is not documented.
    • Don’t assume your API is usable until you’ve had real customers test it.

Wherever possible, try to reduce the height of the documentation cliff. If not having one isn’t an option (ask yourself why, but…), at least try to manage expectations from the beginning. I reviewed one documentation set that said, right at the beginning, something to the effect of “if you can’t build the code, you won’t understand the documentation.” That’s some good expectation management.

Consider where to put the documentation, too. Not all documentation has to live at help.myapp.com. Error messages might be the best place for it, depending on your API and your customers’ preferred coding style. The key is to meet your customers where they are, with the answers to the questions they have there.
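For example, here’s a sketch of an error payload that carries its own documentation pointer; the error code, message, and URL path are invented for illustration:

    // Hypothetical error response that points the developer at the relevant
    // reference topic instead of leaving them to search for it.
    interface ApiError {
      code: string;
      message: string;
      docs: string; // deep link into the reference documentation
    }

    const missingParameter: ApiError = {
      code: "PATIENT_ID_REQUIRED",
      message: "The 'patientId' query parameter is required for this request.",
      docs: "https://help.myapp.com/errors/PATIENT_ID_REQUIRED",
    };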

In any case, if you’re going to make it easy to get started, please make it easy to get finished. Nobody wins when the customer can’t finish what they started with your API.
