Developing software and driving off road

During my presentation at the API the Docs Chicago conference this week, I used an example from my time as a developer to describe a documentation cliff. I characterized it as driving along and encountering an obstacle, such as a muddy patch, that required extra effort to cross. Twenty-some years ago, I used to look for those situations to challenge my driving skills and my 1995 Land Rover Discovery, both of which were well suited to tackling them (however, the documentation of these events is still on paper and film, so I’m going to have to dig for some visual proof).

In the driving metaphor, to cross the challenge and continue on to the destination, I described employing the various tools on the vehicle, such as the four-wheel drive, “locking in the hubs” (a rather antiquated reference, these days), and pulling out the winch cable in addition to my driving skills. I was usually prepared (and often, over-prepared) for the challenges encountered in my off-road adventures, and if I wasn’t, the people I traveled with were. We usually arrived intact.

I never realized the depth of this experience as a metaphor for my software development experience until I heard myself describe it in my presentation. While off-roading, not only was determining the destination a challenge (are we going to the right place?), but so was choosing the route, the tools, the pace, and the requisite skills to actually complete the trip. The parallels to my software development experience snapped into focus. The challenge of determining the right thing to do, the right way to do it, and the right tools to get the job done as a software developer became too clear to ignore.

In my presentation, I described this experience as what customers encounter when they use incomplete documentation. For example, when you see detailed on-boarding content (Hello World, tutorials, and such), it’s easy to get a sense that the company cares about your success with the API—after all, look at what they are doing to get you started. But, running into the documentation cliff, or mud bog, to stick to the off-road driving metaphor, is what happens when a gap in the documentation abruptly halts your progress.

Continue reading “Developing software and driving off road”

The inverted funnel of API documentation

Click the image to download the PowerPoint slides, or download the PDF

I just got back from API the Docs, Chicago, 2019, where I had the pleasure of hearing some dedicated practitioners in the field of API documentation talk to about 100 equally dedicated practitioners about API documentation. While I was there, I gave this presentation. The video of the presentation isn’t available yet, so I have included a link to the slide deck and posted my edited notes below.


It was great to see so many people who shared a passion for making their customers successful through documentation. One of the aspects of technical writing that continues to motivate me is knowing that what I do is going to help a customer achieve and deliver their success.

My presentation was the last of the conference and, after hearing a lot of stories about every level of API documentation, I wanted to wrap things up by returning to the perspective of our customers and their journeys with our products.

Most recently, the customer journey has been more than an academic exercise to me, because, for the past several years, I’ve been developing an information system for limited-resource clinics in Honduras.

It’s been an incredible journey and one that has put me squarely in the customer’s seat with respect to API documentation. I have had to use a lot of software documentation to bring the project together, and this experience has brought into focus a lot of the research I’ve studied and conducted on API documentation over the years.

Throughout my journey with the piClinic Console, my students and I have been studying users and usage to develop and improve the system. In just a few months, we will install the systems for their first field tests in Honduran clinics. By the end of this summer, we’ll know how we did, we’ll know more about our users and our system, and our journey will continue.

That’s my latest journey.

I can say with complete confidence that software documentation played a critical role in helping get my project to and, ultimately, across the finish line. But, each of your customers has their own journey, and everyone at the conference was there to make it a successful one!

Continue reading “The inverted funnel of API documentation”

The documentation cliff

For the past couple of months, I’ve been refactoring the piClinic Console software to get it ready for this summer’s field tests. Along the way, I encountered something I’d seen before, but never really named, until recently.

The documentation cliff.

A documentation cliff is where you get used to a certain level of documentation quality and support as you embark on your customer journey to use a new API and then, somewhere along the way, you realize that level of support has disappeared. And, there you are, like Wile E. Coyote, floating in air, looking back at the cliff and looking down at where you are about to fall in the next instant.

Just kidding. What really happens is that you realize that your earlier plans and schedule have just flown out the window and you need to refactor the remainder of your development plan. At the very least, it means you’re going to have some uncomfortable conversations with stakeholders. In the worst-case scenario, you might need to re-evaluate the product design (and then have some uncomfortable conversations).

Most recently, this happened to me while I was using Postman to build unit tests for the piClinic Console software. I don’t want this to sound like I don’t like Postman—quite the contrary. I love it. But that just makes the fall from the cliff hurt that much more.

How I got to the cliff

In my case, the tool was easy to get started with, the examples and tutorials were great, the online information was helpful–all the things that made a very productive on-boarding experience. So, I on-boarded myself and integrated the product into my testing. In fact, I made it the centerpiece of my testing.

Continue reading “The documentation cliff”


Recently, my ham-radio hobby took a turn into the world of QRP (pronounced cue-are-pee), as the hobbyists call it. QRP is a world where portability and a low profile are preferred over big and powerful. The name comes from the set of three-letter abbreviations adopted by amateur radio enthusiasts to codify a variety of ham-radio situations; each code starts with the letter Q.

QRP means low power.

While General and Extra-class amateur radio operators are licensed to operate on the short waves (below a frequency of 30 MHz) using a transmitter power of 1,500 watts, power levels of 5 watts or less are used in the world of QRP. For comparison, AM broadcast stations transmit with up to 50,000 watts and FM broadcast stations transmit with up to 100,000 watts.
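Hams usually express power differences like these in decibels, a logarithmic unit. Here’s a quick sketch of that comparison (the function name is mine, not from any radio library):

```python
import math

def power_ratio_db(p1_watts, p2_watts):
    """Ratio between two power levels, expressed in decibels (10 * log10)."""
    return 10 * math.log10(p1_watts / p2_watts)

# A legal-limit 1,500-watt station vs. a 5-watt QRP station:
print(round(power_ratio_db(1500, 5), 1))   # 24.8 dB
# A 50,000-watt AM broadcast station vs. the same QRP rig:
print(round(power_ratio_db(50000, 5), 1))  # 40.0 dB
```

In other words, the legal-limit station is almost 25 dB, or 300 times, louder than a QRP signal, which is why the whispering metaphor fits.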

QRP stations are essentially whispering.

What’s the attraction? Mostly that it’s more difficult. QRP stations can’t rely on output power alone to help push their signals into the world, so amateur radio operators like to credit their skill as being the differentiating factor. While there’s some truth to that, there are also many other factors that help the signals along the way.

One benefit of QRP stations is that they are very small and comparatively lightweight. The radios are generally battery-powered, easy to pack, and easy to carry. More traditional, home-based ham radio stations use a 100-watt radio, such as the one I use from my house. Unlike my lightweight, battery-powered QRP radio, my ham radio at home requires an AC power supply or a very large (i.e., heavy) battery. A so-called legal-limit, 1,500-watt station requires an electrical supply comparable to what you would use for an electric clothes dryer, making it something less than portable. For the “ham on the go,” QRP is an easy way to take your hobby on the road.

Because QRP operations are the ham-radio equivalent to whispering, not everyone can hear you. However, with the right combination of weather conditions, frequency, antenna, skill, and luck, 5 watts of radio-frequency energy can go quite a long way, even farther if the receiving station has a powerful antenna to help hear your whisper among the other stations.

My QRP station

My portable station, in the photo below, consists of:

Continue reading “Cue-Are-Pee?”

If we could only test docs like we can test code

Postman logo

As I continue to catch up on delinquent and neglected tasks during the inter-semester break, I’ve started porting the software from last year’s piClinic Console to make it ready for prime time. I don’t want to have any prototype code in the software that I’ll be leaving in the clinics this coming summer!

So, module by module, I’m reviewing the code and tightening all the loose screws. To help me along the way, I’m developing automated tests, which is something I haven’t done for quite a while.

The good news about automated tests is they find bugs. The bad news is they find bugs (a lot of bugs, especially as I get things off the ground). The code, however, is noticeably more solid as a result of all this automated testing and I no longer have to wonder if the code can handle this case or that, because I’ll have a test for that!

With testing, I’m getting to know the joy that comes with making a change to the code and not breaking any of the previous tests and the excitement of having the new features work the first time!

I’m also learning to live with the pain of troubleshooting a failed test. At any point during the test-and-development cycle, a test could fail because:

  1.  The test is broken, which happens when I didn’t update a test to accommodate a change in the code.
  2. The code is broken, which happens (occasionally).
  3. The environment is broken. Some tests work only in a specific context, such as with a certain user account or after a previous test has passed.
  4. Cosmic rays. Sometimes they just fail.
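My piClinic tests actually ran in Postman, but the third failure mode is easy to sketch in plain Python. Everything below is hypothetical (invented names, not the real piClinic API): the test fails not because the code is wrong, but because an earlier step didn’t set up the environment.

```python
import os

def lookup_patient(patient_id, session_token):
    """Hypothetical stand-in for an API call that requires an active session."""
    if not session_token:
        # An environment problem, not a code bug: no prior login step ran.
        raise PermissionError("no active session")
    return {"id": patient_id, "status": "active"}

# This "test" only passes when a previous test (a login step) has
# populated the environment with a token -- failure mode 3 above.
token = os.environ.get("CLINIC_SESSION_TOKEN")
try:
    record = lookup_patient(42, token)
    print("test passed:", record["status"])
except PermissionError as err:
    print("test failed:", err)  # the code is fine; the environment is broken
```

The same lookup call succeeds or fails depending on state left behind by other tests, which is exactly what makes this failure mode so easy to misdiagnose as a code bug.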

The challenge in troubleshooting these test failures is picking the right cause from the start, so you don’t “fix” something that was actually working while the real problem stays hidden.

But, this is nothing new for developers (or testers). It is, however, completely foreign to a writer.

Here are some of the differences.

Continue reading “If we could only test docs like we can test code”

Recent good, could-be-better, and ugly documentation experiences

During the “break” between semesters, I try to catch up on the tasks I’ve deferred during the past semester and get ahead of the tasks I know will come up during the coming semester. In the process, I’ve had these encounters with documentation that I’ve characterized into these categories:

  • GOOD: documentation doing what it should—making me (the customer) successful.
  • COULD-BE-BETTER: documentation that is well-intentioned, but needs a tweak or two.
  • UGLY: documentation that gives me nightmares.

Here goes.

Good documentation

These are the stories of documentation that made me successful. Being successful makes me happy. Good documentation should help make the reader successful.

Ford Motor Company. F-150 owner’s manual

The windshield wipers on my almost-three-year-old truck are the ones it came with from the factory—almost three years ago. Well past their expiration (and effectiveness) date. But, while I’ve changed car engines and gearboxes before, I hate changing wipers. They always have some clever (and obscure) trick to getting the old ones off and the new ones on. I especially hate it when they go flying off the car in a rainstorm. So, for all these reasons, I’ve procrastinated on changing them for far too long (as driving in recent wet weather has reminded me)—the replacements have actually been in my garage since…I actually can’t remember.

Continue reading “Recent good, could-be-better, and ugly documentation experiences”

I just can’t see myself as a customer

How often have you been shopping and, while looking at something that might be suitable, you just had the feeling that it wasn’t right? When shopping for houses last year, we had that feeling many times (too many times). They were, for the most part, all nice homes, but they just didn’t do “it” for us. Sometimes we could articulate the reasons, other times, it was something less tangible. In every case, except for the last one, of course, we walked away without becoming a customer.

We finished house hunting about two years ago and I hadn’t given that feeling a second thought until I came across a blog post about making the Build vs. Buy Decision. Point 4 of that post, whether it is cheaper to build than to buy, got me thinking about one of the value propositions that technical communication frequently asserts–to increase product adoption. To tie it back to house hunting, let’s call it making it easy for potential customers to see themselves using your product and becoming a customer instead of just a prospect.

Try it, you’ll like it!

In API marketing, one recommended method for promoting your API to potential customers is to provide an easy way for them to take a test drive: a sandbox, an accessible Hello World application or example, or tutorials that address common customer pain points. The idea is to get potential customers into the driver’s seat and let them take it for a spin. The lower the barrier to entry, the easier it is for them to see if this is the product for them and to see themselves as satisfied customers.

Continue reading “I just can’t see myself as a customer”

piClinic Console presented at IEEE GHTC

Image of piClinic Console prototype which consists of a monitor, keyboard, and mouse
piClinic Console prototype

I presented another paper about the progress made developing the piClinic Console at the Institute of Electrical and Electronics Engineers Global Humanitarian Technology Conference, or IEEE GHTC for short. The paper, titled Bridging the Gap between Paper Patient Records and EHR Systems with the piClinic Console, will soon hit the Web in the IEEE Xplore Digital Library (subscription required). This latest paper describes more about the technical aspects of the system and its development and complements the papers published earlier this year that describe how the project has been used to support educational goals (Using Independent Studies to Enhance Usability Assessment Skills in a Generalist Program) and foster international design collaboration (Enriching Technical Communication Education: Collaborating Across Disciplines and Cultures to Develop the piClinic Console).

The conference was the first real opportunity for me to discuss the project with others in the humanitarian technology field, which was both encouraging and discouraging at the same time. I was encouraged to hear that the idea still seems sound and the need was recognized by everyone with whom I talked. There’s really no question that the gap it is designed to fill is a real one. At the same time, I’ve been calibrating my expectations on how long things take, not only in the healthcare field, but with foreign government agencies as well. Those who travel regularly in these sectors will likely not see this as news, but, coming from high tech, my time scale needs some serious recalibration.

Wake-up call

At the conference, one researcher described a similar project (a very similar project) that he’s been working on for the past 17 years to get past the field-test stage (the stage my project is just starting to enter). He described how he’s had to navigate various health ministries across the African continent as well as EU funding agencies. While I’m [finally] realizing that such time frames are quite reasonable in this context, my high-tech instincts can’t help but think of what was going on 17 years ago and how much has changed. For example, in 2001:

Continue reading “piClinic Console presented at IEEE GHTC”

What’s your story?

For a few years, I tried my hand at filmmaking. I was OK at it; however, it was during a time when the competition was amazing. OK wasn’t good enough. Fortunately, I had a backup job and, after a couple of years of being OK at filmmaking, I returned to technical writing. I was surprised to find that after being a filmmaker for a while, my technical writing had, somehow, improved, even though I did absolutely zero technical writing as a filmmaker.

Here are some of the lessons I learned as a filmmaker that apparently had more value when applied to technical writing.

All that matters is what people see on the screen

As an independent filmmaker I watched many independent films. Those scars will be with me for the rest of my life (if you’ve watched indie films, you know what I mean; if you haven’t, you’ve been warned). Some of my films must have scarred others (sorry). I believe filmmaker Robert Rodriguez said in Rebel Without a Crew that you need to make several hundred films before you become a filmmaker. I would agree. I still had a few hundred more to go before I ran out of money.

Before watching others’ films, the directors (or, more often, director/producer/writer/photographer/stars) would give a brief commentary and, invariably, these commentaries included all the things that didn’t go as hoped or as planned—preparing us for what we were about to experience (i.e., lowering our expectations). While of some interest to the filmmakers in the audience, this information was irrelevant to anyone else.

What matters to the audience (i.e. those who might actually pay to see/rent/download the film) is what is on the screen. Period.

Tech writing takeaway: the reader will judge the content for what they see and not what you would like them to see.

Continue reading “What’s your story?”

What were they thinking?!

Design reviews and critiques are a favorite pastime of mine. While my righteous indignation at having to suffer a bad website can still evoke some pointed criticism, I’m trying to keep things in perspective.

I’ve talked about critiquing designs in the past, but here, I’m talking about those informal critiques of designs created or constructed by other people or organizations. You know, the ones that usually start with “WTF were they thinking!?!”

The objects of such informal critiques are legion. A search for design fails in Google returns 270,000,000 results to me (in 0.3 seconds, so it must be a popular search). I remember talking about some of these design fails in design classes, but I’ve been involved with designing, building, and shipping long enough to know these examples of design fails are team efforts.

Full disclosure, I still shout “WTF?!” from time to time in spite of the following introspection, but my hope is that reflecting on these points will help me view them with a little more perspective. I write this so that you might, as well.


Let’s start with that point when you’re using a website that is sporting the latest version of Gordian-knot navigation or a TV-remote-control interface and you’re wondering aloud how anyone could design such a mess, let alone ship it for what seems like the sole purpose of causing you grief. Of course, YOU would NEVER commit such a crime against humanity!

Would you?

Have you? (Be honest.)

If you’ve shipped more than one or two documents, products, websites, or you name it, then in all likelihood you, too, have published or shipped something that someone, somewhere, at some time will say “WTF?!” when they use it. It’s almost impossible not to, and here’s why.

Why do people ship bad designs?

Continue reading “What were they thinking?!”