How you can make sense of your site analytics

If you’ve watched any of your website’s analytics, such as page views or unique visitors, you’ve probably seen something like this chart and wondered, what does that even mean?

Graph of DocsByDesign.com website traffic for 2020 showing a lot of variation.
DocsByDesign.com website traffic for 2020

I know that I have, and I studied this kind of stuff for my Ph.D. All this wiggly-squiggly! What’s going on?

I’ve seen this type of graph almost every time I’ve plotted website data for just about any developer doc site I’ve worked on, and I’ve wondered (and had management ask me), does this show anything we should be concerned about? For the longest time, I’ve answered with a shrug of some sort.

But now, I think there might be a way to make sense of this data.
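I won’t spoil the details here, but as a taste of the kind of first pass I mean, here’s a minimal sketch in Python (the file name and column names are made up for illustration) that smooths the daily wiggly-squiggly with rolling averages so any underlying trend has a chance to show itself:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical export of daily traffic; the file and column names are assumptions.
    traffic = pd.read_csv("site_traffic_2020.csv", parse_dates=["date"])
    traffic = traffic.set_index("date").sort_index()

    # A 7-day rolling mean irons out the day-of-week wiggle;
    # a 28-day mean hints at the longer-term trend.
    traffic["7-day avg"] = traffic["page_views"].rolling(7).mean()
    traffic["28-day avg"] = traffic["page_views"].rolling(28).mean()

    traffic[["page_views", "7-day avg", "28-day avg"]].plot(
        title="Daily page views, smoothed"
    )
    plt.show()

Even that small step separates the noise (the wiggle) from the signal (the trend), which is where the questions worth asking usually live.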

Continue reading “How you can make sense of your site analytics”

I love it when things just work

Bob Watson piloting a light plane on a sunny day as it approaches the runway to land

The image is a still frame from a video I pulled from my archive to edit, and it’s an example of things just working–I’m on final approach to a silky touchdown at Orcas Island airport.

In user experience parlance, they call that customer delight. I recently had some experiences as a customer that delighted me. It was amazing!

I hope that my readers get to experience similar delight when they read my docs. Let’s unpack these recent delights to see how they might help improve my writing.

The experiences

It really started with a recent disappointing purchase experience, but first some back story.

About 20 years ago, I used to edit videos, among other things. Back then, computers took a lot of tuning (i.e. money) to meet the processing demands of video editing and effects. After several software and hardware iterations, I finally had a system that had the industry standard software running on a computer that could keep up with the challenge of video editing.

With that, I could finally focus on the creative and productive side of editing without having to fuss with the computer all the time. It’s not that I minded fussing with the computer–after all, that’s what I had been doing all along to get to this state of functionality and reliability. Rather, I don’t like fussing with it when I have other things that I want to accomplish.

It was truly a delight to be able to focus on the creative and productive aspects of the job. Having reliable tools made it possible to achieve flow. If you’ve ever achieved that state, you know what I mean. If not, read Finding Flow: The Psychology of Engagement with Everyday Life by Mihaly Csikszentmihalyi.

Fast forward to this past week.

I finally upgraded my very-consumer-y video editor (Pinnacle Studio) to edit some home videos. I’d used an earlier version a few years back and I recall it having a pretty low learning curve for what I wanted to do. But my version was getting stale, and they were having a sale, so…

I paid my money, got my download, and was ready for the delight to begin!

Not so fast. There would be no delight today.

Continue reading “I love it when things just work”

Is there any more room for innovation in tech writing?

I was looking through some classic (i.e., old) examples of technical writing and noticed how the format of application programming interface (API) reference topics hasn’t really changed since these examples were published in 1988 and 1992.

Is this because there’s been no innovation in technical writing during the intervening 30-ish years or, perhaps, we’ve found something that works, so why change? Taking that a step further, if what worked 30 years ago still works, is there any more room for innovation in tech writing?

Here are a couple of examples that I found in my library of software documentation. (It’s a library of printed documentation so there’s not much in there from the 21st century.)

MS-DOS Encyclopedia (1988)

Reference topic from the MS-DOS Encyclopedia

The first example of classic documentation is from the Microsoft MS-DOS Encyclopedia (Ray Duncan, Microsoft Press, 1988), a 1,570-page collection of everything you’d need to know about programming for MS-DOS (v3.2) in 1988.

It starts with how MS-DOS was originally developed, continues with conceptual overviews of the different operating system functions, explains how to create common applications and extensions to the operating system, and closes with various reference topics, such as the interrupt example I included here. It’s a one-stop reference manual that would prepare any MS-DOS programmer or device-driver developer for successful coding experiences.

This 33-year-old encyclopedia presents information in a format that you can still see used today.

  • Overview
  • Conceptual content
  • How-to articles of common tasks
  • Reference topics on various aspects of the product
  • Cross references where they make sense

The content in the example reference pages that I included also follows a format that is still seen today.
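For illustration, here’s that familiar reference-topic shape rendered as a modern Python docstring. The function and its details are my own invention (inspired by the flavor of the encyclopedia’s interrupt references, not copied from any actual page), but the pattern of description, parameters, return value, and example should look familiar:

    def read_sectors(drive, start_sector, count):
        """Read logical sectors from a disk into memory.

        Description:
            Reads count sectors from the specified drive,
            starting at the given logical sector number.

        Parameters:
            drive (int): The drive to read from (0 = A, 1 = B, ...).
            start_sector (int): The logical sector to start reading at.
            count (int): The number of sectors to read.

        Returns:
            bytes: The data read from the disk.

        Example:
            >>> boot_sector = read_sectors(0, 0, 1)
        """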

Continue reading “Is there any more room for innovation in tech writing?”

What do you think of our docs?

In most technical writer interviews I’ve had for an organization with public-facing docs, I’ve been asked, “Did you get a chance to read our docs?” which they invariably follow with, “So, what did you think of them?” How should you answer? Here’s what I’ve learned, the hard way.

The answer to the first one should be a confident (and honest), “Yes!” For, hopefully, obvious reasons. When I’ve interviewed candidates, I have to admit that I wonder why anyone would answer no (and some have). I don’t expect a detailed content inventory, but opening a web site and flipping through a couple of pages seems like the least a candidate who is interested in writing them could do—if for no other reason than to see what they’d be getting themselves into.

As to the second one, that has always felt like a bit of a trap. For good reason! Here are some possibilities and the traps that lie within.

Their docs are great!

They might be. If so, that’s a good place to start, and you should keep going. Saying, “They’re great!” and then smiling politely tells me you either didn’t look at them and don’t want to sound rude, or you did but weren’t looking at them with a very critical eye.

What can you say if they really are impressive? Talk about what makes them great and be specific. Some things to talk about include (in no particular order):

  • The visual design: Is it attractive? Is it functional? Does it help you find the information on the page?
  • The content on the pages: Is it easy to read? Is there sufficient white space? Does it have illustrations? Does it have code samples? Can you understand it? …even if you’re not familiar with the topic?
  • The organization and navigation: Are they clear and helpful?
  • The performance: Is the site responsive?

These are some generic qualities that apply to almost any type of documentation. To get more specific, you’ll need to find out more about their audience and documentation goals.

Don’t assume you know what they’re trying to accomplish with their docs! (This is the mistake I used to make. Every time.)

Rather, this is where you turn the conversation back to them and have them describe things like their:

  • product goals
  • documentation goals
  • audience
  • product
  • competition
  • work/task scheduling

These are important aspects of the context in which the documents are produced and used, so you’ll want to know about them before continuing your critique.

You impress your interviewer as much with the questions you ask as with the answers you give, if not more.

Continue reading “What do you think of our docs?”

Reflections as I begin my third time on jury duty

Diagram of an API as a gear with connection points

Today I met with my co-jurors, who’ll be judging this year’s DevPortal Awards nominees with me in the coming weeks. The entries in this year’s showcase represent an impressive effort on the part of the nominees, so we have our work cut out for us. This is my third year on the jury and I’m looking forward to this year’s entries.

What struck me as we kicked off this year’s evaluation was the 15 different award categories–up from last year’s eight–and how the presentation of APIs to the world has changed over the years. What impresses me every year is the innovation applied to make this presentation effective and engaging.

Pronovix hosts this event and they’ve modeled this year’s 15 categories around the Maturity model for devportals, which describes these three dimensions of developer portal maturity.

  • Developer experience
  • Business alignment
  • Operational maturity

When I judged the entries last year, I approached them from a usability perspective–how easily could the customer do what they needed to do? Viewed that way, the maturity model dimensions represent the usability of the site for the different stakeholders involved with an API developer portal.

From the perspective of ease of use, developer experience represents how easy the site makes it for the reader to get value from the product. Operational maturity represents how easy it is for contributors to add value to the developer portal. Business alignment represents how easy the site makes it for the organization to capture value.

To be successful in today’s crowded API marketplace, a developer portal must serve all three of the stakeholders these dimensions represent. The maturity model dimensions reflect how APIs must do more than just provide access to a service.

Each year, the entrants in this competition get better and the competition gets even more difficult to judge. It’s clear that the entrants are taking notes and applying what they learn.

Filmmaking lessons that improved my technical writing

Bob sitting next to 16mm movie camera
Cinematographer Bob on location

Some time ago, I was a filmmaker. Honestly, I wasn’t especially good at it. I wasn’t bad, just OK. While I enjoyed the work, using my checkbook balance as a metric, I wasn’t good enough at it to make a living. Because of that, I’m a technical writer.

Filmmaking, however, taught me a lot about technical writing, so I thought I’d share a few of the lessons I learned.

A high-quality film is not the same as a good film

I’m sure you can recall a film that was awful. It could have had excellent lighting, exposure, audio, soundtrack, etc., but you still wonder how you’ll ever get back those 90 minutes of your life. There are many excellent technicians in the film industry who produce technically high-quality material. And yet, somehow, all that high-quality material can still result in a film that is painful to endure.

What I learned was that ALL the elements of a film must work towards the goal of telling the story or the film doesn’t work. It’s surprisingly binary. Just being good in a few categories is rarely enough to carry a film.

Except for the story, being good in one aspect rarely makes up for being bad in another. I was a filmmaker before YouTube and home-made videos–around the time reality shows started becoming popular. The importance of story over technical quality (caveat: the audio must always be acceptable) was clear. Since then, what you see trending on YouTube should convince you that story is still king.

With technical docs, I see a lot of concern over “technical” quality, such as spelling, language, vocabulary, and bugs fixed. I’m not saying these elements aren’t important. But, if you’re not telling your audience what they want to know, how well it’s spelled isn’t going to matter. I was surprised to see a similar discounting of technical quality in my dissertation study (see API reference topic study – summary results). The problem, unfortunately, is that it’s easier to count these technical qualities, so it’s deceptively attractive to equate technical quality with document quality or utility. Technical quality might be a factor in utility, but it’s not a proxy for it.

Don’t give the audience a reason to leave

A film must tell a story in a way that keeps the audience wanting to know what’s next. Whether the film is a 10-second commercial or a 90-minute feature, crafting a story in such a way is a skill that takes practice and a knowledge of your audience. It’s an aspect of the film that starts with the script and must be supported all the way through production, editing, and release. It’s one of those things that, if not done well, can result in one of the bad examples I referred to in the previous lesson.

Continue reading “Filmmaking lessons that improved my technical writing”

Documentation metrics, again?!

A digital voltmeter being prepared to measure a web page.
If only this were all you needed to measure a web site.

To get back into the blogging habit, I thought I’d rummage through some of my earlier posts to see if there might be something I could recycle. (How does the saying go? Good designers copy, great designers steal?) Because the topic of documentation metrics came up recently at work, I thought I’d start there and see if I had said anything about that before.

It turns out, I’d written a post or two on the subject, so let the theft–er, recycling–commence!

In The answer is Google Analytics—what was the question? I talk about how the web interaction model that Google Analytics (GA) is optimized for comes up short for a lot of user assistance content. True confession: GA is wired up to this site. In Google Analytics just makes me sad, I followed up with a summary of how that’s worked out for me. Hint: the title sort of gives that away.

The premise of conflicting models came from a paper that my Ph.D. advisor and I wrote about the challenges of collecting useful data about your non-funnel-oriented web content. I posted a blogified version of the published paper in the series on Readers goals. Basically, if the reader’s reason for reading an article is to accomplish a goal outside of the documentation, it’ll be difficult to measure from inside the documentation how the documentation helped with that goal.

I was on a roll four years ago: I continued with Measuring your technical content – Part 1, Part 2, and Part 3. Part 1 talks about asking the questions you want your analytics and analysis to answer, and about the instrumentation that can help make that happen. Parts 2 & 3 bring this exercise into sharper focus by trying it out on getting-started topics and tutorials, respectively. That thread addresses some of the challenges that you might encounter along the way in Measuring your technical content – What about…?
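To give a flavor of that exercise, here’s a minimal sketch (the page paths and data layout are made up) of the kind of question Parts 2 and 3 ask of tutorials: of the readers who start one, how many reach the last step?

    import pandas as pd

    # Hypothetical page-view export; the columns are assumptions for illustration.
    views = pd.read_csv("docs_pageviews.csv")  # columns: visitor_id, page

    started = views.loc[views["page"] == "/tutorial/step-1", "visitor_id"].nunique()
    finished = views.loc[views["page"] == "/tutorial/step-9", "visitor_id"].nunique()

    # A rough completion rate: unique visitors who reached the end,
    # divided by unique visitors who started.
    print(f"Tutorial completion: {finished / started:.0%}")

It’s a crude measure, to be sure, but it answers a question you actually asked, which beats staring at a page-view chart and guessing.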

Continue reading “Documentation metrics, again?!”

Tap, tap…is this on?

I’m back at the blog after a year, five months, and a couple of days. Since this is my first blog post in a while, please be patient. It might be a bit rough.

How time flies. To catch up…

I’ve been working as a technical writer since last January. Although I enjoyed working as an academic, it became clear that academia and I weren’t really cut out for each other. So, I’m back in the Pacific Northwest writing technical documentation for Amazon Web Services.

Returning to industry

It was almost as if I’d never left. I have to credit the amazing group of people I work with for making the transition a smooth one. Working in big tech is pretty much as I recall: lots to do, little time to do it—so, maybe not that different from academia.

While Amazon, in general, is famous for its short average tenure, on our team, the low average tenure is simply because we’ve been hiring new writers and we’re looking for more. We’ve got a lot of work to do! Good people, interesting work, decent pay…can’t complain, and did I mention that we’re hiring?

I’m finding the work interesting in all the dimensions that intrigue me: technical novelty, information design, and collaboration with engineering.

The part of the product I work on has been around for a while and is still growing, so addressing the information needs of both new and experienced users presents some interesting challenges–made even more interesting by the short schedules. The collaborative group of engineers, designers, and writers I work with, however, makes tackling these challenges both enjoyable and possible.

What’s new in the scholarly literature?

While catching up, I thought I’d see what’s new in the API documentation research. A quick (and by no means comprehensive) survey of the academic work that’s been published on API documentation since the list of New articles on API documentation I posted three years ago shows that software developers:

  • Still want better docs with more code samples.
  • Are still trying to create them with automation (instead of tech writers).
  • Are still not citing work from actual technical communication research.
  • Still don’t seem to do much research past surveys.

Sigh. Everything old is new again.

I’ll organize this more later (and hopefully change some of these initial impressions for the better). In the meantime, I’m waiting for the research that explores why this is still the case after more than a decade of research pointing out the issues (repeatedly). This 2018 paper by Murphy et al. is a step in the right direction, exploring some of the challenges that practitioners face when trying to design usable APIs–a prerequisite for making it easier to create usable API documentation. Maybe there’s a research opportunity for a similar study with technical writers.

Nevertheless, it’s good to see all the research being published on API documentation. It’s much better than the comparative handful of articles that were available on the topic when I started researching it in 2008.

Murphy, L., Kery, M. B., Alliyu, O., Macvean, A., & Myers, B. A. (2018, October). API designers in the field: Design practices and challenges for creating usable APIs. In 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) (pp. 249–258). IEEE.

The documentation cliff

For the past couple of months, I’ve been refactoring the piClinic Console software to get it ready for this summer’s field tests. Along the way, I encountered something I’d seen before, but never really named, until recently.

The documentation cliff.

A documentation cliff is where you get used to a certain level of documentation quality and support as you embark on your customer journey with a new API, and then, somewhere along the way, you realize that level of support has disappeared. And there you are, like Wile E. Coyote, floating in midair, looking back at the cliff and down at where you’re about to fall in the next instant.

Just kidding. What really happens is that you realize that your earlier plans and schedule have just flown out the window and you need to refactor the remainder of your development plan. At the very least, it means you’re going to have some uncomfortable conversations with stakeholders. In the worst-case scenario, you might need to re-evaluate the product design (and then have some uncomfortable conversations).

Most recently, this happened to me while I was using Postman to build unit tests for the piClinic Console software. I don’t want this to sound like I don’t like Postman–quite the contrary. I love it. But that just makes the fall from the cliff hurt that much more.

How I got to the cliff

In my case, the tool was easy to get started with, the examples and tutorials were great, and the online information was helpful–all the things that make for a very productive onboarding experience. So, I onboarded myself and integrated the product into my testing. In fact, I made it the centerpiece of my testing.

Continue reading “The documentation cliff”

If we could only test docs like we can test code

Postman logo

As I continue to catch up on delinquent and neglected tasks during the inter-semester break, I’ve started porting the software from last year’s piClinic Console to make it ready for prime time. I don’t want to have any prototype code in the software that I’ll be leaving in the clinics this coming summer!

So, module by module, I’m reviewing the code and tightening all the loose screws. To help me along the way, I’m developing automated tests, which is something I haven’t done for quite a while.

The good news about automated tests is they find bugs. The bad news is they find bugs (a lot of bugs, especially as I get things off the ground). The code, however, is noticeably more solid as a result of all this automated testing and I no longer have to wonder if the code can handle this case or that, because I’ll have a test for that!

With testing, I’m getting to know the joy that comes with making a change to the code and not breaking any of the previous tests and the excitement of having the new features work the first time!
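My tests live in Postman, but to give a feel for the kinds of checks I mean, here’s a minimal sketch of two such tests in Python, using pytest conventions and the requests library (the endpoint and fields are hypothetical; the actual piClinic API routes may differ):

    import requests

    BASE_URL = "http://localhost/api"  # hypothetical test server

    def test_lookup_unknown_patient_returns_404():
        # The API should fail cleanly when a record doesn't exist.
        response = requests.get(f"{BASE_URL}/patients/does-not-exist")
        assert response.status_code == 404

    def test_create_patient_echoes_the_record():
        # Creating a record should succeed and return what was stored.
        new_patient = {"last_name": "Test", "first_name": "Pat"}
        response = requests.post(f"{BASE_URL}/patients", json=new_patient)
        assert response.status_code == 201
        assert response.json()["last_name"] == "Test"

Each test states its expectation in one or two assertions, so when one fails, it points at exactly what stopped being true.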

I’m also learning to live with the pain of troubleshooting a failed test. At any point in the test-and-development cycle, a test can fail because:

  1. The test is broken, which happens when I didn’t update a test to accommodate a change in the code.
  2. The code is broken, which happens (occasionally).
  3. The environment is broken. Some tests work only in a specific context, like with a certain user account or after a previous test has passed.
  4. Cosmic rays. Sometimes they just fail.

The challenge in troubleshooting these test failures is picking the right cause from the start, so you don’t break something that was actually working (while whatever was really broken hides that fact).

But, this is nothing new for developers (or testers). It is, however, completely foreign to a writer.

Here are some of the differences.

Continue reading “If we could only test docs like we can test code”