Looking back to look ahead

It’s been a while since I’ve contributed to my blog. Truth be told, I’ve been busy.

The day after my birthday in 2023 was when I found out that, along with 12,000 of my now-former coworkers, I no longer had a job with Google.

I remember the feeling when I got the news (by email, of course). I felt like I did when I was learning to fly and the instructor cut the engine mid-flight. One minute you’re looking out the window, checking off the waypoints to your destination. The next, you’re looking for where you’re going to land, because you aren’t going to be flying for much longer.

For the non-pilots reading this, most light airplanes keep flying after an engine failure. They just don’t stay at the same altitude for very long.

Learning to fly taught me that the longer you ignore reality in those situations, the shorter your list of options becomes. So, with thousands of people in my industry being laid off (and, as we’d see in the months that followed, this was just the beginning), I concluded that my professional career as I had come to know it was over.

I was now gliding.

However, that’s a story for another post (or two).

A soft landing

The good news is that, as it is for pilots who prepare for the possibility of an engine failure, my wife and I have landed safely and we’re doing fine.

My post-career landing was softened by an opportunity to teach an API documentation course at the University of Washington’s Professional and Continuing Education school. I just wrapped up the third term last week, and it’s been a lot of fun.

However, the past two years have brought seismic shifts to technical writing, particularly in API documentation. Large language model (LLM) tools have reshaped how we approach documentation creation, analysis, and maintenance. As practitioners, we’re all grappling with the same fundamental questions:

  • How do we adapt our established practices?
  • What assumptions about our craft need revisiting?

Enter AI and the curriculum challenge

LLM tools have taken the world by storm in the past two years, and API documentation hasn’t been immune to their influence. So I’ve been working on an update to my API documentation course that integrates AI technologies and keeps the curriculum current.

The challenge isn’t just adding AI tools to the syllabus. It’s understanding how these tools change the fundamental nature of documentation work. What skills remain essential? What new competencies do we need to develop? How do we teach both the power and limitations of AI-assisted documentation?

As I update the API documentation course, I’ve been putting different AI tools to the test, with some rather interesting results.

I’ve been using my own writing as fodder for the AI use cases. I picked my content mostly because I know the copyright holder, but also because I have enough of it to see how the AI behaves under different circumstances and to gather material for the course update.
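You can run these experiments entirely in a chat window, but if you want repeatable comparisons across a pile of your own files, the same request is easy to script. Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name, prompts, and file name are illustrative stand-ins, not a record of the exact setup behind the course material.

    # A minimal, chunked summarization experiment against the OpenAI API.
    # Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
    # The model name, prompts, and file name below are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def summarize(text: str, chunk_chars: int = 12000) -> str:
        """Summarize each chunk of a long document, then summarize the summaries."""
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        partials = []
        for chunk in chunks:
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[
                    {"role": "system", "content": "Summarize the key points of this excerpt."},
                    {"role": "user", "content": chunk},
                ],
            )
            partials.append(reply.choices[0].message.content)
        combined = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Combine these partial summaries into one overview."},
                {"role": "user", "content": "\n\n".join(partials)},
            ],
        )
        return combined.choices[0].message.content

    if __name__ == "__main__":
        with open("dissertation.txt", encoding="utf-8") as f:  # hypothetical source file
            print(summarize(f.read()))

Running the same script against several documents, or several models, makes it easier to compare behavior you’d otherwise only notice anecdotally in a chat session.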

Testing assumptions with evidence

One thing that even the AI tools themselves describe as a strength (an assertion that’s held up well under experimentation) is text analysis. So I thought I’d see what one could make of my dissertation. That’s certainly a large bucket of text!

Ironically, I’m using AI to analyze research about human behavior in documentation contexts. The tool that promises to change how we create documentation is helping me understand how humans use it.

Here’s how it summarized the findings from my dissertation:

Study Overview

Research Question: How do visual-design element count (VDEC) and information-concept element count (ICEC) affect software developers’ speed and accuracy in assessing API reference topics, plus their perceptions of credibility and professionalism?

Method: Remote, unmoderated online study with 201 software developers from 30 countries, testing 4 tasks across different API topic variations.

Key Findings

Performance Effects

  • Visual design had NO significant impact on relevance-decision speed or accuracy
  • Information content DID matter: Topics with fewer information concepts (low ICEC) were assessed ~20% faster (38.8 vs 48.0 seconds)
  • Accuracy was mixed: High ICEC topics were more accurately assessed when relevant to the task (89.7% vs 75% correct)

Perception Effects

Both visual design and information content significantly affected how developers perceived the documentation:

  • High VDEC topics rated as more credible and professional-looking
  • High ICEC topics also rated as more credible and professional-looking

Surprising Insights

  1. Developers read more than skim: Average 43+ seconds suggests thorough reading rather than quick scanning
  2. Visual design expectations vs. reality: Despite web design best practices emphasizing visual elements for skimming, they didn’t improve performance in this context
  3. The “less is more” trade-off: Shorter content improved speed but hurt accuracy for relevant topics and reduced credibility perceptions

Implications

  • For practitioners: Focus on information content over visual design for performance gains, but consider both for credibility
  • For researchers: Challenges common assumptions about web content being “skimmed” in technical contexts
  • For the field: Demonstrates need for context-specific testing rather than applying general web design principles universally

The study essentially found that software developers treat API documentation differently than typical web content – they read thoroughly regardless of visual design, making content quantity the primary performance factor.

Not bad for a summary of a 250+ page paper. It might have been nice to also include some of the conditions and qualifications of the study. For example, this was a study of just a few topics, and its findings shouldn’t be generalized to every topic in every documentation set.

Nevertheless, I still feel strongly that some of the takeaways identified in the study continue to hold true today, at least until another study provides evidence to the contrary.

What this means for our practice

The AI-generated summary captures the core findings, but the real value lies in understanding what these insights mean for how we approach developer documentation today, especially as AI tools become more prevalent.

There’s still a lot of room for further study

(I know, that’s what an academic would say.) The emergence of AI-assisted documentation tools makes this research gap even more critical. How do developers interact with AI-generated documentation? Do they trust it differently than human-authored content? How does the availability of AI tools change their information-seeking behaviors?

We’re making decisions about AI integration based on assumptions about developer behavior that may not hold true in the novel circumstances that AI-supported documentation might offer. The field needs more research specifically examining how AI tools affect the developer documentation experience, not just how they affect the creation process.

The biggest (and least popular) takeaway

The most important insight from this research journey isn’t about visual design or information density. It’s about the critical importance of challenging our assumptions with evidence.

As AI tools promise to solve our “documentation challenges,” we need a way to test whether such claims rest on vendor interest and general enthusiasm or on rigorous evidence of their effectiveness in our specific contexts.

The research process taught me that the most valuable insights often come from questioning what “everyone knows” to be true. In a field evolving as rapidly as ours, maintaining that healthy skepticism while remaining open to new possibilities isn’t just good practice; it’s essential for serving our users effectively.

Looking ahead

That “engine failure” moment in 2023 forced me to reconsider everything I thought I knew about my career and the documentation field. Teaching has given me a front-row seat to how the next generation of documentation professionals is thinking about these challenges. Working with AI tools has shown me both their potential and their limitations.

What’s most striking is how many of the fundamentals of this niche of technical writing are not well tested, even as new tools promise to revolutionize it. The research questions that seemed purely academic a few years ago now feel urgent and practical.

  • How do our users interact with the documentation?
  • What do they really need from our documentation?
  • How can we measure whether our efforts are making a positive difference?

As in the gliding analogy: we can’t control the external forces shaping our field, but we can control how we respond to them. The longer we ignore the reality of our situation (the rapid pace of change, the gaps in our knowledge, the need to adapt our practices), the shorter our list of options becomes.

The good news is that we’re not flying blind. We have tools for understanding user behavior, methods for testing our assumptions, and a growing body of research to guide our decisions. We just need to use them more consistently and share what we learn more openly.

After all, the best documentation has always been about helping users accomplish their goals efficiently and effectively. The tools may change, but that fundamental purpose remains constant. Everything else is just details we can figure out together.
