What Makes Documentation AI-Ready: Media

In previous posts, I’ve explored how structural principles, writing quality, and code example pedagogy that serve human readers also serve AI tools. Those posts focused on text-based content found in all technical documentation.

This post addresses media: images, videos, diagrams, and audio. These elements appear less frequently in API documentation than in other technical content, but they’re valuable for tutorials, demonstrations, system architecture explanations, and physical procedures.

The patterns that make media accessible to AI tools aren’t new requirements. They’re the same accessibility standards that have served vision-impaired readers, users who have difficulty hearing audio content, and people with cognitive disabilities for decades. Once again, the fundamentals endure.

What accessibility standards tell us about media

The Web Content Accessibility Guidelines (WCAG) have provided clear requirements for accessible media since the late 1990s. These requirements weren’t created for AI—they were created for people who can’t see images, can’t hear audio, or need alternative ways to process information. To address these cases, the guidelines recommend:

For images:

  • Provide text alternatives (alt text) that describe content and function
  • Include longer descriptions for complex images like diagrams
  • Don’t rely on color alone to convey information
  • Ensure text in images is also available as actual text

For video:

  • Provide synchronized captions for deaf and hard-of-hearing users
  • Provide audio descriptions for blind and vision-impaired users
  • Provide transcripts that include both dialogue and important visual information
  • Identify speakers in multi-person videos

For audio:

  • Provide text transcripts
  • Identify speakers
  • Note non-speech sounds that convey meaning

These requirements are the result of decades of refinement based on how people with disabilities use content.

How AI tools process images

I asked Claude how it processes images in documentation. The patterns it described align exactly with accessibility requirements.

Continue reading “What Makes Documentation AI-Ready: Media”

What Makes API Documentation AI-Ready: Code Examples

In previous posts, I explored how structural principles and writing quality standards that serve human readers also serve AI tools. Those posts focused on the prose found in all technical documentation. API documentation presents an additional challenge: teaching concepts through code examples.

I asked Claude what makes code examples easy for an AI tool to process. The patterns it identified align with established principles from instructional design and technical communication pedagogy. Once again, the fundamentals endure—whether you’re teaching human developers or enabling AI tools to extract meaning from documentation.

Code presentation patterns that help processing

Claude identified the following helpful patterns for presenting code in documentation. I’ve added comments in italics to Claude’s responses for additional context and clarification.

Syntax highlighting with language specified

```javascript
// Language tag makes intent clear
const result = await fetchData();
```

versus unmarked blocks that require the reader, or the AI tool, to infer the language

Complete, runnable examples

  • Includes necessary imports and dependencies
  • Shows actual input and expected output
  • Avoids fragments with ellipses everywhere

I discuss the nuance of this later in the article.
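
Here’s a minimal sketch of what that completeness can look like in practice. The base URL, the /users endpoint, and the API_TOKEN variable are hypothetical stand-ins, not part of any specific API:

```javascript
// A complete example: setup, the call itself, and the output a reader should expect.
// The base URL, the /users endpoint, and API_TOKEN are hypothetical stand-ins.
const baseUrl = "https://api.example.com";

const response = await fetch(`${baseUrl}/users?limit=1`, {
  headers: { Authorization: `Bearer ${process.env.API_TOKEN}` },
});
const users = await response.json();

console.log(users);
// Expected shape: [{ "id": "usr_123", "name": "Ada Lovelace" }]
```

Even a sketch this small covers all three points from the list above: the setup a reader needs, the actual call, and the output they should expect.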

Inline comments explaining non-obvious logic

```python
# Convert UTC to user's local timezone before display
local_time = utc_time.astimezone(user_tz)
```

Variable names that indicate type and purpose

  • userConfig vs data
  • isAuthenticated vs flag
  • errorMessage vs msg
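
A small, self-contained sketch makes the difference visible. The session object and the messages here are invented for illustration:

```javascript
// Names that signal type and purpose, contrasted with vague alternatives.
// The session object is a made-up stand-in for whatever your API returns.
const session = { token: "tok_abc123" };

const isAuthenticated = Boolean(session.token);   // clearer than `flag`
const errorMessage = isAuthenticated
  ? ""
  : "Session token is missing or expired.";       // clearer than `msg`

console.log({ isAuthenticated, errorMessage });
```

Each of these names still reads correctly when the line is excerpted without its surrounding context.
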
Continue reading “What Makes API Documentation AI-Ready: Code Examples”

What Makes Documentation AI-Ready: Quality Writing

In my previous post, I explored how document structure principles that have served human readers for decades serve AI tools just as well. That post focused on structure and metadata, skipping the actual words within that structure.

I asked Claude what writing patterns make content easy or difficult to process. The principles it listed match the guidelines in the technical communication textbook I used when teaching undergraduates. The fundamentals endure, whether you’re writing for human readers or AI tools.

Text patterns that help processing

These patterns reduce ambiguity and make relationships between ideas explicit. They help both AI tools and human readers follow your logic and find the information they need.

Claude identified these helpful writing patterns, which align with established technical communication principles:

Clear topic sentences

  • First sentence of paragraphs that state the main point
  • Reduces ambiguity about what follows

Explicit connectives

  • “However,” “Therefore,” “For example,” “In contrast”
  • Signal logical relationships between ideas

Defined terms on first use

  • “React (a JavaScript library)” vs just “React”
  • Especially important for acronyms and jargon

Concrete nouns over pronouns

  • “The authentication token” vs “it”
  • Critical when text is excerpted or out of context

Consistent terminology

  • Using “user session” throughout vs switching between “session,” “user context,” “active connection”
Continue reading “What Makes Documentation AI-Ready: Quality Writing”

What Makes Documentation AI-Ready: Structure

It’s that time again when I refresh my API Documentation course for the next term. In the course description, I emphasize that I’m teaching foundation skills to write good API documentation, not the latest tools. Students need to use tools to produce documentation, so I teach how to learn new tools while teaching those foundation skills. They learn GitHub as the representative example: they experience both a working documentation tool and what foundation skills look like when instantiated as actual documentation.

After adding AI tools to the course last year, each refresh has taken more time. AI technology advances faster than I can teach the course, creating real tension: should I keep focusing on fundamentals, or chase the latest AI capabilities?

A recent Write the Docs discussion gave me the answer I needed. The conversation started with an article promoting DITA as the solution to produce AI-readable documentation. It sparked debate about tools and approaches and revealed something more fundamental: AI tools don’t need anything new to read documentation. They need the same structural principles that information scientists and technical writers have relied on for decades.

The fundamentals aren’t outdated. They’re just being rediscovered by each new technology.

What the discussion revealed

An article titled Why AI Search Needs Intent (and Why DITA XML Makes It Possible) kicked off the Write the Docs discussion by promoting DITA (Darwin Information Typing Architecture) as the solution for making documentation AI-readable. The article’s premise is solid: users often can’t articulate what they need, and structured content helps guide them to answers.

The consensus of our discussion: structure is definitely important, but we were less certain that DITA is required to provide that structure. DITA does enforce topic structure and information architecture, but it requires significant overhead in authoring tools, training, and workflow changes. I’ve used DITA-like tagging at several organizations. It’s not trivial. Other approaches can achieve similar results: a Markdown-based system with consistent templates and frameworks like Diataxis, for example.

Continue reading “What Makes Documentation AI-Ready: Structure”

Reflections on my first AI-enhanced API documentation course

Now that the last quarter has ended and the holidays are behind us, I finally have a chance to reflect on the API documentation course I taught last fall. Last summer, I integrated AI tools into the course that I’d taught for two years. What I discovered: when students can generate content faster with AI tools, your evaluation process needs to keep pace, or you won’t catch problems until students have already repeated them multiple times. I suspect this applies to any course where instructors encourage AI use without rethinking their feedback loops.

I encouraged students to use AI tools, and they did. They were generating content faster than in previous course iterations. What I didn’t anticipate: they hadn’t completely installed the linting tools that were part of the authoring system. Their use of AI tools let them produce assignment after assignment without applying the linting tools that should have caught basic errors. In one module with three assignments, more than two-thirds of students submitted work with style violations, such as lines exceeding 100 characters and missing blank lines around headings. Their local linters should have flagged these immediately, but the linters weren’t installed or weren’t working correctly, and I didn’t discover this until after they’d submitted all three assignments with the same problems repeated.

In previous iterations of the course without AI tools, students generated content slowly enough that I’d evaluate early assignments and catch tool issues before they could compound. Adding AI tools to the mix changed the velocity assumptions. Because of the additional AI-related curriculum, I grouped these three writing assignments into a single module. By the time I evaluated that module and saw one error, it had already been repeated three times—per student.

Continue reading “Reflections on my first AI-enhanced API documentation course”

Tech writing: dealing with changes for centuries

[Image: 15th-century technical writers in a library]

The panic is familiar. New technology arrives, threatens to automate away our jobs, and suddenly everyone’s scrambling to figure out what skills will matter in five years. Sound like the current AI conversation in technical writing circles?

Rahel Anne Bailie posted a summary of changes in technologies that technical writers have dealt with over the past few decades. But technical writers have been navigating this exact disruption for centuries.

Think about it. In 500 years, anyone reading the words of technical writers today will wonder what the obsolete vocabulary means and what marvels a document titled “Installing Ubuntu” might hold. Who will remember Ubuntu in 500 years?

Now flip it around. Look at documents from alchemists 500 or 1,000 years ago and think about who wrote them. Some were written by subject matter experts, others by professional writers skilled at applying the tools of their day to record the knowledge of their day.

The pattern repeats: media evolves from stone tablets and chisels to quill pens and parchment, to movable type and printing presses, to desktop publishing and websites. The tools change.

What actually stays constant

Over the centuries, tech writers have been learning the technology they are documenting (alchemy, radar, APIs, what have you) and writing to specific audiences (wizards, technicians, software developers, and so on). Names change, but the guiding principles have changed very little.

What remains important to remember, and communicate, is the value that the scribes and writers bring to the customer experience.

Why the AI panic misses the point

AI represents another tool shift, not a fundamental change in what technical writers do. The introduction of AI into the field came on a bit like a bull in a china shop, with the initial message along the lines of “Outta my way! AI is here to save the day!” Now that the dust has settled, we can see that the real question isn’t whether AI will replace technical writers—it’s which technical writers will adapt their skills to work effectively with AI, just as previous generations learned to work with desktop publishing, content management systems, and web technologies.

Continue reading “Tech writing: dealing with changes for centuries”

Tips for conducting documentation research on the cheap

In my previous post, I presented some experiences with testing and the resulting epiphanies. In this post, I talk more about the process I applied.

The process is simple, yet that’s what makes it difficult. The key to success is to take it slow.

The question

Start with something simple (and then simplify it). Your first questions will invariably be too big to answer all at once, so think, “baby steps.”

Instead of asking, “How can we improve our documents?” I asked, “What do users think of our table of contents (ToC)?” Most users don’t care about how we can improve our docs, unless they’re annoyingly bad, so they don’t give it much thought. They do use the ToC, as we found out, but we learned that it wasn’t in a way we could have counted on.

The sample

Whoever you can get to sit with you. Try to ask people who are close to your target audience if you can, but anyone who is not you, or in your group, is better than you at helping you learn things that will answer your question.

The process

Listen with a curious mind. After coming up with an answerable question, listening is the next hardest thing to do, especially when people are reviewing something that you had a hand in writing or making.

Your participants will invariably misinterpret things and miss the “obvious.” You’ll need to suffer through this without [too much] prompting or cringing. Just remind yourself those moments are where the learning and discovery happen (after the injuries to egos and knees heal, anyway).

When the participant asks for help, such as, “Where’s the button or link to do ‘X’?”, use a trick I learned from more experienced usability testers: ask them, “Where do you think it should be?” That way you learn something about the user experience, rather than just finishing the task without learning anything. If they’re still stumped, you can help them along, but only after you’ve learned something. Remember, you’re there to learn.

Continue reading “Tips for conducting documentation research on the cheap”

Documentation research requires more curiosity than money

Sure, money helps, but success doesn’t always correlate with dollars spent.

Here are a couple of examples that come to mind from my experience.

piClinic research

My favorite research success story (perhaps because it turned out well) occurred while I was researching the piClinic project. While on a medical mission to a rural clinic in Honduras, I saw a mountain of paper patient records with a lot of seemingly valuable information in them that could never be tapped. Clearly (to me) computerizing those records would improve things. I felt that, based on my first-hand experience, automating record storage would make it easier to store and retrieve the patient records.

It would, and later, it did.

But…

When I later actually sat down and interviewed the target users and watched what they did during the day and, more importantly, during the month, I learned that what I thought was their biggest obstacle, storage and retrieval, was not really a problem for them.

It turned out that the real time-consumer in their process was reporting the data to the regional health offices from these documents. Each month, each clinic would spend 2-3 days doing nothing but tabulating the activity of the clinic in their reports—something I hadn’t seen for myself in my earlier, more limited, experiences.

My assumption that storage was the problem to solve died during that research. So, I pivoted the design of the piClinic app to focus on reporting (as well as the storage and retrieval necessary to support that) to reduce their monthly reporting time from days to minutes.

Continue reading “Documentation research requires more curiosity than money”

I love it when things just work

[Image: Bob Watson piloting a light plane on a sunny day as it approaches the runway to land]

The image is a still frame from a video I pulled out of my archive to edit, and it’s an example of things just working: I’m on final approach to a silky touchdown at Orcas Island Airport.

In user experience parlance, they call that customer delight. I recently had some experiences as a customer that delighted me. It was amazing!

I hope that my readers get to experience similar delight when they read my docs. Let’s unpack these recent delights to see how they might help improve my writing.

The experiences

It really started with a recent disappointing purchase experience, but first some back story.

About 20 years ago, I used to edit videos, among other things. Back then, computers took a lot of tuning (i.e. money) to meet the processing demands of video editing and effects. After several software and hardware iterations, I finally had a system that had the industry standard software running on a computer that could keep up with the challenge of video editing.

With that, I could finally focus on the creative and productive side of editing without having to fuss with the computer all the time. It’s not that I minded fussing with the computer–after all, that’s what I had been doing all along to get to this state of functionality and reliability. Rather, I don’t like fussing with it when I have other things that I want to accomplish.

It was truly a delight to be able to focus on the creative and productive aspects of the job. Having reliable tools made it possible to achieve flow. If you’ve ever achieved that state, you know what I mean. If not, read Finding Flow: The Psychology of Engagement with Everyday Life by Mihaly Csikszentmihalyi.

Fast forward to this past week.

I finally upgraded my very-consumer-y video editor (Pinnacle Studio) to edit some home videos. I’d used an earlier version a few years back and I recall it having a pretty low learning curve for what I wanted to do. But my version was getting stale, and they were having a sale, so…

I paid my money, got my download, and was ready for the delight to begin!

Not so fast. There would be no delight today.

Continue reading “I love it when things just work”

The documentation cliff

For the past couple of months, I’ve been refactoring the piClinic Console software to get it ready for this summer’s field tests. Along the way, I encountered something I’d seen before, but never really named, until recently.

The documentation cliff.

A documentation cliff is where you get used to a certain level of documentation quality and support as you embark on your customer journey to use a new API and then, somewhere along the way, you realize that level of support has disappeared. And there you are, like Wile E. Coyote, floating in midair, looking back at the cliff and down at where you’re about to fall in the next instant.

Just kidding. What really happens is that you realize that your earlier plans and schedule have just flown out the window and you need to refactor the remainder of your development plan. At the very least, it means you’re going to have some uncomfortable conversations with stakeholders. In the worst-case scenario, you might need to re-evaluate the product design (and then have some uncomfortable conversations).

Most recently, this happened to me while I was using Postman to build unit tests for the piClinic Console software. I don’t want this to sound like I don’t like Postman; quite the contrary, I love it. But that just makes the fall from the cliff hurt that much more.

How I got to the cliff

In my case, the tool was easy to get started with, the examples and tutorials were great, and the online information was helpful: all the things that made for a very productive onboarding experience. So I onboarded myself and integrated the product into my testing. In fact, I made it the centerpiece of my testing.

Continue reading “The documentation cliff”