Is your documentation ready for AI?

AI just explained my own research back to me. I was surprised by how it surfaced the message I meant to convey 10 years ago, a message that had gotten lost in the academic genre of the original articles.

I’m working on how to teach AI tools in my API documentation course. As part of my research, I thought I’d feed some of my technical writing articles from about 10 years ago into an AI tool, along with some contemporary work from others. I asked the AI to compare, contrast, and summarize them from various angles.

The results were interesting enough that I had to write about them here.

When AI becomes your editor

One summary took an analysis I’d buried in academic prose and flipped it into something useful. It linked the different documentation types commonly found in API documentation to what readers are trying to accomplish:

Original version:

In Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites (2015), we proposed this list of goals that readers could hope to accomplish in an informational site (e.g. documentation). Examples of each goal were presented later in the 12-page article.

  • Reading to be reminded (Reading to do lite)
  • Reading to accomplish a task in a website (Reading to do here)
  • Reading to accomplish a task outside a website now (Reading to do now)
  • Reading to accomplish a task outside a website later (Reading to learn to do later)
  • Reading to learn (Reading to learn to use later or to apply with other information)

AI’s translation:

  • Recipes and examples for “reading to do now”
  • Topical guides for “reading to learn”
  • Reference guides for “reading to be reminded”
  • Support forums for edge cases and community help
  • Marketing pages for pre-usage evaluation

Then it got right to the point that I’d been dancing around for paragraphs:

Yet we typically measure them all the same way. Page views, time on page, bounce rate. That’s like using a thermometer to measure blood pressure. The tool works fine; you’re just measuring the wrong thing.

Ouch. But also: exactly.

The AI summary went on to suggest what matters for each content type:

  • Recipes: Task completion rates and copy-paste engagement rather than time on page
  • Learning content: Engagement depth scores and conceptual understanding indicators
  • Reference: Information location speed and “high-intent bounce rates” (quick exits after finding what you need)
  • Support forums: Resolution effectiveness and search-first success rates
  • Marketing: Qualified conversion rates and decision-maker engagement

It said in a bulleted list what I’d taken hundreds of words to communicate in the much more academic language of the article. This was a clear case of less being more. A lot more.
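
If you wanted to act on that mapping, it doesn’t take much to encode. Here’s a minimal sketch in Python, purely illustrative and not from the article or the AI summary, of how a reporting script might pair each content type with the metrics that fit it instead of defaulting to page views for everything:

  # Illustrative only: a hypothetical mapping from content type to the
  # metrics the AI summary suggested, so a reporting script could stop
  # applying page views and bounce rate to everything.
  CONTENT_TYPE_METRICS = {
      "recipes": ["task completion rate", "copy-paste engagement"],
      "learning content": ["engagement depth score", "conceptual understanding indicators"],
      "reference": ["information location speed", "high-intent bounce rate"],
      "support forums": ["resolution effectiveness", "search-first success rate"],
      "marketing": ["qualified conversion rate", "decision-maker engagement"],
  }

  def metrics_for(content_type: str) -> list[str]:
      """Return the metrics worth tracking for a given content type."""
      return CONTENT_TYPE_METRICS.get(content_type, [])

  print(metrics_for("reference"))
  # ['information location speed', 'high-intent bounce rate']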

The practitioner-researcher divide (again)

This got me thinking about a persistent frustration in our field. I asked the AI to look at the gap between academic research and what practitioners do. Its response hit uncomfortably close to home:

[In] our industry: we prioritize simple, immediately applicable solutions over systematic methodologies, even when research suggests comprehensive approaches produce better long-term outcomes.

Having been a practitioner for more than a few years, I can relate. We’re always under deadline pressure, always needing something that works right now. But the AI’s observation stung because it’s true—and because I’ve been part of the problem.

The suggested solution was surprisingly practical:

The most successful research-to-practice transitions involve creating simplified entry points for complex methodologies. This means lightweight implementations with templates, tool integration, and demonstrated ROI case studies. Academic validation provides theoretical foundation; practical implementation guidance bridges the gap between research insight and everyday application.

The Diátaxis model is a good example of a framework that navigates this transition successfully.

What this means for documentation (and AI)

If AI can summarize and clarify our work this effectively, what does that mean for technical writers?

The immediate practical application is clear: document comparison and summarization are genuinely useful tools for technical writers. But we need to keep the AI honest about its sources and not let it hallucinate connections that aren’t there.

The bigger question is whether we’re ready for AI to accelerate our field’s evolution. Forty years ago, a database instructor of mine said, “Automation accelerates whatever direction you’re already heading. Sound processes get better; broken ones fail faster.”

That’s as true today as when I first heard it. Perhaps more so.

Why documentation analytics are more important than ever

If you haven’t figured out how to measure documentation effectiveness manually, you’re not ready to automate it with AI.

We still don’t have an agreed-upon framework that tells us whether documentation is doing what we’re paying it to do. We’re still arguing about page views versus task completion, still treating all content the same way, still measuring activity instead of outcomes.

If we do nothing else, we should at least tag each topic (or have the AI topic generator tag the content it generates) to identify its goal. If a topic has no identifiable goal, it’s worth asking whether the topic is even necessary. With that tagging in place, the documentation will be ready for analysis after the fact.
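
To make that concrete, here’s a minimal sketch of what a goal-tagging audit might look like in Python. The paths and goal labels are hypothetical, and the goals echo the reader goals listed earlier:

  # A minimal sketch, assuming each topic carries lightweight metadata
  # with a "goal" field. Paths and goal labels are hypothetical.
  from collections import defaultdict

  topics = [
      {"path": "recipes/pagination.md", "goal": "reading to do now"},
      {"path": "guides/auth-concepts.md", "goal": "reading to learn"},
      {"path": "misc/old-notes.md", "goal": None},  # no identifiable goal
  ]

  # Topics with no identifiable goal are candidates for a closer look.
  unclear = [t["path"] for t in topics if not t["goal"]]
  print("Topics to reconsider:", unclear)

  # Grouping by goal sets up after-the-fact analysis: each goal can then
  # be measured with the metrics that fit it.
  by_goal = defaultdict(list)
  for t in topics:
      if t["goal"]:
          by_goal[t["goal"]].append(t["path"])
  print(dict(by_goal))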

The rush to apply AI tools to documentation without a valid feedback mechanism, such as reliable analytics on content and performance, risks heading off in the wrong direction.

If my early experimentation has taught me anything, it’s that AI tools need feedback to improve. If we don’t (or can’t) provide that feedback, it’s likely that deploying AI tools to our documentation will just make it fail faster.

The question isn’t whether AI will change technical writing. It will. The question is how we’ll keep it going in the right direction.
