Who cares if AI reads your docs correctly?

Dachary Carey, a senior technical writer at MongoDB, cares. A lot! She has been researching how AI tools interact with technical documentation. Her approach is experimental: she probes how AI agents actually behave when they read docs, rather than taking reports from developers and other experts at face value.

When Dachary first published her findings, they caught the technical writing community off guard. Her follow-up research has shown how pervasive and inconsistent this partial reading is. While some tools indicate how much of the content they read, most don’t. It’s not a quirk. When asked, Claude confirmed that it’s a common practice, in part to minimize token use.

For tech writers, that’s a problem worth understanding. When an AI agent skips content and nothing flags it, the AI tool’s answer comes back sounding confident and looking complete. Neither the end user nor the developer who built the tool has any way of knowing what got left out, because the failure mode is silent. In fact, it’s not even considered a failure.

Dachary’s research revealed that getting data from a web page to an AI tool’s answer involves more steps and more moving parts than many technical writers realized. That complexity is worth unpacking, because buried in it is a question with some uncomfortable answers: who in that process actually has a reason to care whether the AI gets it right?

Continue reading “Who cares if AI reads your docs correctly?”

Teaching AI in Tech Writing: One Year In

A year of working with AI feels like five years of regular thinking. Is that true for you too, or is that just me?

I’ve spent the past year integrating AI into my API documentation course.

  • Feb 2024: Start experimenting with use cases
  • April 2024: Teach the last non-AI version of the course
  • July 2024: Formal lesson planning updates
  • Sept 2024: First AI-enhanced term
  • Dec 2024: Reflect and revise
  • Feb 2025: Prep for next iteration

So, the AI component of API documentation has been swirling around in my head for at least a year now. Here’s what that year of experimentation revealed.

Three categories of AI in technical writing

Last summer, I drafted three categories for thinking about AI interaction in technical writing:

  1. AI supporting content creation and management – Tools that help you write, edit, organize, and maintain documentation
  2. AI generating and publishing content – Tools that create documentation with minimal human intervention
  3. AI reading your content – How AI tools consume and use your documentation in other contexts

I’ve tested all three in the course. Here’s what I’m learning about each.

Category 1: AI supporting content

Students in the last term were encouraged to experiment with their AI assistant throughout the course. They discovered where it helped and where it didn’t—sometimes intentionally, sometimes by accident.

For next term, I’m making this discovery more structured. Students will systematically test their AI tool on specific tasks: outlining topics, reviewing their drafts, explaining concepts they’re documenting, generating test cases for code examples.

The challenge: AI still has no user guide. AI tools don’t document their own limitations. Students need to test them systematically on specific tasks to discover where they help and where they fail. That’s what we’ll work on next term.

Category 2: AI generating content

I’m still skeptical about unmonitored AI content generation. I’m watching my opinion on this closely because I suspect I’ll need to reconsider it soon. For now, I don’t think AI is ready for write access to the repo.

Continue reading “Teaching AI in Tech Writing: One Year In”

Building my first MCP server: what I learned from the inside out

I had been seeing “MCP” everywhere and I had no real idea what it was.

That’s not a comfortable admission for someone who teaches API documentation in the age of AI tools. I know what protocols are. I know what servers are. I’ve been connecting systems to other systems for a long time. And yet every time I read an explanation of MCP, something wasn’t clicking.

So, I built one.

Why reading about it wasn’t enough

The official description is accurate: MCP is the interface for an AI tool to connect to other services. The analogies help too: MCP is often compared to USB-C for AI, a universal adapter that lets any model plug into any tool or data source without custom wiring for every combination.

All of that is true, yet none of it helped make it feel real to me.

I realized I’d had the same experience with the Internet of Things (IoT) six years ago. I’d spent years connecting devices to computers, but IoT, literally connecting devices to computers over the Internet, left me scratching my head. Leaning into my hands-on learning style, I built a couple of IoT applications and went on to write API documentation for the AWS IoT service for a couple of years.

As with my IoT experience, to get to the bottom of this MCP mystery, I needed a problem to solve. And there it was, right in front of me (and it’s right in front of you, too): my website.

My website has an API and 169 published articles, with no easy way to analyze how they connect to one another. I was curious whether an MCP server could help. That seemed like a safe place to start (read-only access, real data, actual use case) and concrete enough to move from theory to something running on my laptop.

I asked Claude what a content-focused MCP server could do with a site like mine. The ideas came back quickly in a list of things that normally require manually exporting the data or writing a one-off script to pull it out.

  • Content graph analysis
  • Gap identification
  • Cross-reference suggestions
  • Tag and category hygiene

With an MCP server, these become tools Claude could call during any conversation.

This is getting interesting. An MCP server could add data from my website to a Claude conversation.
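To make the list above concrete, here is a stdlib-only sketch of the analysis one of those tools, tag and category hygiene, might wrap. In a real MCP server this function would be registered as a callable tool; the function name and data shape here are hypothetical illustrations, not the actual implementation.

```python
from collections import defaultdict

def tag_hygiene(articles):
    """Group tags that differ only by case or a trailing 's',
    flagging likely duplicates like 'API' vs 'api' vs 'APIs'."""
    groups = defaultdict(set)
    for article in articles:
        for tag in article["tags"]:
            key = tag.lower().rstrip("s")  # crude case/plural normalization
            groups[key].add(tag)
    # Report only groups where more than one spelling is in use
    return {key: sorted(tags) for key, tags in groups.items() if len(tags) > 1}

articles = [
    {"title": "Intro to MCP", "tags": ["AI", "APIs"]},
    {"title": "Docs and AI", "tags": ["ai", "API"]},
]
print(tag_hygiene(articles))  # {'ai': ['AI', 'ai'], 'api': ['API', 'APIs']}
```

The MCP layer mostly adds plumbing: the same function, exposed through the protocol, becomes something Claude can call in the middle of a conversation.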

Time to get hands-on.

Continue reading “Building my first MCP server: what I learned from the inside out”

What Makes Documentation AI-Ready: Media

In previous posts, I’ve explored how structural principles, writing quality, and code example pedagogy that serve human readers also serve AI tools. Those posts focused on text-based content found in all technical documentation.

This post addresses media: images, videos, diagrams, and audio. These elements appear less frequently in API documentation than in other technical content, but they’re valuable for tutorials, demonstrations, system architecture explanations, and physical procedures.

The patterns that make media accessible to AI tools aren’t new requirements. They’re the same accessibility standards that have served vision-impaired readers, users who have difficulty hearing audio content, and people with cognitive disabilities for decades. Once again, the fundamentals endure.

What accessibility standards tell us about media

The Web Content Accessibility Guidelines (WCAG) have provided clear requirements for accessible media since the late 1990s. These requirements weren’t created for AI—they were created for people who can’t see images, can’t hear audio, or need alternative ways to process information. To address these cases, they recommend:

For images:

  • Provide text alternatives (alt text) that describe content and function
  • Include longer descriptions for complex images like diagrams
  • Don’t rely on color alone to convey information
  • Ensure text in images is also available as actual text

For video:

  • Provide synchronized captions for deaf and hard-of-hearing users
  • Provide audio descriptions for blind and vision-impaired users
  • Provide transcripts that include both dialogue and important visual information
  • Identify speakers in multi-person videos

For audio:

  • Provide text transcripts
  • Identify speakers
  • Note non-speech sounds that convey meaning

These requirements are the result of decades of refinement based on how people with disabilities use content.
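As a toy illustration of the first image requirement, this stdlib sketch flags `img` tags whose alt text is missing or empty. It is a quick check, not a full accessibility audit, and the class and file names are invented for the example.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Treat a missing or blank alt attribute as a violation
            if not (attrs.get("alt") or "").strip():
                self.missing.append(attrs.get("src", "(no src)"))

snippet = """
<img src="diagram.png" alt="Request flow from client to API gateway">
<img src="screenshot.png" alt="">
<img src="logo.png">
"""
checker = AltTextChecker()
checker.feed(snippet)
print(checker.missing)  # ['screenshot.png', 'logo.png']
```

The same pattern extends to the other checks: each WCAG requirement for media is concrete enough to verify mechanically.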

How AI tools process images

I asked Claude how it processes images in documentation. The patterns it described align exactly with accessibility requirements.

Continue reading “What Makes Documentation AI-Ready: Media”

What Makes API Documentation AI-Ready: Code Examples

In previous posts, I explored how structural principles and writing quality standards that serve human readers also serve AI tools. Those posts focused on the prose found in all technical documentation. API documentation presents an additional challenge: teaching concepts through code examples.

I asked Claude what makes code examples easy for an AI tool to process. The patterns it identified align with established principles from instructional design and technical communication pedagogy. Once again, the fundamentals endure—whether you’re teaching human developers or enabling AI tools to extract meaning from documentation.

Code presentation patterns that help processing

Claude identified the following helpful patterns for presenting code in documentation.
I’ve added comments in italics to Claude’s conversation for additional context and clarification.

Syntax highlighting with language specified

```javascript
// Language tag makes intent clear
const result = await fetchData();
```

versus unmarked blocks requiring inference

Complete, runnable examples

  • Includes necessary imports and dependencies
  • Shows actual input and expected output
  • Avoids fragments with ellipses everywhere

I discuss the nuance of this later in the article.
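To show what “complete and runnable” looks like in practice (a hypothetical example of mine, not one from Claude’s list), the block below carries its own import, a concrete input, and the output a reader should expect, where a fragment would show only the final line:

```python
# Complete: import, input, and expected output are all shown
from datetime import datetime, timezone

utc_time = datetime(2024, 7, 1, 15, 30, tzinfo=timezone.utc)
print(utc_time.isoformat())  # 2024-07-01T15:30:00+00:00
```

A reader, or an AI tool, can run this as-is and confirm the behavior, with nothing left to infer.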

Inline comments explaining non-obvious logic

```python
# Convert UTC to user's local timezone before display
local_time = utc_time.astimezone(user_tz)
```

Variable names that indicate type and purpose

  • userConfig vs data
  • isAuthenticated vs flag
  • errorMessage vs msg
Continue reading “What Makes API Documentation AI-Ready: Code Examples”

What Makes Documentation AI-Ready: Quality Writing

In my previous post, I explored how document structure principles that have served human readers for decades serve AI tools just as well. That post focused on structure and metadata, skipping the actual words within that structure.

I asked Claude what writing patterns make content easy or difficult to process. The principles it listed match the guidelines in the technical communication textbook I used when teaching undergraduates. The fundamentals endure, whether you’re writing for human readers or AI tools.

Text patterns that help processing

These patterns reduce ambiguity and make relationships between ideas explicit. They help both AI tools and human readers follow your logic and find the information they need.

Claude identified these helpful writing patterns, which align with established technical communication principles:

Clear topic sentences

  • First sentence of paragraphs that state the main point
  • Reduces ambiguity about what follows

Explicit connectives

  • “However,” “Therefore,” “For example,” “In contrast”
  • Signal logical relationships between ideas

Defined terms on first use

  • “React (a JavaScript library)” vs just “React”
  • Especially important for acronyms and jargon

Concrete nouns over pronouns

  • “The authentication token” vs “it”
  • Critical when text is excerpted or out of context

Consistent terminology

  • Using “user session” throughout vs switching between “session,” “user context,” “active connection”
Continue reading “What Makes Documentation AI-Ready: Quality Writing”

What Makes Documentation AI-Ready: Structure

It’s that time again when I refresh my API Documentation course for the next term. In the course description, I emphasize that I’m teaching foundation skills for writing good API documentation, not the latest tools. Students still need tools to produce documentation, so I teach how to learn new tools alongside those foundation skills. They learn GitHub as the representative example, experiencing both a real documentation tool and how foundation skills look when applied in actual documentation.

After adding AI tools to the course last year, each refresh has taken more time. AI technology advances faster than I can teach the course, creating real tension: should I keep focusing on fundamentals, or chase the latest AI capabilities?

A recent Write the Docs discussion gave me the answer I needed. The conversation started with an article promoting DITA as the solution to produce AI-readable documentation. It sparked debate about tools and approaches and revealed something more fundamental: AI tools don’t need anything new to read documentation. They need the same structural principles that information scientists and technical writers have relied on for decades.

The fundamentals aren’t outdated. They’re just being rediscovered by each new technology.

What the discussion revealed

An article titled Why AI Search Needs Intent (and Why DITA XML Makes It Possible) kicked off the Write the Docs discussion by promoting DITA (Darwin Information Typing Architecture) as the solution for making documentation AI-readable. The article’s premise is solid: users often can’t articulate what they need, and structured content helps guide them to answers.

Our discussion consensus: structure is definitely important, but we were less certain that DITA is required to provide that structure. DITA does enforce topic structure and information architecture, but it requires significant overhead in authoring tools, training, and workflow changes. I’ve used DITA-like tagging at several organizations; it’s not trivial. Other approaches can achieve similar results: a Markdown-based system with consistent templates and a framework like Diataxis, for example.
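As a sketch of how lightweight that Markdown-based enforcement can be, the following checks a page against a hypothetical house template; the required section names are invented for illustration, not from any real style guide.

```python
import re

# Hypothetical template: the sections every how-to page must contain
REQUIRED_SECTIONS = ["Overview", "Prerequisites", "Steps", "See also"]

def check_structure(markdown_text):
    """Return the required ## sections missing from a Markdown page."""
    headings = re.findall(r"^##\s+(.+)$", markdown_text, flags=re.MULTILINE)
    return [s for s in REQUIRED_SECTIONS if s not in headings]

page = """# Create an API key

## Overview
...

## Steps
...
"""
print(check_structure(page))  # ['Prerequisites', 'See also']
```

A dozen lines in a CI job buys a useful fraction of what DITA’s topic typing enforces, without the authoring-tool overhead.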

Continue reading “What Makes Documentation AI-Ready: Structure”

Reflections on my first AI-enhanced API documentation course

Now that the last quarter has ended and the holidays are behind us, I finally have a chance to reflect on the API documentation course I taught last fall. Last summer, I integrated AI tools into the course that I’d taught for two years. What I discovered: when students can generate content faster with AI tools, your evaluation process needs to keep pace, or you won’t catch problems until students have already repeated them multiple times. I suspect this applies to any course where instructors encourage AI use without rethinking their feedback loops.

I encouraged students to use AI tools, and they did. They were generating content faster than in previous course iterations. What I didn’t anticipate: they hadn’t fully installed the linting tools that were part of the authoring system. Their use of AI tools let them produce assignment after assignment without the linting that should have caught basic errors. In one module with three assignments, more than two-thirds of students submitted work with style violations, such as lines exceeding 100 characters and missing blank lines around headings. Their local linters should have flagged these immediately, but the linters weren’t installed or weren’t working correctly, and I didn’t discover this until after they’d submitted all three assignments with the same problems repeated.
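Those two rules are simple enough to sketch in a few lines. This is a toy illustration of what such checks do, not the actual linter the course uses:

```python
def lint(lines, max_len=100):
    """Flag over-long lines and headings not surrounded by blank lines."""
    problems = []
    for i, line in enumerate(lines):
        if len(line) > max_len:
            problems.append((i + 1, "line exceeds 100 characters"))
        if line.startswith("#"):
            before_ok = i == 0 or lines[i - 1] == ""
            after_ok = i == len(lines) - 1 or lines[i + 1] == ""
            if not (before_ok and after_ok):
                problems.append((i + 1, "heading needs blank lines around it"))
    return problems

doc = [
    "# Title",
    "Intro text right under the heading.",  # missing blank line after heading
    "x" * 120,                              # over-long line
]
print(lint(doc))  # [(1, 'heading needs blank lines around it'), (3, 'line exceeds 100 characters')]
```

Checks this cheap only help if they actually run, which is exactly what broke when AI-assisted output outpaced my feedback loop.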

In previous iterations of the course without AI tools, students generated content slowly enough that I’d evaluate early assignments and catch tool issues before they could compound. Adding AI tools to the mix changed the velocity assumptions. Because of the additional AI-related curriculum, I grouped these three writing assignments into a single module. By the time I evaluated that module and saw one error, it had already been repeated three times—per student.

Continue reading “Reflections on my first AI-enhanced API documentation course”

I’m not an AI hater. Really!

After a particularly tiring vibe coding session with Claude, I shared some of the resulting grumpiness in a few posts to the Write the Docs Slack. I’ll confess: I’m probably not the biggest fan of AI (as the term is currently bandied about). But I’m not a hater. I’m just, well, disappointed with it. It’s just not living up to its purported potential (a.k.a. hype, these days).

I’d been writing code (i.e. prompting Claude to write code for me, a.k.a. vibe coding) and Claude’s code was getting buggier and buggier. I’ve seen that happen before after it’s written a lot of code. It acts like it’s tired, but I think it’s due to having too many things to keep track of in the conversation, so it loses its place (I’m not an AI psychologist…yet).

In any case, I was beginning to wonder if it would have been faster to just start typing my own code (it wouldn’t), but I wanted to see how it played out. Eventually, after Claude had gotten stuck, again, I was troubleshooting in parallel and suggested a fix. Lo and behold, Claude agreed (as always). With that experience still fresh in my head, I went to the Write the Docs Slack to see how others were faring in their AI journeys.

Thinking of past bubbles

In one post, I compared current AI hype to the hype I recall when PCs (as in the IBM PC) came out in the early 80s. They promised the moon and, in microscopically fine print, mentioned that “some assembly was required.” Sometimes some C, as well. (If you know, you know.)

In the 80s, it’s not that PCs weren’t amazing pieces of technology that could fit on your desk and still leave room for your phone and desk blotter. Remember, this was a time when computer hardware had to have its own air-conditioned office. It’s that they lacked the “killer app” (the application that solves a high-value problem for a large audience) until Lotus 1-2-3 and Multiplan, two of the first spreadsheet apps, came out and ran on the PC.

Those killer apps transformed the PC from a geeky novelty into an absolute necessity. They enabled regular people to see the value that these machines could provide. Fast-forward to today, and that’s how AI seems to be positioned: a novel (to most non-AI researchers and developers) technology waiting for its “killer app.” Just like the PC: it could do anything, but nobody cared until it did something useful. It’s hard to tell what AI’s killer app will be until we see it. If I knew what it was, I wouldn’t be here writing another blog post; I’d be working on AI’s killer app!

Continue reading “I’m not an AI hater. Really!”

AI reflects on my reflection

After publishing my last post, I asked Claude what it thought about it, using the “reflective blog post” prompt I’ve been developing. It appears that we have some more work to do, indeed.

Claude reviewed my post in terms of:

  • Authenticity
  • Professional stakes and relevance
  • Voice and intellectual honesty
  • Process documentation quality
  • Factual accuracy
  • Overall assessment

Basically, I got a report card from my mechanical English teacher. Let’s see how I fared…

Authenticity

Evaluates whether the uncertainty and professional challenges feel genuine rather than manufactured for content purposes.

Let’s see what my strengths in Authenticity are, according to Claude:

  • Genuine uncertainty: The author demonstrates real ambivalence about AI collaboration without rushing toward resolution (“I’m not ready to abandon my AI-overlord, er, I mean co-writer, just yet”)
  • Honest stakes: Clear professional and personal impact – less satisfaction, feeling like work, questioning writing identity
  • Specific observations: Concrete details about the back-and-forth editing process, watching drafts transform, losing sense of accomplishment
  • Temporal authenticity: Uses present-tense investigation language (“What I think this really means…”) and frames conclusions as provisional

OK. So far, so good. What does Claude think I need to work on?

  • Limited process transparency: Doesn’t show systematic methodology for investigating this challenge beyond informal reflection
  • Missing specifics: Could benefit from more concrete examples of the “soul removal” process or specific editing exchanges

Perhaps. However, I’m not convinced those are appropriate in a reflection paper. It could be that we have a difference of opinion on what a reflection paper should include. Or perhaps adding more of this information would make it stronger. In retrospect, I’m still figuring out those aspects of my writing, so I’m not quite sure what to say about them yet.

I’ll give Claude credit for identifying some inner workings that I need to process.

Professional stakes and relevance

Examines how clearly the article demonstrates real workflow impact and broader applicability to others in the field.

Continue reading “AI reflects on my reflection”