A year of working with AI feels like five years of regular thinking. Is that true for you too, or is that just me?
I’ve spent the past year integrating AI into my API documentation course.
- Feb 2024: Start experimenting with use cases
- April 2024: Teach the last non-AI version of the course
- July 2024: Formal lesson planning updates
- Sept 2024: First AI-enhanced term
- Dec 2024: Reflect and revise
- Feb 2025: Prep for next iteration
So, the AI component of API documentation has been swirling around in my head for at least a year now. Here’s what that year of experimentation revealed.
Three categories of AI in technical writing
Last summer, I drafted three categories for thinking about AI interaction in technical writing:
- AI supporting content creation and management – Tools that help you write, edit, organize, and maintain documentation
- AI generating and publishing content – Tools that create documentation with minimal human intervention
- AI reading your content – How AI tools consume and use your documentation in other contexts
I’ve tested all three in the course. Here’s what I’m learning about each.
Category 1: AI supporting content
Students in the last term were encouraged to experiment with their AI assistant throughout the course. They discovered where it helped and where it didn’t—sometimes intentionally, sometimes by accident.
For next term, I’m making this discovery more structured. Students will systematically test their AI tool on specific tasks: outlining topics, reviewing their drafts, explaining concepts they’re documenting, generating test cases for code examples.
The challenge: AI still has no user guide. AI tools don’t document their own limitations, so the only way to discover where they help and where they fail is to test them on specific tasks. That’s what we’ll work on next term.
Category 2: AI generating content
I’m still skeptical about unmonitored AI content generation. I’m watching my opinion on this closely because I suspect I’ll need to reconsider it soon. For now, I don’t think AI is ready for write access to the repo.
Here’s what changed my thinking slightly: Claude helped me add GitHub automation to our course repo. Students in the next term will experience a mini CI/CD (continuous integration/continuous deployment) pipeline with their documentation contributions. The new workflows will check for common errors and give students immediate feedback, rather than making them wait a week or two for me to provide it after they turn in their work.
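To make that concrete, here’s a minimal sketch of the kind of check a workflow like this could run. It’s an illustration, not the actual course script: the docs folder, the required headings, and the file name are all placeholders.

```python
# check_docs.py (hypothetical name): a structural lint that a GitHub Actions
# workflow could run on each pull request to give immediate feedback.
import pathlib
import sys

# Placeholder requirements; a real course repo would define its own.
REQUIRED_HEADINGS = ["## Overview", "## Parameters", "## Example"]
FENCE = "`" * 3  # a Markdown code-fence marker


def check_file(path: pathlib.Path) -> list[str]:
    """Return human-readable problems found in one Markdown file."""
    text = path.read_text(encoding="utf-8")
    problems = []
    for heading in REQUIRED_HEADINGS:
        if heading not in text:
            problems.append(f"{path}: missing required section '{heading}'")
    # An odd number of fence lines means a code block never closed,
    # a common mechanical error that blocks evaluation.
    fences = sum(1 for line in text.splitlines() if line.strip().startswith(FENCE))
    if fences % 2 != 0:
        problems.append(f"{path}: unclosed code fence")
    return problems


def main() -> int:
    problems = []
    for path in sorted(pathlib.Path("docs").rglob("*.md")):
        problems.extend(check_file(path))
    for problem in problems:
        print(problem)
    # A nonzero exit code fails the CI check, so the student sees the
    # feedback on the pull request within minutes instead of weeks.
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(main())
```

A workflow step runs a script like this on every pull request and reports pass or fail, which is where the “immediate” in immediate feedback comes from.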
The distinction that’s emerging: AI tools are still too unpredictable for production documentation, where deterministic output is important. But AI can create deterministic code that’s suitable for use in production. Claude and I worked out one automation solution. The fact that it couldn’t write the exact same code if I asked again doesn’t matter; we found a solution that works reliably, and that’s sufficient.
One caution this confirmed: it’s still risky to ask AI to do something you don’t already know how to do. In a couple of instances, Claude confidently coded us into a corner. I had to coach it back out to get things running again. If I hadn’t known what the code was doing and how it was doing it, we’d both have been stuck in that corner.
Suffice it to say, we’re still working on our trust issues.
Category 3: AI reading your content
I just wrapped up a four-article series exploring how to create content that AI tools can process effectively. The short version:
Follow accessibility guidelines with a passion.
That could have been a tweet.
Instead, four articles later, I provided specific guidance on structure, writing quality, code examples, and media—showing what AI tools do with your content and why the patterns that help AI are the same ones that help human readers.
The series reinforced the fundamentals of good technical writing and how they serve both audiences. You don’t need new practices to write for AI. You need to apply the established practices consistently, perhaps even more so for your AI audience.
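To make the code-examples point concrete, here’s a small, hypothetical illustration of the pattern that serves both audiences: a sample that carries its own imports, its own input, and its expected output, so neither a hurried human nor an AI tool has to hunt for missing context. The scenario and values below are made up for illustration.

```python
# A documentation code example written to be complete on its own:
# explicit import, explicit input, and the expected output shown in place.
import statistics

# Sample input the reader can copy as-is; no hidden setup required.
response_times_ms = [120, 95, 210, 180, 99]

# One clearly scoped task per example.
median_ms = statistics.median(response_times_ms)

print(median_ms)  # 120
```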
Coincidentally, in the time between publishing the writing-for-AI articles and this post (about four days), a couple of well-researched articles on how AI agents find and use content appeared. Both agree that AI agents value good content, and both describe the details of content and organization that make a big difference for those agents. Be sure to check them out!
I’m happy that we’re seeing more research focused on the intersection of AI and technical writing. Having a lot of this research to keep up with is a good problem to have.
Three principles for applying AI
Alongside the categories, I developed three principles for evaluating when and how to use AI tools:
- Match the tool to the task – Not every problem needs an AI solution, but AI might help you find a solution
- Measure the outcome, not the output – Focus on whether it solves the actual problem
- User success beats tool efficiency – Optimize for learning/understanding, not speed
These principles have helped guide me through this past year of experimentation.
Example: I spent most of November grading October assignments. Poor assignment planning on my part collided with some unexpected events, and I couldn’t correct some mistakes in students’ work in a timely manner. The result was a backlog of rework for all of us.
My first thought, after getting out from under this self-imposed mess, was to enlist my AI partner. “I’ll just send assignments and rubrics to AI for grading!”
That musing lasted all of five seconds. Wrong tool for the problem.
The root cause of this mess wasn’t evaluation speed—it was that students needed feedback on fundamental errors earlier in the process, before they repeated the same mistakes.
A better solution: add GitHub workflow automation that checks for common structural errors and provides immediate feedback to the students. The automation will catch the mechanical issues that prevent evaluation, and I’ll be able to give them feedback on their writing much sooner.
This applies all three principles:
- Matched tool to task – Automation for pattern detection, human judgment for writing quality
- Measured outcomes – Will students get feedback when they need it and improve their work?
- User success – Does this help students learn, not just help me grade faster?
We’ll see how it works for everyone in the next term.
What I’m watching
My thinking on content generation is evolving. The line between “AI helps me write” and “AI writes for me” keeps shifting. I’m watching where that boundary settles, both in my teaching and in professional practice.
I’m also watching how students develop judgment about when to use AI and when to think through problems themselves. That metacognitive skill—knowing when the tool helps versus when it interferes—might be more important than any specific AI technique.
The categories and principles seem stable. They help me make decisions about AI tools in my teaching work, and I’ll be watching how they help students make similar decisions in their documentation work.
Where things stand
A year into this experiment, the categories have helped me think systematically about where AI fits. The principles help me evaluate whether it’s working, and both frameworks have held up through their first year of actual use in the curriculum.
But I’m still figuring out important pieces. The line between AI assistance and AI generation keeps shifting. I’m watching how students learn to judge when AI helps versus when it gets in their way. I’m also trying to update and distill the guidelines that will help future API documentation authors write about new technologies for new audiences.
Next term will put the GitHub automation to the test, refine the structured AI evaluation exercises, and probably reveal new questions I haven’t thought to ask yet. That’s how this goes: each iteration clarifies some things and surfaces new uncertainties. Or, as they’ve been saying for decades now, it’s just another day in tech.
I’ll report back on what I learn.
