Teaching technical writing in an AI world

A lot has changed in tech writing during the past two years when it comes to LLM tools (a.k.a. AI). That time frame coincides with my tenure teaching API documentation, and watching how my students adapt to these tools has given me some insight into how our profession is evolving.

More to the point, it’s forced me to develop a systematic approach to AI integration for my next course.

The challenge: Teaching moving targets

When I started teaching API documentation in spring 2024, LLM tools felt like the “Apple II” stage of PC evolution: interesting, but not quite ready for serious work. My students were “cautiously skeptical” and treated AI as a curiosity rather than a necessity.

Some students used LLM tools to help create rough drafts, while others avoided the AI tools entirely for a more hands-on experience.

That changed rapidly. By the third course, students weren’t asking whether to use AI; they were asking how to use it effectively. The industry had moved quickly, and my students needed practical frameworks, not philosophical debates, to confront this new reality.

What I learned from watching students evolve

Rather than ban AI tools, I decided to lean into them and watch what happened. I asked students to describe whether, and how, they used AI tools in their assignments. This gave them a record for their portfolio presentations, but it also created an informal longitudinal study of AI use in technical writing education. Here’s a summary of what I observed:

Course 1: Supplementary usage

  • Used AI to explain unclear concepts
  • Occasionally used AI for rough-draft generation
  • Were generally cautious and experimental

Course 2: AI as a study partner

  • Were less timid about using AI
  • Used AI more extensively for draft creation
  • Used AI for research and information gathering
  • Demonstrated growing confidence, along with some increasing frustration

Course 3: Experimental usage and broader application

  • AI use seemed to be the rule rather than the exception
  • Students described mixed results with larger AI tasks
  • Students recognized that AI had limitations, but those limitations weren’t well understood

The most significant insight: students who struggled with AI weren’t necessarily using it wrong; they lacked a framework for understanding what AI could and couldn’t reliably do. Several seemed frustrated and disappointed when they ran into a limitation unexpectedly.

A framework I plan to test

Based on these observations and my research into AI applications, I developed a three-category framework to organize the use cases for my next course:

Category 1: Content Support (AI’s sweet spot)

Content Support includes the tasks necessary to figure out what to write in the documentation and how to write it, such as:

  • Research and analysis
  • Audience persona development
  • Content gap identification
  • Editing and revision suggestions
  • Style and tone consistency checks

This is where the evidence suggests AI excels. Tom Johnson’s extensive writing on the subject shows that most successful AI applications in technical writing fall into this category.

My planned approach: Encourage students to use AI for analyzing user feedback, suggesting content improvements, and validating their understanding of complex technical concepts before writing.

Category 2: Content Generation (Proceed with caution)

Content Generation includes creating and publishing the final content.

AI can generate drafts, but my testing shows consistency remains problematic. When I ran the same prompt against the same API repeatedly, the resulting documentation ranged from quite useful to quite vacuous. Both the presence and the range of that inconsistency surprised me, and tightening up the prompts only made it worse.
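
For anyone curious what that kind of testing looks like, here is a minimal sketch of a repeatability check. It assumes the OpenAI Python SDK; the model name, endpoint, and prompt are placeholders for illustration, not the exact ones I used:

```python
# Run the same documentation prompt several times and compare the drafts.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment. The endpoint and model name are placeholders.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Write reference documentation for the GET /widgets endpoint, "
    "including parameters, response fields, and one example request."
)

drafts = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT}],
    )
    drafts.append(response.choices[0].message.content)

# Even a crude comparison, such as word count, shows how much the runs drift.
for run, draft in enumerate(drafts, start=1):
    print(f"Run {run}: {len(draft.split())} words")
```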

The reality: We already have reliable tools for consistent content generation. GitHub Pages, docs-as-code workflows, and established authoring tools outperform AI for publication consistency.

My planned approach: Guide students to use AI for initial brainstorming and rough documentation structure (what the LLM tools are good at). Once they have a solid first draft, they should rely on proven version control and publishing tools and let human expertise drive the final content and subsequent revisions.

Category 3: Content Retrieval (Supporting the readers)

Content Retrieval includes how users find and interact with documentation.

This represents today’s “SEO” challenge. As with the SEO efforts of the past, tech writers still shape how the audience finds the content, through content design and metadata.

Designing for dual audiences

AI tools and human readers seem to want the same information, but they access it differently. AI tools can consume an llms-full.txt file that contains your entire documentation in one structured format, while humans rely on tables of contents, topic headings, and navigation links to build their understanding progressively.

Both audiences want complete, accurate, and well-organized information, although their access patterns are fundamentally different. Understanding this distinction helps technical writers design content that serves both audiences effectively.
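
As one concrete way to serve the AI audience without changing what human readers see, here is a minimal sketch of generating an llms-full.txt-style file from a Markdown docs tree. The docs/ layout and output filename are assumptions for illustration, not part of any course assignment:

```python
# Concatenate a Markdown docs tree into one llms-full.txt-style file that an
# AI tool can ingest in a single pass, while the individual pages continue to
# serve human readers. The docs/ layout and output filename are assumptions.
from pathlib import Path

DOCS_DIR = Path("docs")
OUTPUT = Path("llms-full.txt")

sections = []
for page in sorted(DOCS_DIR.rglob("*.md")):
    # Label each section with its path so the structure humans rely on survives.
    sections.append(f"# {page.relative_to(DOCS_DIR)}\n\n{page.read_text(encoding='utf-8')}")

OUTPUT.write_text("\n\n---\n\n".join(sections), encoding="utf-8")
print(f"Wrote {OUTPUT} from {len(sections)} pages")
```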

My planned approach: Teach students to structure content for both human readers and AI-powered search systems, recognizing that user success depends on findability as much as quality, whether that “user” is human or artificial.

Three principles I plan to emphasize

To provide more general guidance, I’ll focus on three core principles:

Principle 1: Match the tool to the task

Don’t use AI for what it can’t reliably do.

Before applying AI to any task, students need to understand its capabilities and limitations. AI excels at analysis, brainstorming, and iteration. My research suggests (and the AI tools say this about themselves) that it struggles with consistency, accuracy verification, and complex reasoning.

In class, we’ll review what AI tools can and can’t reliably do and the reasons why. Students will then discuss these observations and share their personal experiences.

Principle 2: Measure the outcome, not the output

If AI doesn’t improve the results, stop using it.

I plan to teach students to evaluate whether AI-assisted work improves user outcomes, not just reduces production time. Sometimes the fastest path leads to unhelpful documentation.

We’ll discover how this works out through frequent peer reviews of the students’ work.

Principle 3: User success beats tool efficiency

Optimize for outcomes, not convenience.

The most seductive AI applications often prioritize writer convenience over reader success. I want to teach students to resist this temptation.

The goal of technical writing continues to be helping the reader, regardless of who, or what, wrote the content. In class, we’ll have several usability-test assignments, and we’ll also ask our “AI users” to be usability participants.

The research behind the plan

This framework is based on the patterns I observed in student behavior and is supported by industry research in usability testing. There’s no evidence that the novelty of AI tools negates years of user-research methods. This framework takes the focus off the novelty of the tools and brings it back to our users and customers.

Getting out in front of the AI tools in my course will test my hypothesis that students who understand these categories up front will be more productive with the tools, avoiding many of the frustrations I witnessed in previous courses.

Testing the framework: My implementation plan

I’m planning to implement this framework organically in my next course by introducing the topics from this post and then weaving their application into the assignments as students progress through the curriculum.

My plan for the course is to start by getting to know the tools casually. Next, to help guide and constrain the AI tool’s responses, we’ll create master prompts that describe more specifically the content we’d like to see. This exercise also provides practice in speaking “AI,” that is, the prompt language. With that foundation, students will be ready to explore, share their experiences, and apply everything they’ve learned and practiced as they develop their portfolio projects.
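
To give a sense of what I mean by a master prompt, here is a rough, hypothetical example sketched as a reusable Python template; the audience, product, and constraints are placeholders the class will replace with its own:

```python
# A hypothetical master prompt, kept as a reusable template so the same
# audience, scope, and format constraints apply to every request.
MASTER_PROMPT = """\
You are drafting reference documentation for the {product} REST API.
Audience: developers integrating it for the first time; assume basic HTTP knowledge.
For the {endpoint} endpoint, produce:
1. A one-sentence summary of what it does.
2. A table of parameters with name, type, required/optional, and description.
3. One example request and one example response.
Use second person and present tense, and do not invent fields that are not in
the source material provided below.
"""

# Example usage with placeholder values.
print(MASTER_PROMPT.format(product="Acme Widgets", endpoint="GET /widgets"))
```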

This is very similar to how students have used the tools in the past; the difference is that they’ll encounter the topics they’ve historically struggled with early enough to meet those challenges more productively.

The bigger picture

My classroom observations align with my impressions of the broader industry trends: organizations that succeed with AI aren’t the ones deploying it most aggressively; they’re the ones deploying it most thoughtfully. Unfortunately, in a field so new, it’s hard to find reliable research and reporting on this.

I’m betting that students who learn to treat AI as a powerful research assistant and thinking partner, instead of a replacement for technical writing expertise, will be better prepared for the profession’s future.

What this could mean for technical writers

If this framework proves effective, it suggests the future belongs to technical writers who can integrate AI strategically while maintaining focus on user success. This requires new skills:

  • AI tool evaluation and selection
  • Prompt engineering and output validation
  • Hybrid workflow design
  • Outcome measurement and iteration

It also reinforces the timeless technical writing fundamentals: understanding your audience, organizing information effectively, and measuring success through user outcomes.

Conclusion: An experiment worth trying

Teaching technical writing in an AI world isn’t about choosing between human expertise and artificial intelligence; it’s about combining them effectively.

The three-category framework provides a starting point, but the real test will be whether it helps students stay focused on what’s important: asking “How can I help my users succeed?” before asking “How can AI help me write faster?”

I won’t know how well this approach works until I’ve tested it with students. The research and observations point in a promising direction, however. If you’re using AI in your own technical writing work, I welcome your thoughts on how this framework might apply before I test it in the classroom.

In any case, I’ll be sure to file an update after the course.
