Reflections on my first AI-enhanced API documentation course

Now that the last quarter has ended and the holidays are behind us, I finally have a chance to reflect on the API documentation course I taught last fall. Last summer, I integrated AI tools into a course I’d been teaching for two years. What I discovered: when students can generate content faster with AI tools, your evaluation process needs to keep pace, or you won’t catch problems until students have already repeated them multiple times. I suspect this applies to any course where instructors encourage AI use without rethinking their feedback loops.

I encouraged students to use AI tools, and they did: they generated content faster than in previous iterations of the course. What I didn’t anticipate was that they hadn’t fully installed the linting tools that were part of the authoring system. AI tools let them produce assignment after assignment without the linting checks that should have caught basic errors. In one module with three assignments, more than two-thirds of students submitted work with style violations such as lines exceeding 100 characters and missing blank lines around headings. Their local linters should have flagged these problems immediately, but the linters weren’t installed or weren’t working correctly, and I didn’t discover this until after students had submitted all three assignments, repeating the same problems each time.

In previous iterations of the course without AI tools, students generated content slowly enough that I’d evaluate early assignments and catch tool issues before they could compound. Adding AI tools to the mix changed the velocity assumptions. Because of the additional AI-related curriculum, I grouped these three writing assignments into a single module. By the time I evaluated that module and saw one error, it had already been repeated three times—per student.

AI productivity creates an illusion of competence. The AI tools made it easy for the students to submit polished-looking documentation on schedule, with no reported errors. This gave them the impression that they had mastered both the principles and the tools.

Traditional evaluation assumes you’ll see struggles in early work through incomplete drafts, formatting problems, and questions about tooling. AI tools make it too easy to bypass those signals, especially when students are still learning what good documentation should look like. Without timely assessment and feedback on their assignments, neither students nor instructors will know about a problem until after the habits have taken root.

The problem compounds because including AI tools in the documentation process changes what ‘keeping up’ means. Evaluation cycles designed for human writing speed, in which instructors review assignment one, provide feedback, and students apply that feedback to assignment two and so on, don’t work when students can finish three assignments before they receive feedback on the first.

The need for faster feedback became even clearer in student reflections. Some reported that their AI tools helped them resolve issues quickly; others hit one dead end after another and came away frustrated. The discussion forums I set up provided a venue for students to share problems and solutions. Forum participation, however, was inconsistent, and by the time issues surfaced there, students had often already submitted work with some problems left unresolved.

For the next iteration, I’m testing two interventions: automated feedback to catch mechanical errors before submission and more structured use of discussion forums to surface tool problems sooner. Together, these should help students work more productively with AI tools while maintaining the tooling discipline the course requires.
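
To make the first intervention concrete, here’s a rough sketch of the kind of pre-submission check I have in mind. It covers only the two mechanical errors described above (lines over 100 characters and headings without surrounding blank lines); the file layout and script itself are illustrative, and in practice I’d still want students running the course’s actual linters rather than a stand-in like this.

```python
import re
import sys
from pathlib import Path

# Illustrative threshold matching the style rule described above.
MAX_LINE_LENGTH = 100
HEADING = re.compile(r"^#{1,6}\s")


def check_markdown(path: Path) -> list[str]:
    """Return messages for basic style violations in one Markdown file."""
    problems = []
    lines = path.read_text(encoding="utf-8").splitlines()
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_LINE_LENGTH:
            problems.append(f"{path}:{i}: line exceeds {MAX_LINE_LENGTH} characters")
        if HEADING.match(line):
            blank_before = i == 1 or lines[i - 2].strip() == ""
            blank_after = i == len(lines) or lines[i].strip() == ""
            if not (blank_before and blank_after):
                problems.append(f"{path}:{i}: heading is missing a surrounding blank line")
    return problems


if __name__ == "__main__":
    # Check every Markdown file under the directory given on the command line
    # (or the current directory), and exit nonzero if anything is flagged.
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    messages = [m for md in sorted(target.rglob("*.md")) for m in check_markdown(md)]
    print("\n".join(messages) if messages else "No style problems found.")
    sys.exit(1 if messages else 0)
```

Something this small could run as a pre-commit hook or as a step in the submission workflow, so the feedback reaches students before I ever see the assignment.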
