Now that the last quarter has ended and the holidays are behind us, I finally have a chance to reflect on the API documentation course I taught last fall. Last summer, I integrated AI tools into a course I'd been teaching for two years. What I discovered: when students can generate content faster with AI tools, your evaluation process needs to keep pace, or you won't catch problems until students have already repeated them multiple times. I suspect this applies to any course where instructors encourage AI use without rethinking their feedback loops.
I encouraged students to use AI tools, and they did, generating content faster than in previous iterations of the course. What I didn't anticipate: they hadn't fully installed the linting tools that were part of the authoring system. AI let them produce assignment after assignment without the linting that should have caught basic errors. In one module with three assignments, more than two-thirds of students submitted work with style violations, such as lines exceeding 100 characters and missing blank lines around headings. Their local linters should have flagged these issues immediately, but the linters either weren't installed or weren't working correctly, and I didn't discover that until after students had submitted all three assignments with the same problems repeated.
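The post doesn't name the specific linter or rules the authoring system used, but here's a minimal sketch, in Python, of the two checks mentioned above (assuming Markdown source and a 100-character line limit), just to show how basic the missed violations were:

```python
import re
import sys

MAX_LINE_LENGTH = 100  # assumed limit, matching the violations described above

def lint_markdown(path):
    """Flag the two style issues mentioned in the post: overlong lines
    and headings without a blank line before and after. A rough stand-in
    for the local linting the authoring system should have run."""
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()

    problems = []
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_LINE_LENGTH:
            problems.append(f"{path}:{i}: line exceeds {MAX_LINE_LENGTH} characters")
        if re.match(r"#{1,6}\s", line):  # ATX-style Markdown heading
            before = lines[i - 2] if i > 1 else ""
            after = lines[i] if i < len(lines) else ""
            if before.strip() or after.strip():
                problems.append(f"{path}:{i}: heading is missing a surrounding blank line")
    return problems

if __name__ == "__main__":
    issues = [p for path in sys.argv[1:] for p in lint_markdown(path)]
    print("\n".join(issues) or "No style violations found.")
```

Checks like these are trivial for a linter to catch on every save; the problem wasn't that the errors were subtle, it was that nothing local was running to catch them.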
In previous iterations of the course, without AI tools, students generated content slowly enough that I could evaluate early assignments and catch tooling issues before they compounded. Adding AI tools to the mix changed those velocity assumptions. Because of the additional AI-related curriculum, I had grouped these three writing assignments into a single module, so by the time I evaluated the module and spotted an error, each student had already repeated it three times.


