Testing antagonistic AI review on my own writing

I’m terrible at spotting flaws in my own work. Even after multiple drafts and careful editing, I still tend to lose myself in the details and focus more on the trees than the forest. It’s not uncommon for me to miss weak arguments, unclear explanations, and assumptions that need defending—until after I publish, of course.

While preparing my previous post about AI writing workflows, I decided to test something: asking Claude to actively look for problems with my draft from an antagonistic perspective. Not just gentle feedback, but the kind of opposition you’d face from readers with strong opposing viewpoints.

The results were more brutal than I’m used to from Claude, yet more useful than I expected. Here’s what I learned by using adversarial review to stress-test arguments, and why this technique shows promise despite some significant limitations.

The experiment setup

Close to publishing, I gave Claude this prompt:

Review the most recent draft as an antagonistic reader who would like to find any and all problems with the article (whether constructive or just to be antagonistic). This reader might have a pro-AI agenda to push or an anti-AI (as in no AI can be good) agenda.

I was looking for the kind of opposition that finds every possible angle of attack—the readers who come to your work already convinced you’re wrong and looking for ammunition to prove it.

Claude responded by creating and applying three distinct antagonistic personas:

  • The ideological opponent (challenges core assumptions and claims bias)
  • The pedantic critic (nitpicks definitions and demands citations)
  • The bad faith reader (misrepresents positions for rhetorical advantage)

Then it proceeded to tear into my article from both pro-AI and anti-AI perspectives. I had asked for this, and the responses were deliciously brutal. They were antagonistic in exactly the way I’d requested, and to be honest, genuinely useful.
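Claude did this conversationally, but the same persona-based review is easy to script. Here’s a minimal sketch: the persona descriptions mirror the three above, while `build_review_prompts` and whatever chat API you’d feed the prompts to are my own hypothetical scaffolding, not part of any real tool.

```python
# Sketch of scripting the persona-based adversarial review.
# The three personas mirror the ones Claude invented; the prompts
# would be sent to whatever chat API you actually use.

PERSONAS = {
    "ideological opponent": "Challenge the article's core assumptions and call out any bias you see.",
    "pedantic critic": "Nitpick definitions, demand citations, and flag every unsupported claim.",
    "bad faith reader": "Look for positions that could be misrepresented for rhetorical advantage.",
}

def build_review_prompts(draft: str) -> list[str]:
    """Return one adversarial review prompt per persona."""
    prompts = []
    for name, instructions in PERSONAS.items():
        prompts.append(
            f"Act as an antagonistic reader: the {name}. "
            f"{instructions}\n\nDraft to review:\n{draft}"
        )
    return prompts
```

Running each persona as a separate prompt, rather than one combined request, tends to keep the feedback from blurring into a single generic critique.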

What different types of opposition reveal

Claude organized the feedback by perspective on AI. What follows is a summary; you can see all the feedback in the transcript of the chat.

Continue reading “Testing antagonistic AI review on my own writing”

Finding the line between AI assistance and AI dependence

After six months of diving into AI tools, I’m still figuring out how to work with them without compromising my professional integrity. The old boundaries between my words and borrowed words don’t map cleanly onto AI assistance. When AI helps craft my prose, am I still the author? When my students use GPTs (generative pre-trained transformers, built on large language models, or LLMs, and more commonly known as AI chat tools) to generate sophisticated responses, how do I know if they actually understand the material? The underlying tension in both contexts is the same: Where does human skill end and tool dependency begin?

I’ve been wrestling with this question through direct experience, using AI tools in my own writing and as I prepare to teach students about AI applications in technical writing scenarios. The uncertainty isn’t comfortable, but it’s productive. It’s forcing a clarity about professional standards that I previously took for granted.

The attribution gap

Traditional models for crediting intellectual work assume clear human sources. Plagiarism involves stealing and passing off ideas or words as one’s own without crediting the source. That creates a property line between what’s “my work” and what isn’t. The property line is fuzzy, but it’s recognized.

Work-for-hire contracts handle using another’s writing by transferring ownership. I might write the words, but my employer becomes “the writer” through legal assignment. Editorial assistance operates on a continuum: the greater the influence of dictionaries, grammar checkers, or human editors on the final product, the more they should be credited.

Search engines provide access to enormous amounts of knowledge while making attribution relatively straightforward. This makes it easy to build on others’ genius, and cite the sources to avoid plagiarism, while maintaining a clear distinction between your words and those of others.

But what’s a GPT in this context?

  • Is it a sophisticated grammar checker? Definitely.
  • Is it a mechanical editor? It can be.
  • Is it your personal writer-for-hire? It could be.
  • Is it a source of original content? Unclear.
  • Is it an industrial-strength plagiarism machine? That’s still a topic of heated discussion.

AI doesn’t fit into existing categories while somehow fitting into all of them. The academic press is still debating this. Stances range from “do what makes sense” to “no way, no how” depending on the field and editorial board. Most of my academic papers were guided by ACM and IEEE standards, so the ACM’s more flexible approach feels familiar and reasonable while maintaining academic transparency.

The competence question

As a writer, the integrity question centers on authorship: “Am I still the writer if AI helps structure my arguments?” As an instructor, it’s about learning: “Do my students understand the material if they’re using AI to generate responses?”

Continue reading “Finding the line between AI assistance and AI dependence”

How to get honest answers from AI tools (hint: better questions)

Google Cloud recently published “Smarter Authoring, Better Code: How AI is Reshaping Google Cloud’s Developer Experience,” describing how they’re applying AI to documentation challenges that every technical writing team recognizes: keeping content accurate, current, and useful for developers working with constantly evolving services.

Because AI tools consistently claim to excel at content analysis and summarization, I put that claim to the test on this article. I’ve been working with these tools for about six months now and wanted to test what I’d learned. So, I fed the Google article to Claude and spent some time exploring what it could tell me about the content, the claims, and the gaps.

Spoiler alert: the questions I asked mattered more than the answers I got.

The thought leadership gap: Style over substance

The Google article describes two main AI applications in technical writing:

  • Integrating Gemini into writers’ authoring environments for productivity tasks like generating tables and applying style guides, and
  • An automated testing system that uses Gemini to read documentation steps and generate Playwright scripts for validation.

The article also describes a multi-agent system for code sample generation that uses Protobuf definitions as the source of truth, with generator agents creating samples and evaluator agents scoring them against rubrics.

Claude initially provided a summary that was cleaner and better organized than my mental notes after reading the same article. However, when I pushed for evidence supporting the claims, Claude articulated what I’d been sensing: the article provides limited concrete evidence beyond “over 100 tests daily” and references to “tens of thousands of idiomatic samples.” There was no mention of metrics on accuracy improvements, time savings, before/after comparisons, or validation of claims about preventing “factual drift.” You can see the whole conversation in the chat transcript.

When I questioned Claude about why Google would publish such a detail-light article, Claude offered a perspective that troubled me:

For thought leadership aimed at developer relations leaders and engineering executives, too much evidence could backfire. Publishing specific metrics creates comparison benchmarks that competitors can target. On the other hand, a claim like, “Over 100 tests daily” suggests impressive scale without revealing whether those tests actually pass.

That bothered me. We’re talking about a business communication norm where impressive-sounding claims matter more than verifiable results. Claude put it bluntly: “There’s an ethical gap between presenting a technical approach and implying proven success.”

Continue reading “How to get honest answers from AI tools (hint: better questions)”

Tech writing: dealing with changes for centuries

[Image: 15th-century technical writers in a library]

The panic is familiar. New technology arrives, threatens to automate away our jobs, and suddenly everyone’s scrambling to figure out what skills will matter in five years. Sounds like the current AI conversation in technical writing circles.

Rahel Anne Bailie posted a summary of changes in technologies that technical writers have dealt with over the past few decades. But technical writers have been navigating this exact disruption for centuries.

Think about it. In 500 years, anyone reading the words of technical writers today will wonder what the obsolete vocabulary means and what marvels a document titled “Installing Ubuntu” might hold. Who will remember Ubuntu in 500 years?

Now flip it around. Look at documents from alchemists 500 or 1,000 years ago and think about who wrote them. Some were written by subject matter experts, others by professional writers skilled at applying the tools of their day to record the knowledge of their day.

The pattern repeats: media evolves from stone tablets and chisels to quill pens and parchment, to movable type and printing presses, to desktop publishing and websites. The tools change.

What actually stays constant

Over the centuries, tech writers have been learning the technology they are documenting (alchemy, radar, APIs, what have you) and writing to specific audiences (wizards, technicians, software developers, and so on). Names change, but the guiding principles have changed very little.

What remains important to remember, and communicate, is the value that the scribes and writers bring to the customer experience.

Why the AI panic misses the point

AI represents another tool shift, not a fundamental change in what technical writers do. The introduction of AI into the field came on a bit like a bull in a china shop, with an initial message along the lines of “Outta my way! AI is here to save the day!” Now that the dust from that storm has settled, we can see that the real question isn’t whether AI will replace technical writers—it’s which technical writers will adapt their skills to work effectively with AI, just as previous generations learned to work with desktop publishing, content management systems, and web technologies.

Continue reading “Tech writing: dealing with changes for centuries”

My AI has a sense of snark

This weekend, I woke my LinkedIn profile from its hibernation and spent some time bringing it up to date and dusting off the cobwebs. The good news is that it gave me something to blog about!

I sent my profile into hibernation last fall for a variety of reasons. The main one was simply to come up for air: I found the constant barrage of “inspirational accomplishments” and “incredible journeys” overwhelming. Sprinkled among them were incredibly sad stories of people having to sell their houses after months of unsuccessful job hunting, which made it all just too much of an emotional roller coaster ride.

No offense to anyone who’s managed to accomplish anything during these difficult times. And my heart goes out to everyone who needs a job and is still struggling to find one. But I needed to step away from the notifications for a while.

Well, time’s up! I’m back online, and I’m here to write this, let’s call it a celebratory post.

Back online

To kick things off, I needed some content for LinkedIn (as is the nature of the site), so I asked Claude (AI) to write a blog post about the style of LinkedIn posts, in the style of a LinkedIn post. This is a good task for AI because it’s not natural for me to write in, or about, the genre.

Unfortunately, I might have confused it. The AI was working in the context of my blogging-style prompt, and you can see how it tries to follow my blog-post style while analyzing LinkedIn accomplishment theater, all while trying to make the post itself LinkedIn-worthy.

What follows is the AI-generated post that I wish I could have written.

I just published a blog post, and you won’t believe what happened next

Yesterday, I sat down to write a simple blog post about content strategy. What started as 800 words of practical advice somehow transformed into a LinkedIn-worthy tale of professional triumph that would make a motivational speaker weep with pride.

Here’s how it happened (and why we should probably talk about it).

Continue reading “My AI has a sense of snark”

Teaching technical writing in an AI world

A lot has changed in tech writing over the past two years when it comes to LLM tools (a.k.a. AI). That time frame coincides with my tenure teaching API documentation, and watching how my students adapt to these tools has given me some insight into how our profession is evolving.

More to the point, it’s forced me to develop a systematic approach to AI integration for my next course.

The challenge: Teaching moving targets

When I started teaching API documentation in spring 2024, LLM tools felt like the “Apple II” stage of PC evolution. Interesting, but not quite ready for serious work. My students were “cautiously skeptical” and treated AI as a curiosity rather than a necessity.

Some students used LLM tools to help create rough drafts while others wanted to avoid the AI tools to get a more hands-on experience.

That changed rapidly. By the third course, students weren’t asking whether to use AI; they were asking how to use it effectively. The industry had moved very quickly, and my students needed practical frameworks, not philosophical debates, to confront this new reality.

What I learned from watching students evolve

Rather than ban AI tools, I decided to lean into them and watch what happened. I asked students to describe if, and how, they used AI tools in their assignments. This gave them a record for their portfolio presentations, but it also created an informal longitudinal study of AI application in technical writing education. Here’s a summary of what I observed:

Continue reading “Teaching technical writing in an AI world”

When AI fixes everything (except what matters)

Imagine that you’ve deployed AI to generate your technical documentation. The tool promised to revolutionize your content workflow, and honestly, it delivered on speed. What used to take days now happens in minutes.

Now, fast-forward six months to find customer support is drowning in confused user tickets. Social media mentions of your product are increasingly sarcastic. Sales is asking pointed questions about why adoption rates are dropping, and nobody can figure out what changed. The product is as solid as ever.

In this post, I want to follow up on a recent post that ended with a scenario like this one, and offer a more optimistic outcome.

The invisible problem

When you don’t have reliable documentation analytics, problems announce themselves through every channel except the actual source. Your first clue that AI is producing unhelpful developer docs won’t be a dashboard alert. It’ll be angry developers posting screenshots of your broken code examples on social media.

Remember, automating your processes accelerates them in whatever direction they were already heading. If your current documentation process and performance are unmeasured and reactive, you won’t know whether AI is helping you until long after the damage has been done.

It just keeps producing content that checks all the boxes while serving no one.

A different path forward

The dystopian scenario above isn’t inevitable. But it requires resisting the “deploy AI everywhere immediately” impulse and taking a more methodical approach.

Continue reading “When AI fixes everything (except what matters)”

Do-it-yourself (with a friend) portfolio

Everyone tells aspiring tech writers to find an open-source software (OSS) project to create content for a portfolio. Unfortunately, while that advice is popular, I haven’t heard of it working out very often.

Rather than leave you hunting for those elusive OSS opportunities, I’ll describe how the portfolio project I use in my API documentation course works. This approach might be more accessible and achievable for building the portfolio content that you need.

The course portfolio project works because it addresses two critical elements often missing from solo projects: collaboration and accountability. Both are essential skills for technical writers, and both significantly improve your chances of producing something portfolio-worthy.

How to run your own portfolio project

Here’s how our portfolio projects work, as adapted for self-directed learning:

Continue reading “Do-it-yourself (with a friend) portfolio”

Is your documentation ready for AI?

AI just explained my own research back to me. I was surprised by how it surfaced the message I meant to convey 10 years ago but seemed to lose in the academic genre of the original articles.

I’m working on how to teach AI tools in my API documentation course. As part of my research, I thought I’d feed some of my technical writing articles from about 10 years ago into an AI tool, along with some contemporary work from others. I asked the AI to compare, contrast, and summarize them from various angles.

The results were interesting enough that I had to write about them here.

When AI becomes your editor

One summary took an analysis I’d buried in academic prose and flipped it into something useful. It linked the different documentation types commonly found in API documentation to what readers are trying to accomplish:

Original version:

In Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites (2015), we proposed this list of goals that readers could hope to accomplish in an informational site (e.g. documentation). Examples of each goal were presented later in the 12-page article.

  • Reading to be reminded (Reading to do lite)
  • Reading to accomplish a task in a website (Reading to do here)
  • Reading to accomplish a task outside a website now (Reading to do now)
  • Reading to accomplish a task outside a website later (Reading to learn to do later)
  • Reading to learn (Reading to learn to use later or to apply with other information)

AI’s translation:

  • Recipes and examples for “reading to do now”
  • Topical guides for “reading to learn”
  • Reference guides for “reading to be reminded”
  • Support forums for edge cases and community help
  • Marketing pages for pre-usage evaluation

Then it got right to the point that I’d been dancing around for paragraphs:

Yet we typically measure them all the same way. Page views, time on page, bounce rate. That’s like using a thermometer to measure blood pressure. The tool works fine; you’re just measuring the wrong thing.

Ouch. But also: exactly.

The AI summary went on to suggest what matters for each content type:

Continue reading “Is your documentation ready for AI?”

Looking back to look ahead

It’s been a while since I’ve contributed to my blog. Truth be told, I’ve been busy.

The day after my birthday in 2023, I found out that, along with 12,000 of my now-former coworkers, I no longer had a job with Google.

I remember the feeling when I got the news (by email, of course). I felt like I did when I was learning to fly and the instructor cut the engine mid-flight. One minute you’re looking out the window, checking off the waypoints to your destination. The next, you’re looking for a place to land, because you won’t be flying much longer.

For the non-pilots reading this, most light airplanes keep flying after an engine failure. They just don’t stay at the same altitude for very long.

Learning to fly taught me that in those situations, the longer you ignore reality, the shorter your list of options becomes. So, when they laid off thousands of people in my industry (and, as we’d see in the months that followed, this was just the beginning), I concluded that this was the end of my professional career as I had come to know it.

I was now gliding.

However, that’s a story for another post (or two).

A soft landing

The good news is (as it is for the pilots who are prepared for the possibility of an engine failure), my wife and I have landed safely and we’re doing fine.

My post-career landing was softened by an opportunity to teach an API documentation course at the University of Washington’s Professional and Continuing Education school. I just wrapped up the third term last week, and it’s been a lot of fun.

However, the past two years have brought seismic shifts to technical writing, particularly in API documentation. Large language model tools have reshaped how we approach documentation creation, analysis, and maintenance. As practitioners, we’re all grappling with the same fundamental questions:

  • How do we adapt our established practices?
  • What assumptions about our craft need revisiting?

Enter AI and the curriculum challenge

Large language model (LLM) tools have taken the world by storm in the past two years. API documentation hasn’t been immune to their influence. As such, I’ve been working on an update to my API documentation course to integrate AI technologies to keep the curriculum current.

The challenge isn’t just adding AI tools to the syllabus. It’s understanding how these tools change the fundamental nature of documentation work. What skills remain essential? What new competencies do we need to develop? How do we teach both the power and limitations of AI-assisted documentation?

As I update the API documentation course, I’ve been putting different AI tools to the test, with some rather interesting results.

Continue reading “Looking back to look ahead”