AI reflects on my reflection

After publishing my last post, I asked Claude what it thought about it, using the “reflective blog post” prompt I’ve been developing. Indeed, it appears we have some more work to do.

Claude reviewed my post in terms of:

  • Authenticity
  • Professional stakes and relevance
  • Voice and intellectual honesty
  • Process documentation quality
  • Factual accuracy
  • Overall assessment

Basically, I got a report card from my mechanical English teacher. Let’s see how I fared…

Authenticity

Evaluates whether the uncertainty and professional challenges feel genuine rather than manufactured for content purposes.

Let’s see what my strengths in Authenticity are, according to Claude:

  • Genuine uncertainty: The author demonstrates real ambivalence about AI collaboration without rushing toward resolution (“I’m not ready to abandon my AI-overlord, er, I mean co-writer, just yet”)
  • Honest stakes: Clear professional and personal impact – less satisfaction, feeling like work, questioning writing identity
  • Specific observations: Concrete details about the back-and-forth editing process, watching drafts transform, losing sense of accomplishment
  • Temporal authenticity: Uses present-tense investigation language (“What I think this really means…”) and frames conclusions as provisional

OK. So far, so good. What does Claude think I need to work on?

  • Limited process transparency: Doesn’t show systematic methodology for investigating this challenge beyond informal reflection
  • Missing specifics: Could benefit from more concrete examples of the “soul removal” process or specific editing exchanges

Perhaps. However, I’m not convinced those are appropriate in a reflection paper. It could be that we have a difference of opinion about what a reflection paper should include. Or, perhaps, adding more of this information would make it stronger. In retrospect, I’m still figuring out those aspects of my writing, so I’m not quite sure what to say about them.

I’ll give Claude credit for identifying some inner workings that I need to process.

Professional stakes and relevance

Examines how clearly the article demonstrates real workflow impact and broader applicability to others in the field.

Continue reading “AI reflects on my reflection”

Reflections on co-writing with AI

(Full disclosure: unlike recent blog posts, this post was not reviewed or edited by an AI before it was published. What you see is what you get from the real, organic me.)

After the past few articles, I thought I’d pause for a moment to reflect on my recent experiences co-writing with AI.

Less soul

First up, it’s been a bit weird to watch my first drafts transform into the finished blog posts. It has seemed like work, for starters: running my copy through an editor, revising, discussing revisions, revising some more, and ultimately publishing (or not, in a couple of cases). Sounds like a job to me, compared to just me, myself, and I working through a few iterations before publishing.

As I review the interim versions, I see myself slipping away; then I re-insert myself, then the AI applies more of the style guide, and then I take that back, and so on. What I think this really means is that I need to work on the guiding prompt to reduce some of this back-and-forth.

More organization

As it removes my soul, I think the resulting content is more organized and, perhaps, easier to read at the end of the process. Or rather, it seems to have less of MY soul, despite my crafting an AI prompt to reproduce my “style.” Does this mean that my natural writing is disorganized?

Perhaps.

I prefer to think of myself as being “differently organized.” I think my natural, blog-post writing has more of a “stream-of-consciousness” feel, although I try to keep it understandable, and I rarely publish a first draft. It seems like changing the organization to suit the audience removes the “authentically human” nature of the non-AI articles. Again, perhaps another clue that I should do some maintenance on the prompts.

Less satisfying

I enjoy sharing my experiences by way of this blog, but the “feels like work” aspect sucks some of that fun out of the process. I no longer have the same sense of accomplishment when I press Publish like I recall having in the past. It’s more of a sense of relief that I’m finally finished with the damned post!

More learning

I feel as though having the robot editor review and edit the blog drafts gives me an external perspective on my writing. Having an editor who’s reviewed umpty-bazillion documents before mine can be a bit intimidating, yet instructive at the same time. In that sense, it’s very much like working with the best editors I’ve had in the past. Both my writing and I come out better for the experience.

It comes down to trust

I think most of my feelings of unease stem from the fact that, at some level, I don’t trust my AI editor. Its confident evaluation that I did “X,” which evaporates the instant I say I did “Y” (to which it agrees just as enthusiastically), is not confidence-inspiring. This “no two answers are the same” property of LLM-based GPTs seems like it should be a showstopper right out of the gate (a bias that likely comes from my decades of experience with the old, predictable computers and their crazy, deterministic algorithms). While that’s a property that can be managed, it still leaves me a bit anxious.

I’m not ready to abandon my AI-overlord, er, I mean co-writer, just yet. We’ll keep working together to come to some level of agreement and, maybe even a level of trust.

But if I’m honest, the idea of having AI agents running unchecked is still nightmare fodder for me. Maybe that’s the next nightmare I need to confront and overcome?

Testing antagonistic AI review on my own writing

I’m terrible at spotting flaws in my own work. As I iterate through multiple drafts and careful editing, I still tend to lose myself in the details and focus more on the trees than the forest. It’s not uncommon for me to miss weak arguments, unclear explanations, and assumptions that need defending—until after I publish, of course.

While preparing my previous post about AI writing workflows, I decided to test something: asking Claude to actively look for problems with my draft from an antagonistic perspective. Not just gentle feedback, but the kind of opposition you’d face from readers with strong opposing viewpoints.

The results were more brutal than I’m used to from Claude, yet more useful than I expected. Here’s what I learned by using adversarial review to stress-test arguments, and why this technique shows promise despite some significant limitations.

The experiment setup

Shortly before publishing, I gave Claude this prompt:

Review the most recent draft as an antagonistic reader who would like to find any and all problems with the article (whether constructive or just to be antagonistic). This reader might have a pro-AI agenda to push or an anti-AI (as in no AI can be good) agenda.

I was looking for the kind of opposition that finds every possible angle of attack—the readers who come to your work already convinced you’re wrong and looking for ammunition to prove it.

Claude responded by creating and applying three distinct antagonistic personas:

  • The ideological opponent (challenges core assumptions and claims bias)
  • The pedantic critic (nitpicks definitions and demands citations)
  • The bad faith reader (misrepresents positions for rhetorical advantage)

Then it proceeded to tear into my article from both pro-AI and anti-AI perspectives. I had asked for this, and the responses were deliciously brutal. They were antagonistic in exactly the way I’d requested, and to be honest, genuinely useful.
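
If you want to try this kind of adversarial pass outside a chat window, it scripts easily. Here’s a minimal sketch (my own illustration, not part of the original experiment) using the @anthropic-ai/sdk package. The model ID, the draft.md filename, and the persona wording are all assumptions to swap for your own:

```typescript
// adversarial-review.ts — a minimal sketch of scripted antagonistic review.
// Assumptions: @anthropic-ai/sdk is installed, ANTHROPIC_API_KEY is set in the
// environment, the draft lives in draft.md, and you run this as an ES module
// on a recent Node (for top-level await).
import Anthropic from "@anthropic-ai/sdk";
import { readFileSync } from "node:fs";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const personas = [
  "an ideological opponent who challenges core assumptions and claims bias",
  "a pedantic critic who nitpicks definitions and demands citations",
  "a bad-faith reader who misrepresents positions for rhetorical advantage",
];

const draft = readFileSync("draft.md", "utf8"); // hypothetical draft file

for (const persona of personas) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // assumed model ID; use your own
    max_tokens: 1024,
    system:
      `Review this draft as ${persona}. Find any and all problems ` +
      `with the article, whether constructive or just antagonistic.`,
    messages: [{ role: "user", content: draft }],
  });

  // Print only the text blocks from the response.
  console.log(`--- ${persona} ---`);
  for (const block of response.content) {
    if (block.type === "text") console.log(block.text);
  }
}
```

Running one pass per persona, rather than one combined prompt, keeps each reviewer “in character” instead of letting the critiques blur together.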

What different types of opposition reveal

Claude organized the feedback by each persona’s perspective on AI. What follows is a summary of the feedback. You can see all of it in the transcript of the chat.

Continue reading “Testing antagonistic AI review on my own writing”

Finding the line between AI assistance and AI dependence

After six months of diving into AI tools, I’m still figuring out how to work with them without compromising my professional integrity. The old boundaries between my words and borrowed words don’t map cleanly onto AI assistance. When AI helps craft my prose, am I still the author? When my students use GPTs (generative pre-trained transformers, a type of large language model, or LLM, more commonly known as AI chat tools) to generate sophisticated responses, how do I know whether they actually understand the material? The underlying tension in both contexts is the same: Where does human skill end and tool dependency begin?

I’ve been wrestling with this question through direct experience, using AI tools in my own writing and as I prepare to teach students about AI applications in technical writing scenarios. The uncertainty isn’t comfortable, but it’s productive. It’s forcing a clarity about professional standards that I previously took for granted.

The attribution gap

Traditional models for crediting intellectual work assume clear human sources. Plagiarism involves stealing and passing off ideas or words as one’s own without crediting the source. That creates a property line between what’s “my work” and what isn’t. The property line is fuzzy, but it’s recognized.

Work-for-hire contracts handle using another’s writing by transferring ownership. I might write the words, but my employer becomes “the writer” through legal assignment. Editorial assistance operates on a continuum: the greater the influence of dictionaries, grammar checkers, or human editors on the final product, the more they should be credited.

Search engines provide access to enormous amounts of knowledge while making attribution relatively straightforward. This makes it easy to build on others’ genius, and cite the sources to avoid plagiarism, while maintaining a clear distinction between your words and those of others.

But what’s a GPT in this context?

  • Is it a sophisticated grammar checker? Definitely.
  • Is it a mechanical editor? It can be.
  • Is it your personal writer-for-hire? It could be.
  • Is it a source of original content? Unclear.
  • Is it an industrial-strength plagiarism machine? That’s still a topic of heated discussion.

AI doesn’t fit into existing categories while somehow fitting into all of them. The academic press is still debating this. Stances range from “do what makes sense” to “no way, no how” depending on the field and editorial board. Most of my academic papers were guided by ACM and IEEE standards, so the ACM’s more flexible approach feels familiar and reasonable while maintaining academic transparency.

The competence question

As a writer, the integrity question centers on authorship: “Am I still the writer if AI helps structure my arguments?” As an instructor, it’s about learning: “Do my students understand the material if they’re using AI to generate responses?”

Continue reading “Finding the line between AI assistance and AI dependence”

How to get honest answers from AI tools (hint: better questions)

Google Cloud recently published “Smarter Authoring, Better Code: How AI is Reshaping Google Cloud’s Developer Experience,” describing how they’re applying AI to documentation challenges that every technical writing team recognizes: keeping content accurate, current, and useful for developers working with constantly evolving services.

Because AI tools consistently claim to excel at content analysis and summarization, I put that claim to the test on this article. I’ve been working with these tools for about six months now and wanted to see what I’d learned. So, I fed the Google article to Claude and spent some time exploring what it could tell me about the content, the claims, and the gaps.

Spoiler alert: the questions I asked mattered more than the answers I got.

The thought leadership gap: Style over substance

The Google article describes two main AI applications in technical writing:

  • Integrating Gemini into writers’ authoring environments for productivity tasks like generating tables and applying style guides, and
  • An automated testing system that uses Gemini to read documentation steps and generate Playwright scripts for validation.

The article also describes a multi-agent system for code sample generation that uses Protobuf definitions as the source of truth, with generator agents creating samples and evaluator agents scoring them against rubrics.
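
The article doesn’t show any of the generated scripts, but to make the idea concrete, here’s a hypothetical sketch of the kind of Playwright check such a pipeline might emit for a documented procedure. The URL, labels, and button names below are my inventions, not Google’s output:

```typescript
// doc-validation.spec.ts — a hypothetical example of the kind of Playwright
// test a doc-validation pipeline might generate from documented steps.
// The URL, labels, and button names are invented for illustration.
import { test, expect } from "@playwright/test";

test("docs: 'Create a bucket' steps still match the console", async ({ page }) => {
  // Step 1 in the docs: open the storage console.
  await page.goto("https://console.example.com/storage");

  // Step 2: click "Create bucket" (fails if the button was renamed or moved).
  await page.getByRole("button", { name: "Create bucket" }).click();

  // Step 3: name the bucket exactly as the docs instruct.
  await page.getByLabel("Bucket name").fill("docs-validation-demo");
  await page.getByRole("button", { name: "Create" }).click();

  // If the documented flow still works, the new bucket appears in the list.
  await expect(page.getByText("docs-validation-demo")).toBeVisible();
});
```

Run on a schedule, a suite of checks like this would catch the “factual drift” the article mentions: the docs fail the moment the UI stops matching them.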

Claude initially provided a summary that was cleaner and better organized than my mental notes after reading the same article. However, when I pushed for evidence supporting the claims, Claude articulated what I’d been sensing: the article provides limited concrete evidence beyond “over 100 tests daily” and references to “tens of thousands of idiomatic samples.” There was no mention of metrics on accuracy improvements, time savings, before/after comparisons, or validation of claims about preventing “factual drift.” You can see the whole conversation in the chat transcript.

When I questioned Claude about why Google would publish such a detail-light article, Claude offered a perspective that troubled me:

For thought leadership aimed at developer relations leaders and engineering executives, too much evidence could backfire. Publishing specific metrics creates comparison benchmarks that competitors can target. On the other hand, a claim like “over 100 tests daily” suggests impressive scale without revealing whether those tests actually pass.

That bothered me. We’re talking about a business communication norm where impressive-sounding claims matter more than verifiable results. Claude put it bluntly: “There’s an ethical gap between presenting a technical approach and implying proven success.”

Continue reading “How to get honest answers from AI tools (hint: better questions)”

Tech writing: dealing with changes for centuries

[Image: 15th-century technical writers in a library]

The panic is familiar. New technology arrives, threatens to automate away our jobs, and suddenly everyone’s scrambling to figure out what skills will matter in five years. Sounds like the current AI conversation in technical writing circles.

Rahel Anne Bailie posted a summary of changes in technologies that technical writers have dealt with over the past few decades. But technical writers have been navigating this exact disruption for centuries.

Think about it. In 500 years, anyone reading the words of technical writers today will wonder what the obsolete vocabulary means and what marvels a document titled “Installing Ubuntu” might hold. Who will remember Ubuntu in 500 years?

Now flip it around. Look at documents from alchemists 500 or 1,000 years ago and think about who wrote them. Some were written by subject matter experts, others by professional writers skilled at applying the tools of their day to record the knowledge of their day.

The pattern repeats: media evolves from stone tablets and chisels to quill pens and parchment, to movable type and printing presses, to desktop publishing and websites. The tools change.

What actually stays constant

Over the centuries, tech writers have been learning the technology they are documenting (alchemy, radar, APIs, what have you) and writing to specific audiences (wizards, technicians, software developers, and so on). Names change, but the guiding principles have changed very little.

What remains important to remember, and communicate, is the value that the scribes and writers bring to the customer experience.

Why the AI panic misses the point

AI represents another tool shift, not a fundamental change in what technical writers do. The introduction of AI into the field came on a bit like a bull in a china shop, with an initial message along the lines of “Outta my way! AI is here to save the day!” Now that the dust has settled, we can see that the real question isn’t whether AI will replace technical writers—it’s which technical writers will adapt their skills to work effectively with AI, just as previous generations learned to work with desktop publishing, content management systems, and web technologies.

Continue reading “Tech writing: dealing with changes for centuries”

My AI has a sense of snark

This weekend, I woke my LinkedIn profile from its hibernation and spent some time bringing it up to date and dusting off the cobwebs. The good news is that it gave me something to blog about!

I sent my profile into hibernation last fall for a variety of reasons. The main reason was just to come up for air: I found the constant barrage of “inspirational accomplishments” and “incredible journeys” overwhelming. Sprinkle that with incredibly sad stories of people having to sell their houses after months of unsuccessful job hunting, and it became just too much of an emotional roller coaster ride.

No offense to anyone who’s managed to accomplish anything during these difficult times. And my heart goes out to everyone who needs a job and is still struggling to find one. But I needed to step away from the notifications for a while.

Well, time’s up! I’m back online, and I’m here to write this (let’s call it a celebratory) post.

Back online

To kick things off, I needed some content for LinkedIn (as is the nature of the site), so I asked Claude (AI) to write a blog post about the style of LinkedIn posts, in the style of a LinkedIn post. This is a task for AI because it’s not natural for me to write in, or about, the genre.

Unfortunately, I might have confused it. The AI was working in the context of my blogging-style prompt, and you can see how it tries to follow my blog-post style while analyzing LinkedIn accomplishment theater, all while trying to make the post itself LinkedIn-worthy.

What follows is the AI-generated post that I wish I could have written.

I just published a blog post, and you won’t believe what happened next

Yesterday, I sat down to write a simple blog post about content strategy. What started as 800 words of practical advice somehow transformed into a LinkedIn-worthy tale of professional triumph that would make a motivational speaker weep with pride.

Here’s how it happened (and why we should probably talk about it).

Continue reading “My AI has a sense of snark”

Teaching technical writing in an AI world

A lot has changed in tech writing during the past two years when it comes to LLM tools (a.k.a. AI). That time frame coincides with my tenure teaching API documentation, and watching how my students adapt to these tools has given me some insight into how our profession is evolving.

More to the point, it’s forced me to develop a systematic approach to AI integration for my next course.

The challenge: Teaching moving targets

When I started teaching API documentation in spring 2024, LLM tools felt like the “Apple II” stage of PC evolution: interesting, but not quite ready for serious work. My students were “cautiously skeptical” and treated AI as a curiosity rather than a necessity.

Some students used LLM tools to help create rough drafts, while others avoided the AI tools to get a more hands-on experience.

That changed rapidly. By the third course, students weren’t asking whether to use AI; they were asking how to use it effectively. The industry had moved very quickly, and my students needed practical frameworks, not philosophical debates, to confront this new reality.

What I learned from watching students evolve

Rather than ban AI tools, I decided to lean into them and watch what happened. I asked students to describe if, and how, they used AI tools in their assignments. This gave them a record for their portfolio presentations, but it also created an informal longitudinal study of AI application in technical writing education. Here’s a summary of what I observed:

Continue reading “Teaching technical writing in an AI world”

When AI fixes everything (except what matters)

Imagine that you’ve deployed AI to generate your technical documentation. The tool promised to revolutionize your content workflow, and honestly, it delivered on speed. What used to take days now happens in minutes.

Now, fast-forward six months to find customer support is drowning in confused user tickets. Social media mentions of your product are increasingly sarcastic. Sales is asking pointed questions about why adoption rates are dropping, and nobody can figure out what changed. The product is as solid as ever.

In this post, I want to follow up on a recent post that ended with just such a scenario and offer a more optimistic outcome.

The invisible problem

When you don’t have reliable documentation analytics, problems announce themselves through every channel except the actual source. Your first clue that AI is producing unhelpful developer docs won’t be a dashboard alert. It’ll be angry developers posting screenshots of your broken code examples on social media.

Remember, automating your processes accelerates them in whatever direction they were already heading. If your current documentation process and performance are unmeasured and reactive, you won’t know if AI is helping you out or not, until long after the damage has been done.

The AI just keeps producing content that checks all the boxes while serving no one.

A different path forward

The dystopian scenario above isn’t inevitable. But avoiding it requires resisting the “deploy AI everywhere immediately” impulse and taking a more methodical approach.

Continue reading “When AI fixes everything (except what matters)”

Do-it-yourself (with a friend) portfolio

Everyone tells aspiring tech writers to find an open-source software (OSS) project to create content for a portfolio. Unfortunately, while that advice is popular, I haven’t heard of it being very successful.

Rather than leave you hunting for those elusive OSS opportunities, I’ll describe how the portfolio project I use in my API documentation course works. This approach might be more accessible and achievable for building the portfolio content that you need.

The course portfolio project works because it addresses two critical elements often missing from solo projects: collaboration and accountability. Both are essential skills for technical writers, and both significantly improve your chances of producing something portfolio-worthy.

How to run your own portfolio project

Here’s how our portfolio projects work, as adapted for self-directed learning:

Continue reading “Do-it-yourself (with a friend) portfolio”