I’m not an AI hater. Really!

After a particularly tiring vibe coding session with Claude, I shared some of the resulting grumpiness in a few posts to the Write the Docs slack. I’ll confess: I’m probably not the biggest fan of AI (as the term is currently bandied about). But I’m not a hater. I’m just, well, disappointed with it. It’s just not living up to its purported potential (a.k.a. hype, these days).

I’d been writing code (i.e. prompting Claude to write code for me, a.k.a. vibe coding) and Claude’s code was getting buggier and buggier. I’ve seen that happen before after it’s written a lot of code. It acts like it’s tired, but I think it’s due to having too many things to keep track of in the conversation, so it loses its place (I’m not an AI psychologist…yet).

In any case, I was beginning to wonder if it would have been faster to just start typing my own code (it wouldn’t), but I wanted to see how it played out. Eventually, after Claude had gotten stuck, again, I was troubleshooting in parallel and suggested a fix. Lo and behold, Claude agreed (as always). With that experience still fresh in my head, I went to the Write the Docs slack to see how others were faring in their AI journeys.

Thinking of past bubbles

In one post, I compared current AI hype to the hype I recall when PCs (as in the IBM PC) came out in the early 80s. They promised the moon and, in microscopically fine print, mentioned that “some assembly was required.” Sometimes some C, as well. (If you know, you know.)

In the 80s, it’s not that PCs weren’t amazing pieces of technology that could fit on your desk and still leave room for your phone and desk blotter. Remember, this was a time when computer hardware had to have its own air-conditioned office. It’s that they lacked the “killer app” (the application that solves a high-value problem for a large audience) until Lotus 1-2-3 and Multiplan, two of the first spreadsheet apps, came out and ran on the PC.

Those killer apps transformed the PC from a geeky novelty to an absolute necessity. They enabled regular people to see the value that these machines could provide. Fast-forward to today, and that’s how AI seems to be positioned: a novel (to most non-AI researchers and developers) technology waiting for its “killer app.” Just like the PC, AI can do anything, but nobody will care until it does something useful. It’s hard to tell what AI’s killer app will be until we see it. If I knew what it was, I wouldn’t be here writing another blog post; I’d be working on AI’s killer app!

Continue reading “I’m not an AI hater. Really!”

Finding the line between AI assistance and AI dependence

After six months of diving into AI tools, I’m still figuring out how to work with them without compromising my professional integrity. The old boundaries between my words and borrowed words don’t map cleanly onto AI assistance. When AI helps craft my prose, am I still the author? When my students use GPTs (generative pre-trained transformers, built on large language models, or LLMs, and more commonly known as AI chat tools) to generate sophisticated responses, how do I know if they actually understand the material? The underlying tension in both contexts is the same: Where does human skill end and tool dependency begin?

I’ve been wrestling with this question through direct experience, using AI tools in my own writing and as I prepare to teach students about AI applications in technical writing scenarios. The uncertainty isn’t comfortable, but it’s productive. It’s forcing a clarity about professional standards that I previously took for granted.

The attribution gap

Traditional models for crediting intellectual work assume clear human sources. Plagiarism involves stealing and passing off ideas or words as one’s own without crediting the source. That creates a property line between what’s “my work” and what isn’t. The property line is fuzzy, but it’s recognized.

Work-for-hire contracts handle using another’s writing by transferring ownership. I might write the words, but my employer becomes “the writer” through legal assignment. Editorial assistance operates on a continuum: the greater the influence of dictionaries, grammar checkers, or human editors on the final product, the more they should be credited.

Search engines provide access to enormous amounts of knowledge while making attribution relatively straightforward. They make it easy to build on others’ genius and cite the sources to avoid plagiarism, while maintaining a clear distinction between your words and those of others.

But what’s a GPT in this context?

  • Is it a sophisticated grammar checker? Definitely.
  • Is it a mechanical editor? It can be.
  • Is it your personal writer-for-hire? It could be.
  • Is it a source of original content? Unclear.
  • Is it an industrial-strength plagiarism machine? That’s still a topic of heated discussion.

AI doesn’t fit into existing categories while somehow fitting into all of them. The academic press is still debating this. Stances range from “do what makes sense” to “no way, no how” depending on the field and editorial board. Most of my academic papers were guided by ACM and IEEE standards, so the ACM’s more flexible approach feels familiar and reasonable while maintaining academic transparency.

The competence question

As a writer, the integrity question centers on authorship: “Am I still the writer if AI helps structure my arguments?” As an instructor, it’s about learning: “Do my students understand the material if they’re using AI to generate responses?”

Continue reading “Finding the line between AI assistance and AI dependence”

Tech writing: dealing with changes for centuries

15th-century technical writers in library

The panic is familiar. New technology arrives, threatens to automate away our jobs, and suddenly everyone’s scrambling to figure out what skills will matter in five years. Sound like the current AI conversation in technical writing circles?

Rahel Anne Bailie posted a summary of changes in technologies that technical writers have dealt with over the past few decades. But technical writers have been navigating this exact disruption for centuries.

Think about it. In 500 years, anyone reading the words of technical writers today will wonder what the obsolete vocabulary means and what marvels a document titled “Installing Ubuntu” might hold. Who will remember Ubuntu in 500 years?

Now flip it around. Look at documents from alchemists 500 or 1,000 years ago and think about who wrote them. Some were written by subject matter experts, others by professional writers skilled at applying the tools of their day to record the knowledge of their day.

The pattern repeats: media evolves from stone tablets and chisels to quill pens and parchment, to movable type and printing presses, to desktop publishing and websites. The tools change.

What actually stays constant

Over the centuries, tech writers have been learning the technology they are documenting (alchemy, radar, APIs, what have you) and writing to specific audiences (wizards, technicians, software developers, and so on). Names change, but the guiding principles have changed very little.

What remains important to remember, and communicate, is the value that the scribes and writers bring to the customer experience.

Why the AI panic misses the point

AI represents another tool shift, not a fundamental change in what technical writers do. The introduction of AI into the field came on a bit like a bull in a china shop, with an initial message along the lines of “Outta my way! AI is here to save the day!” Now that the dust has settled, we can see that the real question isn’t whether AI will replace technical writers; it’s which technical writers will adapt their skills to work effectively with AI, just as previous generations learned to work with desktop publishing, content management systems, and web technologies.

Continue reading “Tech writing: dealing with changes for centuries”

Looking back to look ahead

It’s been a while since I’ve contributed to my blog. Truth be told, I’ve been busy.

The day after my birthday in 2023, I found out that, along with 12,000 of my now-former coworkers, I no longer had a job with Google.

I remember the feeling when I got the news (by email, of course). It was like when I was learning to fly and the instructor cut the engine mid-flight. One minute you’re looking out the window, checking off the waypoints to your destination. The next, you’re looking for a place to land, because you weren’t going to be flying for much longer.

For the non-pilots reading this, most light airplanes keep flying after an engine failure. They just don’t stay at the same altitude for very long.

Learning to fly taught me that the longer you ignore the reality of a situation, the shorter your list of options becomes. So, with thousands of people in my industry being laid off (and, as we’d see in the months that followed, this was just the beginning), I concluded that this was the end of my professional career as I had come to know it.

I was now gliding.

However, that’s a story for another post (or two).

A soft landing

The good news is (as it is for the pilots who are prepared for the possibility of an engine failure), my wife and I have landed safely and we’re doing fine.

My post-career landing was softened by an opportunity to teach an API documentation course at the University of Washington’s Professional and Continuing Education school. I just wrapped up the third term last week, and it’s been a lot of fun.

However, the past two years have brought seismic shifts to technical writing, particularly in API documentation. Large language model tools have reshaped how we approach documentation creation, analysis, and maintenance. As practitioners, we’re all grappling with the same fundamental questions:

  • How do we adapt our established practices?
  • What assumptions about our craft need revisiting?

Enter AI and the curriculum challenge

Large language model (LLM) tools have taken the world by storm in the past two years. API documentation hasn’t been immune to their influence. As such, I’ve been working on an update to my API documentation course to integrate AI technologies to keep the curriculum current.

The challenge isn’t just adding AI tools to the syllabus. It’s understanding how these tools change the fundamental nature of documentation work. What skills remain essential? What new competencies do we need to develop? How do we teach both the power and limitations of AI-assisted documentation?

As I update the API documentation course, I’ve been putting different AI tools to the test, with some rather interesting results.

Continue reading “Looking back to look ahead”

A look at the past to see the future of technical writing

Out of pure coincidence, I stumbled across this blog post about Technical writing in 2049 by Fabrizio Ferri Benedetti as I reviewed some examples of my earliest technical writing. I thought this might be a good opportunity to reflect on the past to see into the future.

My oldest artifact of technical writing that I authored is a technical manual for a central patient monitoring system I built for a hospital in 1981. The oldest technical manual I could find in my library is an Air Force training manual from 1952. I’ve kept some other relics of technical writing, but most are still in boxes.

Fabrizio’s blog post ends with this line, “What do you think tech writing will look like in 2049? I’d love to hear your predictions!” With my historical artifacts in hand, I accept his challenge and offer this response.

While I can only imagine what I thought about the future in years past, I can use these artifacts as examples of where technical writing has been and where it might go. I can also use them to describe how hard it is to predict the effects of a technical disruption, except by saying, “The more things change, the more they stay the same.”

Tech comm 1999

25 years ago, we still mostly printed the tech docs in books, as we had done in the preceding decades, although online documentation was clearly about to make its debut. For a short while, CDs replaced printed docs, but soon after, tech docs were almost exclusively served online. Could I have imagined online docs in 1999? Probably, without too much imagination. After all, we already had AltaVista.

To look at the past, present, and the future of technical writing, I think it’s best to tease that apart into content, production, and use or audience.

I’ll leave out stakeholders, because I haven’t seen that change much since content went online and the business model for technical writing all but disappeared. That’s a conversation for another article.

Continue reading “A look at the past to see the future of technical writing”

Is there any more room for innovation in tech writing?

I was looking through some classic (i.e., old) examples of technical writing and noticed how the format of application programming interface (API) reference topics hasn’t really changed since these examples were published in 1988 and 1992.

Is this because there’s been no innovation in technical writing during the intervening 30-ish years or, perhaps, we’ve found something that works, so why change? Taking that a step further, if what worked 30 years ago still works, is there any more room for innovation in tech writing?

Here are a couple of examples that I found in my library of software documentation. (It’s a library of printed documentation so there’s not much in there from the 21st century.)

MS-DOS Encyclopedia (1988)

Reference topic from the MS-DOS Encyclopedia

The first example of classic documentation is from the Microsoft MS-DOS Encyclopedia (Ray Duncan, Microsoft Press, 1988), a 1,570-page collection of everything you’d need to know about programming for MS-DOS (v3.2) in 1988.

It starts with how MS-DOS was originally developed, continues with conceptual overviews of the different operating system functions, how to create common applications and extensions to the operating system, and various reference topics, such as the interrupt example I included here. It’s a one-stop reference manual that would prepare any MS-DOS programmer or device-driver developer for successful coding experiences.

This 33-year-old encyclopedia presents information in a format that you can still see used today.

  • Overview
  • Conceptual content
  • How-to articles of common tasks
  • Reference topics on various aspects of the product
  • Cross-references where they make sense

The content in the example reference pages that I included also follows a format that is still seen today:

Continue reading “Is there any more room for innovation in tech writing?”

Reflections as I begin my third time on jury duty

Diagram of an API as a gear with connection points

Today I met with my co-jurors who’ll be judging this year’s DevPortal awards nominees with me in the coming weeks. The entrants in this year’s showcase represent an impressive effort on the part of the nominees, so we have our work cut out for us. This is my third year on the jury and I’m looking forward to this year’s entries.

What struck me as we kicked off this year’s evaluation today was the jump to 15 award categories, up from last year’s eight, and how the presentation of APIs to the world has changed over the years. What impresses me every year is the innovation applied to make this presentation effective and engaging.

Pronovix hosts this event and they’ve modeled this year’s 15 categories around the Maturity model for devportals, which describes these three dimensions of developer portal maturity.

  • Developer experience
  • Business alignment
  • Operational maturity

When I judged the entries last year, I approached it from a usability perspective–how easily could the customer do what they needed to do? From that perspective, the maturity model dimensions represent the usability of the site from the perspectives of different stakeholders involved with an API developer’s portal.

From the perspective of ease of use, developer experience represents how easy the site makes it for the reader to get value from the product. Operational maturity represents how easy it is for contributors to add value to the developer portal. Business alignment represents how easy the site makes it for the organization to access value.

To be successful in today’s crowded API marketplace, a developer portal must serve all three of the stakeholders these dimensions represent. The maturity model dimensions reflect how APIs must do more than just provide access to a service.

Each year, the entrants in this competition get better and the competition gets even more difficult to judge. It’s clear that the entrants are taking notes and applying what they learn.

Filmmaking lessons that improved my technical writing

Bob sitting next to 16mm movie camera
Cinematographer Bob on location

Some time ago, I was a filmmaker. Honestly, I wasn’t especially good at it. I wasn’t bad, just OK. However, while I enjoyed the work, using my checkbook balance as a metric, I wasn’t good enough at it to make a living. Because of that, I’m a technical writer.

Filmmaking, however, taught me a lot about technical writing, so I thought I’d share a few of the lessons I learned.

A high-quality film is not the same as a good film

I’m sure you can recall a film that was awful. It could have had excellent lighting, exposure, audio, soundtrack, etc., but you still wonder how you’ll ever get back those 90 minutes of your life. There are many excellent technicians in the film industry who produce technically high-quality material. And yet, somehow, all that high-quality material can result in a film that is painful to watch.

What I learned was that ALL the elements of a film must work towards the goal of telling the story or the film doesn’t work. It’s surprisingly binary. Just being good in a few categories is rarely enough to carry a film.

Except for the story, being good in one aspect rarely makes up for being bad in another. I was a filmmaker before YouTube and home-made videos–around the time reality shows started becoming popular. The importance of story over technical quality (caveat: the audio must always be acceptable) was clear. Since then, what you see trending on YouTube should convince you that story is still king.

With technical docs, I see a lot of concern over “technical” quality, such as spelling, language, vocabulary, and bugs fixed. I’m not saying these elements aren’t important. But if you’re not telling your audience what they want to know, how well it’s spelled isn’t going to matter. I was surprised to see a similar discounting of technical quality in my dissertation study (see API reference topic study – summary results). The problem, unfortunately, is that these technical qualities are easier to count, so it’s deceptively attractive to equate technical quality with document quality or utility. Technical quality might be a factor in utility, but it’s not a proxy for it.

Don’t give the audience a reason to leave

A film must tell a story in a way that keeps the audience wanting to know what’s next. Whether the film is a 10-second commercial or a 90-minute feature, crafting a story in such a way is a skill that takes practice and a knowledge of your audience. It’s an aspect of the film that starts with the script and must be supported all the way through production, editing, and release. It’s one of those things that, if not done well, can result in one of the bad examples I referred to in the previous lesson.

Continue reading “Filmmaking lessons that improved my technical writing”

Charting the popularity of technical communication

I was exploring Google Trends to see what was happening with tech comm and related terms and found this.

As an added bonus, if you click on Google Trends at the bottom of each image, you can open the query in the Google Trends site and play with the variables.

Tech Comm ain’t what it used to be

Technical communication and tech comm have been declining in popularity over time. The highest point on the chart is represented as 100, with the rest of the data relative to that. This chart shows that the terms dropped in popularity until about 2007, after which they remained relatively stable, except for tech comm, which continued a slow but steady decline. The joined term, techcomm, is even less common; I omitted it because it ranged from only 0 to 6 over this time period.
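That 0-to-100 scaling is simple to reproduce: divide each value in the series by the series’ maximum and multiply by 100. Here’s a minimal sketch in Python (the search counts are invented for illustration, not real Trends data):

```python
def trends_index(values):
    """Scale a series so its highest point becomes 100,
    the way Google Trends normalizes interest over time."""
    peak = max(values)
    return [round(100 * v / peak) for v in values]

# Hypothetical monthly search counts for a term (made-up numbers)
raw = [40, 80, 200, 150, 60]
print(trends_index(raw))  # -> [20, 40, 100, 75, 30]
```

Because each series is scaled to its own peak, comparing two terms only makes sense when they’re plotted together against a shared maximum, which is why the 100 on a multi-term chart marks the single highest point across all the plotted terms.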


Technical communication has annual peaks of activity, which could correspond to TC-related conferences. (I don’t have a database with which to correlate them, so it’s just a hunch.)

What’s moving up?

Continue reading “Charting the popularity of technical communication”

Is writing API documentation going the way of the keypunch operator?

I’m in the process of writing one of several academic articles that my current profession (professor) demands of me. An essential part of the process is indulging in the diversions and distractions necessary to retain some sanity throughout the process. Today’s diversion was updating my global bibliography. Unfortunately, that idea turned out to have some depressing side effects, which I’m here to share with you.

It turns out that there’s a lot of research being done in how to automatically generate API documentation. Having written a lot of it and read a lot more, I can certainly understand the motivations. What I didn’t realize was from how many different directions the problem was being attacked. Someone even patented an idea for it (US 8819629 B2, in case you were wondering).

Continue reading “Is writing API documentation going the way of the keypunch operator?”