When AI fixes everything (except what matters)

Imagine that you’ve deployed AI to generate your technical documentation. The tool promised to revolutionize your content workflow, and honestly, it delivered on speed. What used to take days now happens in minutes.

Now fast-forward six months: customer support is drowning in confused user tickets. Social media mentions of your product are increasingly sarcastic. Sales is asking pointed questions about why adoption rates are dropping, and nobody can figure out what changed. The product is as solid as ever.

In this post, I want to offer a more optimistic outcome, following up on a recent post that ended with a scenario much like the one above.

The invisible problem

When you don’t have reliable documentation analytics, problems announce themselves through every channel except the actual source. Your first clue that AI is producing unhelpful developer docs won’t be a dashboard alert. It’ll be angry developers posting screenshots of your broken code examples on social media.

Remember, automating your processes accelerates them in whatever direction they were already heading. If your current documentation process is unmeasured and reactive, you won’t know whether AI is helping or hurting until long after the damage has been done.

The AI just keeps producing content that checks all the boxes while serving no one.

A different path forward

The dystopian scenario above isn’t inevitable. But avoiding it requires resisting the “deploy AI everywhere immediately” impulse and taking a more methodical approach.

Start with a content inventory

Before AI touches anything, catalog what you currently have. Not just “we have 47 help articles,” but what each piece of content is supposed to accomplish. This isn’t as tedious as it sounds because you’re looking for patterns, not perfection.
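
To make that concrete, here’s a minimal sketch of what an inventory record might look like. The field names and the content_inventory.csv file are assumptions, not a prescribed schema; the point is that every entry states what the content is supposed to accomplish.

    # Minimal content-inventory sketch. Field names and the CSV layout are
    # assumptions; adapt them to whatever tooling you already use.
    import csv
    from dataclasses import dataclass

    @dataclass
    class DocPage:
        url: str            # where the content lives
        title: str          # what it's called
        purpose: str        # what the reader should accomplish after reading it
        owner: str          # who keeps it accurate
        last_reviewed: str  # when a human last checked it

    def load_inventory(path: str) -> list[DocPage]:
        """Read a hand-maintained CSV (headers matching the fields above)."""
        with open(path, newline="", encoding="utf-8") as f:
            return [DocPage(**row) for row in csv.DictReader(f)]

    pages = load_inventory("content_inventory.csv")
    no_purpose = [p for p in pages if not p.purpose.strip()]
    print(f"{len(pages)} pages cataloged, {len(no_purpose)} with no stated purpose")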

Apply a simple taxonomy

Group your content by what readers are trying to do:

  • Get started quickly (onboarding)
  • Complete a specific task (procedures)
  • Understand concepts (learning)
  • Look up specific information (reference)
  • Troubleshoot problems (support)

This maps roughly to reader goals: doing something now (onboarding and procedures), learning for later, being reminded (reference), and getting unstuck (troubleshooting). Each category has different success metrics.
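
If you like keeping this kind of thing in code, here’s a small sketch of the taxonomy as a Python enum, with the rough goal mapping spelled out. The names are illustrative, not a standard.

    # The five content categories from the list above, plus the rough mapping
    # to reader goals described in the post.
    from enum import Enum

    class ContentType(Enum):
        ONBOARDING = "get started quickly"
        PROCEDURE = "complete a specific task"
        LEARNING = "understand concepts"
        REFERENCE = "look up specific information"
        TROUBLESHOOTING = "troubleshoot problems"

    READER_GOAL = {
        ContentType.ONBOARDING: "doing something now",
        ContentType.PROCEDURE: "doing something now",
        ContentType.LEARNING: "learning for later",
        ContentType.REFERENCE: "being reminded",
        ContentType.TROUBLESHOOTING: "getting unstuck",
    }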

Track performance by goal, one category at a time

Don’t try to measure everything at once. Pick your highest-impact content category and establish baseline metrics that matter:

  • For procedures: task completion rates, not time on page
  • For reference: quick-exit success (people finding what they need fast)
  • For learning content: progression through related topics
  • For troubleshooting: resolution rates
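
To make the first of those concrete, here’s a sketch of a task-completion-rate calculation for procedures. It assumes you can export page-view and task-completion events from your analytics tool; the event field names here are hypothetical.

    # Hypothetical event format: {"session": "abc", "event": "page_view",
    # "page": "/docs/install"} or {"session": "abc", "event": "task_completed"}.
    def task_completion_rate(events: list[dict], procedure_pages: set[str]) -> float:
        """Share of sessions that viewed a procedure page and also completed the task."""
        viewed, completed = set(), set()
        for e in events:
            if e["event"] == "page_view" and e.get("page") in procedure_pages:
                viewed.add(e["session"])
            elif e["event"] == "task_completed":
                completed.add(e["session"])
        return len(viewed & completed) / len(viewed) if viewed else 0.0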

The key is to start small and build measurement habits before involving AI.

Deploy AI incrementally

Once you can measure what works in one content category, you can start experimenting with AI assistance. Generate variations, test them against your established metrics, and learn what AI does well versus what works better with human oversight. As a bonus, if you find content that isn’t working, you can point your AI tools at fixing exactly that.

Maybe AI excels at generating multiple code examples but struggles with contextual explanations. Maybe it’s great at updating reference material but terrible at onboarding sequences. You won’t know which is which without a reliable measurement system in place before you start.
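
Here’s a minimal sketch of that experiment, assuming you already track per-session completion outcomes for the page in question: compare the AI-generated variant against the human-written baseline on the metric you established earlier. The example numbers are made up.

    # Each list holds per-session outcomes for one page variant
    # (True = the reader completed the task).
    def compare_variants(baseline: list[bool], ai_variant: list[bool]) -> dict:
        def rate(outcomes: list[bool]) -> float:
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        return {
            "baseline_rate": rate(baseline),
            "ai_rate": rate(ai_variant),
            "delta": rate(ai_variant) - rate(baseline),
        }

    print(compare_variants([True, True, False, True], [True, False, False, True]))
    # -> {'baseline_rate': 0.75, 'ai_rate': 0.5, 'delta': -0.25}

If the delta is clearly negative, that’s your signal to keep a human in the loop for that content type rather than expanding the rollout.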

The organizations that succeed with AI in technical documentation aren’t the ones that deploy it most aggressively. They’re the ones that deploy it most thoughtfully, with clear feedback mechanisms that tell them whether they’re moving in the right direction.

The tl;dr:

  • Start measuring what you have.
  • Apply AI to what you understand.
  • Expand gradually based on what you learn.

It’s not as exciting as “AI will revolutionize everything,” but it’s a lot more likely to make things work better.
