Dachary Carey, a senior technical writer at MongoDB, cares. A lot! She has been researching how AI tools interact with technical documentation. Her approach is experimental: she probes how AI agents actually behave when they read docs, rather than taking reports from developers and other experts at face value.
When Dachary first published her findings, they caught the technical writing community off guard. Her follow-up research has shown how pervasive and inconsistent the partial-reading behavior is. While some tools indicate how much of the content they read, most don’t. And it’s not a quirk: when asked, Claude confirmed that partial reading is common practice, in part to minimize token use.
For tech writers, that’s a problem worth understanding. When an AI agent skips content and nothing flags it, the tool’s answer comes back sounding confident and looking complete. Neither the end user nor the developer who built the tool has any way of knowing what got left out, because the failure mode is silent. In fact, it’s not even considered a failure.
Dachary’s research revealed that getting data from a web page to an AI tool’s answer involves more steps and more moving parts than many technical writers realized. That complexity is worth unpacking, because buried in it is a question with some uncomfortable answers: who in that process actually has a reason to care whether the AI gets it right?