What Makes API Documentation AI-Ready: Code Examples

In previous posts, I explored how structural principles and writing quality standards that serve human readers also serve AI tools. Those posts focused on the prose found in all technical documentation. API documentation presents an additional challenge: teaching concepts through code examples.

I asked Claude what makes code examples easy for an AI tool to process. The patterns it identified align with established principles from instructional design and technical communication pedagogy. Once again, the fundamentals endure—whether you’re teaching human developers or enabling AI tools to extract meaning from documentation.

Code presentation patterns that help processing

Claude identified the following helpful patterns for presenting code in documentation.
I’ve added comments in italics to Claude’s responses for additional context and clarification.

Syntax highlighting with language specified

```javascript
// Language tag makes intent clear
const result = await fetchData();
```

versus unmarked blocks requiring inference
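
For contrast, here’s the same line in an unmarked block:

```
const result = await fetchData();
```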

Complete, runnable examples

  • Includes necessary imports and dependencies
  • Shows actual input and expected output
  • Avoids fragments with ellipses everywhere

I discuss the nuance of this later in the article.
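
As a minimal sketch of what “complete” can look like in practice (using Node’s built-in crypto module), note how the import, the input, and the expected output all appear together:

```javascript
// Complete and runnable: import, input, and expected output in one place
import { createHash } from 'node:crypto';

const input = 'hello';
const digest = createHash('sha256').update(input).digest('hex');

console.log(digest);
// => 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```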

Inline comments explaining non-obvious logic

```python
# Convert UTC to user's local timezone before display
local_time = utc_time.astimezone(user_tz)
```

Variable names that indicate type and purpose

  • userConfig vs data
  • isAuthenticated vs flag
  • errorMessage vs msg
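
A quick sketch of the difference in context, with hypothetical helpers stubbed in so the snippet runs:

```javascript
// Hypothetical helpers, stubbed so the example is self-contained
const loadConfig = () => ({ retries: 3 });
const checkAuth = () => true;

// Names that carry type and purpose
const userConfig = loadConfig();      // clearly a configuration object
const isAuthenticated = checkAuth();  // clearly a boolean

// Names that force the reader (or tool) to infer both
const data = loadConfig();
const flag = checkAuth();
```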

Context before the code

  • For example, “To handle rate limiting, use this decorator:” sets up the problem that the following code solves.
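
To illustrate the pattern in JavaScript (a retry wrapper rather than a decorator; the API is assumed to signal rate limiting with HTTP 429), the setup sentence names the problem and the code that follows solves it. “To handle rate limiting, retry with exponential backoff:”

```javascript
// Retry a request with exponential backoff when the API returns 429
async function fetchWithBackoff(url, attempts = 3) {
  for (let attempt = 0; attempt < attempts; attempt++) {
    const response = await fetch(url);
    if (response.status !== 429) return response;
    // Wait 1s, 2s, 4s... before trying again
    await new Promise(resolve => setTimeout(resolve, 2 ** attempt * 1000));
  }
  throw new Error('Rate limit persisted after retries');
}
```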

Expected output shown

```bash
$ npm run build
> Building for production...
> ✓ Compiled successfully in 3.2s
```

Version information

  • “For React 18+” versus assuming current version
  • Critical for fast-moving frameworks
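
One way to surface the version constraint in the example itself. A sketch that assumes React 18, a JSX build setup, and an App component defined elsewhere:

```javascript
// Requires React 18+: createRoot replaced ReactDOM.render
import { createRoot } from 'react-dom/client';
import App from './App'; // assumed application component

createRoot(document.getElementById('app')).render(<App />);
```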

Error scenarios that really happen

```javascript
// Generic error (not helpful for learning)
try {
  await operation();
} catch (error) {
  console.error(error);
}

// Specific errors with guidance (helpful)
try {
  await operation();
} catch (error) {
  if (error.code === 'ECONNREFUSED') {
    // Database not running - check connection string
  } else if (error.code === 'ETIMEDOUT') {
    // Network issue - check firewall rules 
  }
}
```

Realistic data shapes

```javascript
// Toy data (limited learning value)
const user = { name: 'John' };

// Actual API response structure (teaches real usage)
const user = {
  id: 'usr_1234',
  email: 'john@example.com',
  created_at: '2024-01-15T08:30:00Z',
  metadata: { plan: 'pro' }
};
```

Configuration patterns people use

```python
import os

# Shows a real deployment concern: fail fast on missing configuration
DB_URL = os.environ.get('DATABASE_URL')
if not DB_URL:
    raise ConfigError('DATABASE_URL must be set')  # app-defined exception
```

The preceding patterns help both AI tools and human developers process code examples efficiently. They also serve developers who use assistive technology. Syntax highlighting with language tags helps screen readers identify code blocks, descriptive variable names clarify intent without visual inspection, and explicit context helps readers who can’t scan the entire page visually.

Patterns that complicate processing

Claude also provided these examples as patterns that create obstacles for AI processing. These examples also complicate human understanding:

[using] Code as images or screenshots

  • [Claude] Can’t copy, test, or analyze structure
  • Screen readers can’t access content
  • OCR introduces syntax errors

Research on API documentation shows this is a common problem: developers need to copy and test code, and images prevent that (Robillard, 2009). Screen reader users face the same barrier—code images are completely inaccessible without alt text describing the entire code block.

Incomplete fragments without context

```javascript
.then(response => {
  // do something
})
```

What’s the promise? What type is response?
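
A completed version answers both questions at the source. A sketch assuming a JSON endpoint (the URL is hypothetical):

```javascript
// The promise comes from fetch; response is a Fetch API Response
fetch('https://api.example.com/users')
  .then(response => response.json())
  .then(users => console.log(users));
```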

Placeholder hell

```
POST /api/{version}/{resource}/{id}
```

Without explanation of valid values
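
Contrast with placeholders defined where they appear (the values here are hypothetical):

```
POST /api/{version}/{resource}/{id}
     {version}:  API version, e.g. "v2"
     {resource}: one of "users", "teams", "projects"
     {id}:       resource identifier, e.g. "usr_1234"
```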

Mixed concerns in one example

  • Authentication + error handling + business logic + styling combined
  • Makes it hard to identify which part demonstrates the concept
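
If the concept being taught is token refresh, for instance, the example can isolate that one concern (the helpers are hypothetical stubs so the sketch runs):

```javascript
// Hypothetical helpers, stubbed for illustration
const tokenIsExpired = () => true;
const refreshToken = async () => { /* exchange the refresh token here */ };

// Focused on one concern: refreshing an expired token before a request
async function withFreshToken(makeRequest) {
  if (tokenIsExpired()) {
    await refreshToken();
  }
  return makeRequest();
}
```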

Outdated examples without warning

  • Deprecated APIs shown as current approach
  • No indication that syntax has changed
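
When an older API must appear at all, a one-line warning keeps it from reading as the current approach (the method names are hypothetical):

```javascript
// Deprecated since v3; use client.fetchUsers() instead
client.getUsers(renderUsers);
```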

Inconsistent style within documentation

  • Some examples use async/await, others use .then()
  • Suggests copy-paste from different sources
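
Both styles express the same operation; the problem is mixing them without explanation (getUser and renderUser are hypothetical):

```javascript
// Pick one style per documentation set
const user = await getUser(id);             // async/await
getUser(id).then(user => renderUser(user)); // promise chain
```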

Missing error handling

  • Happy path only
  • Doesn’t show what errors look like or how to handle them

Magic values

```python
if status == 42:  # what does 42 mean?
```
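
Naming the value removes the guesswork. A JavaScript sketch (the constant’s meaning is hypothetical):

```javascript
// Name the value so readers don't have to guess what 42 means
const STATUS_NEEDS_REVIEW = 42; // hypothetical status code for this API

if (status === STATUS_NEEDS_REVIEW) {
  // handle the review case
}
```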

Excessive abstraction in examples

```typescript
const processor = new GenericDataProcessor<T>(config);
```

When a concrete example would teach more effectively
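
A concrete counterpart makes the same machinery tangible (the class and data are hypothetical):

```javascript
// Concrete instance: visible input, visible output
const processor = new CsvDataProcessor({ delimiter: ',' });
const rows = processor.parse('id,name\n1,Ada');
// rows: [{ id: '1', name: 'Ada' }]
```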


Progressive complexity: The code example spectrum

The most useful insight from my conversation with Claude was recognizing that code examples exist on a spectrum, not as a binary choice between “runnable” and “not runnable.”

Level 1: Concept snippet (Not runnable)

```javascript
cache.set('key', value, { ttl: 3600 });
```
  • Purpose: Show syntax and method signature
  • Context: API reference, inline examples in prose

Level 2: Illustrative example (Runnable with setup)

```javascript
const cache = new Cache();
cache.set('key', value, { ttl: 3600 });
```
  • Purpose: Demonstrate basic usage pattern
  • Context: Getting started guides, concept explanations

Level 3: Self-contained example (Fully runnable)

```javascript
import { Cache } from 'cache-lib';

const cache = new Cache({
  host: process.env.CACHE_HOST || 'localhost',
  port: 6379
});

const value = { plan: 'pro' }; // defined here so the example actually runs

try {
  await cache.connect();
  cache.set('key', value, { ttl: 3600 });
} catch (error) {
  console.error('Cache unavailable:', error);
  process.exit(1);
} finally {
  await cache.disconnect();
}
```
  • Purpose: Provide complete working implementation
  • Context: Integration guides, troubleshooting, reference implementations

Each of the three levels achieves a different learning objective. Problems arise when documentation treats them as interchangeable or defaults to one level for all situations.

Choosing the right level for your context

Research on how developers learn APIs shows they need different types of examples for different learning tasks (Robillard & DeLine, 2011). For example, quick reference lookups require different examples than initial learning, getting started, or troubleshooting tasks.

Use Level 1 (concept snippets) when:

  • Teaching what a method does
  • Showing syntax in API reference
  • Providing inline examples in explanatory prose
  • Comparing before/after or option A vs. option B

Any surrounding setup code would obscure the teaching point.

Use Level 2 (illustrative examples) when:

  • Demonstrating basic usage patterns
  • Explaining concepts without infrastructure noise
  • Showing how components interact
  • Building understanding before complexity

The minimal context helps learners focus on the principle being taught. Be sure to include just enough context to make the point of the example clear, but no more.

Use Level 3 (self-contained examples) when:

  • Providing getting started guides (first experience must work)
  • Offering troubleshooting examples (users need to verify their setup)
  • Documenting reference implementations (copy-this solutions)
  • Showing integration patterns (how pieces fit together)

Complete context is necessary for learners to succeed independently.

The real-world challenge

API documentation is written under a persistent tension: developers want realistic examples that show actual usage patterns, but they also need examples that are focused enough to teach specific concepts (Watson et al., 2013). Real-world code includes security, validation, error handling, and application-specific concerns, while teaching examples must isolate concepts.

Research on API learning obstacles confirms this challenge: developers struggle most when examples either oversimplify (making transfer to real projects difficult) or include too much complexity (obscuring the concept being taught) (Robillard & DeLine, 2011).

Patterns that enhance learning

Claude provided these examples as patterns that help it learn. They also align with the established instructional design principles that help human learners:

Before/After comparisons

```javascript
// Before (problematic)
let data = getData()
console.log(data.user.name)  // Error if user is null

// After (safe)
const data = getData()
console.log(data?.user?.name ?? 'Guest')
```

Comparison examples help learners see not just what to do, but how and why the new approach improves on the old one.

If you’re new to JavaScript, the After example uses the optional chaining operator (?.). Appending ?. to a property access tests for null or undefined and short-circuits to undefined instead of throwing an exception, as would happen in the Before example. The nullish coalescing operator (??) then supplies 'Guest' as the fallback value.

Progressive complexity

```javascript
// Basic usage
const client = new APIClient();

// With authentication
const client = new APIClient({
  apiKey: process.env.API_KEY
});

// With full configuration
const client = new APIClient({
  apiKey: process.env.API_KEY,
  timeout: 5000,
  retries: 3,
  onError: handleError
});
```

Building from simple to complex follows cognitive load principles: master the basics before adding complexity (Sweller, 1988).

Common pitfalls highlighted

```python
# ❌ Don't do this - modifies shared state
DEFAULT_CONFIG['timeout'] = 100

# ✓ Do this instead - creates new instance 
config = {**DEFAULT_CONFIG, 'timeout': 100}
```

Showing what not to do helps learners avoid common mistakes. Research shows people learn effectively from errors when the correct approach is shown alongside (Piaget, 1970).

Links to complete working examples

  • “See complete implementation in GitHub repo”
  • Provides full context while keeping documentation focused

This approach provides complete, running code in a separate context without complicating the examples shown in the documentation.


Real-world elements that hinder learning

Claude then presented real-world elements that make it harder to see how something works, or harder to notice which line of an example is the important one.

Generic validation noise

```javascript
// Repetitive validation checks...
if (!param1) throw new Error('param1 required');
if (!param2) throw new Error('param2 required');
if (!param3) throw new Error('param3 required');
// ... before getting to actual logic
```

Note that the validation exists, show it once, and then omit it in subsequent examples.

Framework-specific boilerplate

```javascript
// Don't need all this in every example
import React, { useState, useEffect } from 'react';
import { Container, Grid, Paper } from '@mui/material';
import { useTheme } from '@mui/styles';
export default function MyComponent() {
  // Actual example starts here
}
```

Exhaustive error handling for every call

  • Show it for risky operations, acknowledge that it’s needed elsewhere, and omit it for clarity.

Cognitive load research supports this approach: including exhaustive error handling in every example creates extraneous cognitive load that interferes with learning the main concept (Pollock et al., 2002).
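
A sketch of that compromise, with a hypothetical client: show the error path where failure is the point of the example, and flag the omission elsewhere.

```javascript
// Risky operation: the error path is the point, so show it in full
try {
  await client.sendEvent(payload);
} catch (error) {
  console.error('Event delivery failed:', error);
}

// Error handling omitted below for clarity; wrap in try/catch in production
const events = await client.listEvents();
```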

Why these patterns work

Educational researchers have studied how people learn from examples for decades. The principles they’ve established apply directly to code examples in API documentation.

Cognitive load theory explains why some examples help learning while others hinder it. When examples contain too much information at once or mix multiple concepts, they overload working memory and make learning harder (Sweller, 1988; Pollock et al., 2002). This explains why developers struggle with complex examples that try to teach everything at once.

The patterns Claude identified follow these principles that educational researchers have validated:

  • Clear over clever – Straightforward demonstrations teach more effectively than elegant abstractions
  • Progressive complexity – Start simple, add layers of sophistication incrementally (Sweller, 1988)
  • Concrete before abstract – Show specific instances before generalizations (Piaget, 1970)
  • Authentic context – Real-world scenarios engage learners more than toy examples
  • Explicit instruction – Don’t make learners infer what’s important (Skinner, 1954)

Cognitive load theory calls mixed concerns “extraneous cognitive load”—information that doesn’t contribute to learning the target concept but demands processing resources anyway (Pollock et al., 2002). Developers trying to learn authentication don’t need to also process styling decisions in the same example.

Educational research on scaffolding—providing support structures that help learners progress from simple to complex understanding—explains why the three-level spectrum works. Presenting all complexity at once overwhelms working memory, while building understanding incrementally allows learners to master each level before adding more (Sweller, 1988).

Research on how people process documentation shows that readers make quick relevance decisions when searching for information (Rouet, 2006). Software developers especially need to evaluate whether a code example contains what they’re looking for without reading the entire thing (Robillard & DeLine, 2011). Clear structure, explicit comments, and progressive complexity help both human developers and AI tools make these assessments efficiently.

Bringing it together: Code examples for all readers

Parts 1 and 2 of this series showed that AI tools benefit from the same structural and writing principles that serve human readers. Code examples follow the same pattern: the pedagogical principles that help humans learn from code examples also help AI tools process and extract meaning from them.

Clear syntax highlighting, progressive complexity, explicit comments, authentic error handling—these aren’t AI-specific requirements. They’re teaching fundamentals from decades of educational research (Sweller, 1988; Pollock et al., 2002; Rouet, 2006), confirmed again by what makes code processable for AI tools.

These patterns also serve developers who use assistive technology. Screen readers need semantic markup to identify code blocks. Developers with cognitive disabilities benefit from progressive complexity that limits working memory demands. Text-based code instead of images serves both AI tools and developers using screen readers.

The fundamentals persist. Whether you’re teaching a junior developer, supporting a screen reader user, or enabling an AI tool to process your examples, the same principles apply: clarity, context, progressive complexity, and authentic learning scenarios.

Further Reading

If you’re interested in learning more about the research foundation for code examples:

Cognitive Load Theory and Learning

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. — Foundational work on how working memory limitations affect learning.

Pollock, E., Chandler, P., & Sweller, J. (2002). Assimilating complex information. Learning and Instruction, 12(1), 61–86. — Application of cognitive load principles to complex learning tasks.

Document Processing and Reading

Rouet, J.-F. (2006). The Skills of Document Use: From Text Comprehension to Web-Based Learning. Lawrence Erlbaum Associates. — How readers process and evaluate documents when searching for information.

Learning Theory

Piaget, J. (1970). Science of education and the psychology of the child. New York: Orion Press. — Foundational work on how learners progress from concrete to abstract understanding.

Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24(2), 86–97. — Principles of explicit instruction and feedback.

API Documentation Research

Robillard, M. P. (2009). What makes APIs hard to learn? Answers from developers. IEEE Software, 26(6), 27–34. — Research on obstacles developers face when learning APIs.

Robillard, M. P., & DeLine, R. (2011). A field study of API learning obstacles. Empirical Software Engineering, 16(6), 703–732. — Field research on how developers learn and use APIs.

Watson, R. B., Stamnes, M., Jeannot-Schroeder, J., & Spyridakis, J. H. (2013). API documentation and software community values: A survey of open-source API documentation. In Proceedings of the 31st ACM International Conference on Design of Communication (pp. 165–174). — Analysis of documentation practices in open-source API projects.

Brandt, J., Guo, P. J., Lewenstein, J., Dontcheva, M., & Klemmer, S. R. (2009). Two studies of opportunistic programming: Interleaving web foraging, learning, and writing code. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1589–1598). ACM.

Brandt, J., Guo, P. J., Lewenstein, J., & Klemmer, S. R. (2008). Opportunistic programming: How rapid ideation and prototyping occur in practice. In Proceedings of the 4th International Workshop on End-User Software Engineering (pp. 1–5). ACM.
