Reader goals – Reading to learn

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series.

Reading to Learn (to Use Later or to Apply with Other Information)

Reading to Learn to Use Later, without a particular or immediate task in mind, is similar to what Sticht described as Reading to Learn [1]. The critical distinctions between this goal and Reading to Learn to Do Now and Reading to Learn to Do a Task Later are:

  • The reading task in your content is a subtask of the reader’s ultimate goal of learning a new concept or skill.
  • The reader does not have a specific goal beyond an increase in knowledge.

An example of this type of reader goal, where the goal is accomplished after using the website or in conjunction with other information, would be reading websites and books about design principles to use the information later when designing a website. The connection between the reading and the ultimate application of the knowledge is too distant to have a meaningful and identifiable cause-effect relationship.

In a Reading to Learn to Use Later goal, as shown in the figure, the reader reads information from many different locations, of which your content might be only one information source. While similar to the Reading to Be Reminded interaction type, in this interaction the reader is seeking more information from the content: the content is new, and the reader might consult multiple sources to accumulate the information required to reach the learning goal.

Figure: Reading to learn

It is difficult to measure how well readers are accomplishing their ultimate learning goal when their interaction with the website may be one step of many and they might not use the information until much later. However, it is reasonable to collect information about the immediate experience. For example, the content could encourage readers to interact with the page as a way to provide feedback to the reader and to collect information about the readers’ experience. The content could also include quizzes, as well as links or affordances such as prompts to share the content with a social network.
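
As an illustration, an end-of-topic knowledge check could give the reader feedback on their learning while recording a signal about the content. A minimal sketch in TypeScript, where the endpoint and record shape are assumptions rather than any particular product:

```typescript
// A minimal sketch of recording an end-of-topic knowledge check; the
// /api/feedback endpoint and QuizResult shape are hypothetical.
interface QuizResult {
  topicId: string;    // the topic the reader was studying
  correct: number;    // questions answered correctly
  total: number;      // questions presented
  finishedAt: number; // when the reader completed the check (ms since epoch)
}

async function recordQuizResult(result: QuizResult): Promise<void> {
  const payload = JSON.stringify(result);
  // sendBeacon queues the request so it survives the reader leaving the page.
  if (!navigator.sendBeacon("/api/feedback", payload)) {
    // Fall back to a normal request if the beacon couldn't be queued.
    await fetch("/api/feedback", { method: "POST", body: payload });
  }
}
```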


[1] Sticht, T.G., Fox, L.C., Hauke, R.N., Zapf, D.W.: The Role of Reading in the Navy. DTIC Document (1977)

Reader goals – Reading to do a task later

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series.

Reading to Do a Task Outside the Website Later

When reading to do a task outside the website, readers accomplish their goals outside of the website and after they’ve read the content that provides the prerequisite learning for the task. Some examples of such interactions include:

  • The United States Internal Revenue Service’s instructions for filling out Form 1040 [1].
  • The Aircraft Owners and Pilots Association (AOPA)’s information about performing crosswind landings [2].
  • An article about how to do well in a job interview [3].

The figure shows the relationship between the readers’ interaction with the content and the task they hope to accomplish.

Figure: Reading to do a task outside the website later

When authoring content for this type of interaction, the web usability goals include search-engine optimization. Aligning your terms and vocabulary with your audience’s is an important and easy way to make the content easier for readers to discover.

Of course, providing meaningful, interesting, and helpful content is critical. In this type of interaction, understanding the nature and relationship of the content and the task is a key element of getting meaningful feedback on how well your content is doing in those categories. Because this type of interaction consists of two temporally separate events (reading/learning and doing), it might be more effective to assess them separately. For example, you could include affordances in the content that test the intermediate goal of learning the content before the task is attempted, and you could use methods to collect and coordinate information about task completion.

Consider the case of a driver’s license test-preparation site. The site could include a quiz for the reader (and the site manager and stakeholders) to determine the readers’ learning and the content’s effectiveness in the short term, perhaps also providing feedback to the reader on areas that require additional study. The task, passing the written driver’s license test in this example, would occur later and be measured at the Department of Licensing. If the two experiences could be related, the effectiveness of the test-preparation site could be evaluated against the actual task of passing the license exam.
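
A sketch of how such a quiz might map missed questions to areas that need additional study; the question structure and area names here are hypothetical:

```typescript
// A sketch of scoring a practice quiz and suggesting areas for further
// study; the Question shape and area names are hypothetical.
interface Question {
  id: string;
  area: string;          // e.g., "right of way", "signage", "parking"
  correctAnswer: string;
}

function areasToStudy(
  questions: Question[],
  answers: Map<string, string> // questionId -> the reader's answer
): string[] {
  const weakAreas = new Set<string>();
  for (const question of questions) {
    if (answers.get(question.id) !== question.correctAnswer) {
      weakAreas.add(question.area);
    }
  }
  return [...weakAreas]; // areas where the reader missed at least one question
}
```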

In this example, readers could also be asked about their satisfaction during and after their interaction with the content, as long as the questions did not interfere with the learning task.


[1] http://www.irs.gov/pub/irs-pdf/i1040gi.pdf

[2] http://flighttraining.aopa.org/students/solo/skills/crosswind.html

[3] http://mashable.com/2013/01/19/guide-to-job-interview/

Reader goals – Reading to do now

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series.

Reading to Do a Task Outside the Website Now

Readers who interact with a website to do a task outside the website seek to complete their task while reading the website for the information they need. Examples of such websites include sites that describe how to repair a household appliance or how to cook a meal. The figure shows the interaction between the readers’ goal and their interaction with the content.

Figure: Reading to do a task outside of the website now

In this type of interaction, readers interact with the content after they decide to perform the task and end their interaction with the content when they feel confident enough to complete the task without additional information. At that point, they continue towards their goal without the content. Depending on the complexity and duration of the task, readers might return to the content several times during the task, but the key aspect of this interaction with the content is that it does not coincide with task completion.

This interaction can influence several aspects of the design. For example, readers might like to print the web content to save or refer to later, so it might be inconvenient to have the content spread over several web pages. On the other hand, because readers might stop interacting with the content at any point, the content could be divided into individual pages of logical steps with natural breaks.

Because readers stop interacting with the content before they complete their task, asking for information about their task when they leave the content might be confusing because they haven’t finished it yet. On the other hand, asking about the content might be reasonable.

Tracking progress, success, and satisfaction for this type of interaction requires coordination with the content design. The task and subtask flows must be modeled in the content’s design so that the instrumentation used to collect data about the interaction coordinates with the readers’ interaction. Because readers can leave before they read all of the content and still complete their task successfully, traditional web-based metrics such as average time-on-page and path are ambiguous with respect to the readers’ experiences. It is impossible, for example, to know whether readers exiting a procedure on the first step is good or bad without knowing whether they were also dissatisfied or unsuccessful with the content. Ideally, collecting information about their experience will come shortly after they accomplish their goal, for example, by posting a review of their recipe on social media after they finish.
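
For example, if the content is divided into logical steps, each step could emit an event as the reader moves through the procedure. A sketch, where the event shape and endpoint are assumptions, not a particular analytics product:

```typescript
// A sketch of step-level instrumentation for a procedure topic; the record
// shape and the /api/events endpoint are hypothetical.
interface StepEvent {
  procedureId: string;                     // the procedure being followed
  step: number;                            // the step the event refers to
  kind: "viewed" | "completed" | "exited"; // what happened at that step
  at: number;                              // timestamp (ms since epoch)
}

function recordStepEvent(event: StepEvent): void {
  // sendBeacon keeps the event from being lost when readers leave
  // mid-procedure, which is expected behavior in this interaction type.
  navigator.sendBeacon("/api/events", JSON.stringify(event));
}

// Example: the reader finished step 3 of a recipe and then left the page.
recordStepEvent({
  procedureId: "bread-recipe",
  step: 3,
  kind: "completed",
  at: Date.now(),
});
```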

Reader goals – Reading to be reminded

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series.

Reading to Be Reminded

Reading to be reminded, or Reading to Do Lite, occurs when readers visit an informational site confident that they already know most of what they need to know about a topic to complete their task; they just need a refresher. Readers use the website as a form of external information storage that they may use either online or elsewhere. By knowing the information is available online, readers are confident that they don’t need to remember the details, just where they can find them. Brandt et al. [1] noticed this pattern while observing software developers who “delegated their memory to the Web, spending tens of seconds to remind themselves of syntactic details of a concept they new [sic] well.”

The figure shows how interactions of this type might relate to a reader’s task.

Figure: Reading to be reminded

Because, as Redish [2] says, readers will read “until they’ve met their need,” they will spend as little time interacting with the content as they need. Once they have been reminded of the information they need, they will return to their original task.

Topic design principles that serve this interaction include making the content easy to find, navigate, and read. Visible headings and short “bites” and “snacks” of information [2] are well suited to such a goal. However, my research in developer documentation suggests that these guidelines depend on the specific context, a reminder to know your audience. Knowing your audience is also key to using the terms they will recognize.

Website-based metrics are not particularly helpful in determining the quality of the readers’ interactions. A good time-on-page value, for example, might be short (to the point of appearing to be a bounce) or it might be long. The number of times a page is viewed also has an ambiguous meaning when it comes to understanding the quality of the readers’ interactions.

At the same time, the readers’ engagement and focus on their primary task (the one that sent them to this content) means that asking for qualitative information about their experience is likely to be seen as a distraction. Asking about the reader’s experience should be done soon after the interaction and with as brief a satisfaction questionnaire as possible, perhaps only one question, such as “Did this topic help you?”
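
A minimal sketch of such a one-question prompt, assuming a hypothetical /api/helpful endpoint and leaving out styling:

```typescript
// A minimal sketch of a one-question satisfaction prompt; the endpoint is
// hypothetical.
function askDidThisHelp(topicId: string): void {
  const prompt = document.createElement("p");
  prompt.textContent = "Did this topic help you? ";
  for (const answer of ["Yes", "No"]) {
    const button = document.createElement("button");
    button.textContent = answer;
    button.onclick = () => {
      // One tap records the answer and gets out of the reader's way.
      navigator.sendBeacon("/api/helpful", JSON.stringify({ topicId, answer }));
      prompt.remove();
    };
    prompt.appendChild(button);
  }
  document.body.appendChild(prompt);
}
```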


[1] Brandt, J., Guo, P.J., Lewenstein, J., Dontcheva, M., Klemmer, S.R.: Two Studies of Opportunistic Programming: Interleaving Web Foraging, Learning, and Writing Code. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1589–1598. ACM (2009)

[2] Redish, J.: Letting Go of the Words: Writing Web Content that Works (2nd ed.). Morgan Kaufmann, Elsevier, Waltham, MA (2012)

Reader goals – Reading to do here

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series.

Reading to Do Here

Reading to accomplish a task in the website, or Reading to Do Here, is characterized by readers interacting with a page in a website to accomplish a specific task through the site itself. Some familiar examples include registering for a library account, subscribing to an online newsletter, or renewing a business license.

Readers interact with the content or the site shortly after they decide to accomplish the task, and they leave shortly after they finish. The figure illustrates the interaction in such a task.

Figure: Reading to Do Here

Readers who use content in this way will want to find the page that helps them accomplish the task as quickly as possible and then complete the task as efficiently as possible. While they will want to know that they have successfully completed the task before they leave the website, they generally won’t remember much about the experience afterwards unless it was especially negative or positive.

The figure shows a very common type of web interaction, and many texts on web usability describe the design implications, which depend on the site, context, and audience. Because the readers’ task is performed almost entirely in the context of the web interaction, measuring the success of the interaction is easily accomplished through the site without imposing on the reader. The web server can collect data concerning the time spent in the interaction; the rate of successful operations (e.g., registrations, applications, or whatever the interaction is designed to accomplish); and the path through the interaction (e.g., back tracks, sidetracks, and early exits). Requests for qualitative feedback should occur soon after the interaction so readers remember the interaction. While this interaction model is intended for informational sites, it also matches the interaction model of commercial sites, such as shopping or other e-commerce sites. As such, many of the analytics tools and instruments that work in those contexts will also work in this interaction model.
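
For example, a server-side summary of those measures might look like the following sketch, where the log record shape is an assumption:

```typescript
// A sketch of summarizing interaction metrics from server-side logs;
// the InteractionLog shape is hypothetical.
interface InteractionLog {
  sessionId: string;
  startedAt: number;   // when the reader began the interaction (ms)
  finishedAt?: number; // set only when the task completed successfully
  pages: string[];     // the path taken through the interaction
}

function summarize(logs: InteractionLog[]) {
  const completed = logs.filter((log) => log.finishedAt !== undefined);
  const successRate = completed.length / Math.max(logs.length, 1);
  const averageDurationMs =
    completed.reduce((sum, log) => sum + (log.finishedAt! - log.startedAt), 0) /
    Math.max(completed.length, 1);
  // Sessions that ended without completing count as early exits.
  const earlyExits = logs.length - completed.length;
  return { successRate, averageDurationMs, earlyExits };
}
```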

What to measure?

That measuring API documentation is difficult is one of the things I’ve learned from writing developer docs for more than 11 years. Running the study for my dissertation gave me detailed insight into some of the reasons for this.

The first challenge to overcome is answering the question, “What do you want to measure?” That question is followed immediately by, “…and under what conditions?” Valid and essential, but not simple, questions. Stepping back from them, a higher-level question comes into view: “What’s the goal?” …of the topic? …of the content set? And then, back to the original question, of the measurement?

For my dissertation, I spent considerable effort scoping the experiment down to something manageable, measurable, and meaningful, ending up at the relevance decision. Clearly there is more to the API documentation experience than deciding whether a topic is relevant, but the relevance decision seemed to be the most easily identifiable, discrete event in the overall API reference topic experience. It’s a pivotal point in that experience, but by no means the only one.

The processing model I used was based on the TRACE model presented by Rouet (2006). Similar cognitive-processing models were also identified in other API documentation and software development research papers. My experiment focuses on step 6 of this model.

Figure: The Task-based Relevance Assessment and Content Extraction (TRACE) model of document processing. From Rouet, J.-F. (2006). The Skills of Document Use: From Text Comprehension to Web-Based Learning (1st ed.). Lawrence Erlbaum Associates.

Even in this context, my experiment studies a very small part of the overall cognitive processing of a document and an even smaller part of the overall task of information gathering to solve a larger problem or to answer a specific question.

To wrap this up by returning to the original question, that is…what was the question?

  1. The goal of the topic is to provide information that is easily accessible to the reader.
  2. The easily-accessible goal is measured by the time it takes the reader to identify whether or not the topic provides the information they seek.
  3. The experiment simulates the reader’s task by providing the test participants with programming scenarios in which to evaluate the topics.
  4. The topics being studied are varied randomly to reduce order effects and bias, and participants see only one version of the topics so that their experience isn’t biased by seeing other variations (see the sketch after this list).
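
A sketch of that design in TypeScript; the variation labels and helper names are hypothetical, not the study’s actual instrument:

```typescript
// A sketch of the between-subjects design described above; all names are
// hypothetical and this is not the instrument used in the actual study.
type Variation =
  | "low-design/low-info"  | "low-design/high-info"
  | "high-design/low-info" | "high-design/high-info";

const VARIATIONS: Variation[] = [
  "low-design/low-info", "low-design/high-info",
  "high-design/low-info", "high-design/high-info",
];

// Each participant is assigned one variation for the whole session (item 4).
function assignVariation(): Variation {
  return VARIATIONS[Math.floor(Math.random() * VARIATIONS.length)];
}

// Fisher-Yates shuffle to vary scenario order and reduce order effects.
function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

// The dependent measure (item 2): time from showing the topic to the
// participant's relevance decision.
async function timeDecision(
  showTopic: () => void,
  waitForAnswer: () => Promise<boolean>
): Promise<{ relevant: boolean; decisionTimeMs: number }> {
  const start = performance.now();
  showTopic();
  const relevant = await waitForAnswer();
  return { relevant, decisionTimeMs: performance.now() - start };
}
```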

In this experiment, other elements of the TRACE model are managed by or excluded from the task.

API reference topic study – thoughts

Last month, I published a summary of my dissertation study and I wanted to share some of the thoughts that the study results provoked. My first thought was that my experiment was broken. I had four distinctly different versions of each topic, yet saw no significant difference between them in the time participants took to determine the relevance of the topic to the task scenario. Based on all the literature about how people read on the web and the importance of headings and in-page navigation cues in web documents, I expected to see at least some difference. But, no.

The other finding that surprised me was the average length of time that participants spent evaluating the topics. Whether the topic was relevant or not, participants reviewed a topic for an average of about 44 seconds before they decided its relevance. This was interesting for several reasons.

  1. In web time, 44 seconds is an eternity: long enough to read the topic completely, if not several times. Farhad Manjoo wrote a great article about how people read Slate articles online, which agrees with the widely held notion that people don’t read online. However, API reference topics appear to be different from Slate articles and other web content, which is probably a good thing for both audiences.
  2. The average time spent reading a reference topic to determine its relevance in my study was the same whether the topic was relevant to the scenario or not. I would have expected them to differ, with the non-relevant topics taking longer than the relevant ones on the assumption that readers would spend more time looking for an answer. But no. Participants took about 44 seconds to decide whether the topic would apply or not in both cases.

While these findings are interesting and bear further investigation, they point out the importance of readers’ contexts and tasks when considering page content and design. In this case, changing one aspect of a document’s design can improve one metric (e.g., reducing information detail improved decision speed) at the cost of degrading others (credibility and appearance).

The challenges then become:

  1. Finding ways to understand the audience and their tasks better to know what’s important to them
  2. Finding ways to measure the success of the content in helping readers accomplish those tasks

I’m taking a stab at those in the paper I’ll be presenting at the HCII 2015 conference next month.

Checklist for technical writing

Figure: Devin Hunt’s design hierarchy

Devin Hunt posted this figure from “Universal Principles of Design,” which is an adaptation of Maslow’s Hierarchy of Needs for design. It seemed like the levels could also apply to technical writing. Working up from the bottom…

Functionality

As with a product, technical content must work. The challenge is knowing what that actually means and how to measure it. Unfortunately, for a lot of content, this is fuzzy. I’m presenting a paper next month that should help provide a framework for defining what works means, but, as with Maslow’s triangle, you must define it before you can hope to accomplish the rest.

For technical content, like any product, you must know your audience’s needs to know what works means. At the very least, the content should support the user’s usage scenarios, such as getting started or onboarding, learning common use cases, and having reference information to support infrequent, but important, usage or application questions. What this looks like is specific to the documentation and product.

Reliability

Once you know what works means, you can tell whether the content works and whether it does so consistently. Again, this requires knowledge of the audience, not unlike product design.

This is tough to differentiate from functionality, except that it has the dimension of providing the functionality over time. Measuring this is a matter of tracking the functionality metrics over time.

Usability

Once you know what content that works looks like, you can make sure it works consistently and in a way that is as effortless as possible.

Separating usability from functionality is tough in the content case. If content is not usable, does it provide functionality? If you look closely, you can separate them. For example, a content set can have all the elements that a user requires, but those elements can be difficult to find or navigate. Likewise, the content might all exist but be accessible in a way that is inconvenient or disruptive to the user. As with product development, understanding the audience is essential, as is user testing to evaluate this.

Proficiency

Can readers become expert at using the documentation? One could ask if they should become experts, but in the case of a complex product that has a diverse set of features and capabilities, it’s not too hard to imagine having a correspondingly large set of documentation to help users develop expertise.

What does this look like in documentation? At the very least, the terms used in the documentation should correspond to the audience’s vocabulary to facilitate searching for new topics.

Creativity

Not every product supports creativity, nor does every documentation set. However, those that do make the user feel empowered and are delightful to use. A noble, albeit difficult, goal to achieve, but something worthy of consideration.

This might take the form of community engagement in forums, or ongoing updates and tips to increase the value of the documentation and the product to the audience.

API reference topic study – summary results

During November and December, 2014, I ran a study to test how varying the design and content of an API reference topic influenced participants’ time to decide if the topic was relevant to a scenario.

Summary

  • I collected data from 698 individual task scenarios from 201 participants.
  • The shorter API reference topics were assessed 20% more quickly than the longer ones, but were less credible and were judged to have a less professional appearance than the longer ones.
  • The API reference topics with more design elements were not assessed any more quickly than those with only a few design elements, but the topics with more design elements were more credible and judged to have a more professional appearance.
  • Testing API documentation isn’t that difficult (now that I know how to do it, anyway).

The most unexpected result, based on the literature, was that the variations in visual design did not significantly influence the decision time. Another surprise was how long the average decision time was: almost 44 seconds, overall. That’s more than long enough to read the entire topic. Did they scan or read? Unfortunately, I couldn’t tell from my study.

Details

The experiment measured how quickly participants assessed the relevance of an API reference topic to a task-based programming scenario. Each participant was presented with four task scenarios: for each topic there were two scenarios, one to which the topic was relevant and one to which it was not, and each participant saw two of each kind. There were four variations of each API reference topic; however, each participant saw only one variation, so they had no way to compare one variation to another.

The four variations of API reference topics resulted from two levels of visual design and two levels of the amount of information presented in the topic.

Findings: Information variations

  • High information: slower decision time, higher credibility, more professional appearance.
  • Low information: faster decision time, lower credibility, less professional appearance.

Findings: Design variations

  • Low visual design: no significant difference in decision time, lower credibility, less professional appearance.
  • High visual design: no significant difference in decision time, higher credibility, more professional appearance.


Is it really just that simple?

Figure: A tiny house. Is less more or less, or does it depend?

After being submerged in the depths of my PhD research project since I can’t remember when, I’m finally able to ponder its nuance and complexity. I find that I’m enjoying the interesting texture that I found in something as mundane as API reference documentation, now that I have a chance to explore and appreciate it (because my dissertation has been turned in!!!!). It’s in that frame of mind that I consider the antithesis of that nuance, the “sloganeering” I’ve seen so often in technical writing.

Is technical writing really so easy and simple that it can be reduced to a slogan or a list of 5 (or even 7) steps? I can appreciate the need to condense a topic into something that fits in a tweet, a blog post, or a 50-minute conference talk. But, is that it?

Let’s start with Content minimalism or, in slogan form, Less is more! While my research project showed that less can be read faster (fortunately, or I’d have a lot more explaining to do), it also showed that less is, well, in a word, less, not more. It turns out that even the father of Content Minimalism, John Carroll, agrees. He and Hans van der Meij write in their 1996 article, “Ten Misconceptions about Minimalism,”

In essence, we will argue that a general view of minimalism cannot be reduced to any of these simplifications, that the effectiveness of the minimalist approach hinges on taking a more comprehensive, articulated, and artful approach to the design of information.

In the context of a well-considered task and audience analysis, it’s easy for the writer to know what’s important and focus on it; less can be more useful and easier to grok. They say later in that same article,

Minimalist design in documentation, as in architecture or music, requires identifying the core structures and content.

In the absence of audience and task information, less can simply result in less when the content lacks those core structures and misses the readers’ needs. More can also be less when writers try to compensate for missing audience and task information by covering everything they can think of (so-called peanut-butter documentation that covers everything to some unsatisfying, uniform depth).

For less to be more, it has to be well informed. It’s the last part that makes it a little complicated.


Carroll, J., van der Meij, H. (1996). Ten Misconceptions about Minimalism. IEEE Transactions on Professional Communication, 39(2), 72–86.