Studies show…

In the quest to do more with less, one method I’ve seen used to get the job done more quickly is to rely on best practices and studies. Referring to best practices or studies isn’t bad, but they should be the starting point for decisions, not the final word. Why?

Your context is unique

This was made obvious in my dissertation study, in which the effect of applying best practices depended on what was being measured (i.e., what mattered). In the API reference topics I studied, using headings and design elements as suggested by the prevailing best practices made no difference in reading performance, but it made a significant difference in how readers perceived the topics.

Those results applied to the context of my experiment and they might apply to other, similar contexts, but you’d have to test them to know for sure. Does it matter? You tell me. That, too, depends on the context.

A study showed…

A report on the effect that design variations had on a news site home page came out recently, showing that a modern interface had better engagement than a more traditional, image+text interface. However, readers of the latter interface had better comprehension of the articles presented.

Since it relates design, comprehension, and engagement, I thought it was quite interesting. I skimmed the actual study, which seemed reasonable. I’m preparing myself, however, for what the provocative headline of the blog article is likely to produce: the inevitable “studies show…” argument. It has all the components of great “studies show” ammo: it refers to modern design (timely), has mixed results (so you can quote the result that suits the argument), and has a catchy headline (so you don’t need to read the article or the report).

Remember, your context is unique

Starting from other studies and “best practices” is great. But because your context is invariably unique, you’ll still need to test and validate.

When it comes to best practices and applying other studies, trust but verify.

Reader goals – Overview

The reader goals in this series describe the different ways readers interact with informational content, to help content developers and site managers provide the content that best accommodates their audiences’ goals. The underlying assumptions behind these interactions are twofold:

  • Readers come to informational content with some goal in mind.
  • The readers’ goals might not coincide with their content experience.

Understanding readers’ goals is critical to providing the content and experience that helps readers achieve their goals. Understanding the relationship between reader goals and content also informs the feedback that the reader can provide and the best time to collect that feedback.

Informational content

The interactions described in this series are based on (and primarily intended for) informational content that has no specific commercial goal. They might also apply to sites with commercial goals, but that is not the context in which they were developed.

Audience analysis

The reader-content relationships described in this series are best discovered through audience research. Having these models in mind can help inform and direct that research, and they can also help you identify and characterize the patterns you observe.

At the same time, as a writer, I realize that it isn’t always possible to conduct audience research before the content is required. In those cases, these models provide a reasonable starting point from which to later collect data that can refine your model and content. By taking what you know about your audience, you can select an interaction model that fits and use it as the basis for your initial draft of the content and its metrics.

Feedback and metrics

A key part of these interactions is to help identify what type of feedback can be collected and the best time to collect it. From the readers’ perspective, the content is the means to accomplish a goal; therefore, goal-related feedback is the most representative measure of content effectiveness.

When readers’ goals coincide with completing the content, as in the Reading to do here case, collecting goal-related feedback at the end of the content makes perfect sense. However, we found that much of the content on informational sites has a different relationship with readers’ goals. Recognizing this and adapting the feedback instruments to match can improve the quality of the feedback on the content.

Finally, the interaction descriptions in this series are somewhat vague when it comes to specific instrumentation and metrics. This is for two reasons. First, the best instrument to use is very context specific. Second, because of this, we haven’t studied this aspect enough to make general recommendations. However, we’re working on it.


This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reader goals – Reading to learn

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reading to Learn (to use Later or to Apply with Other Information)

Reading to Learn to Use Later, without a particular or immediate task in mind, is similar to what Sticht described as Reading to Learn [1]. The critical distinctions between this goal and Reading to Learn to Do Now and Reading to Learn to Do a Task Later are:

  • The reading task in your content is a subtask of the reader’s ultimate goal of learning a new concept or skill.
  • The reader does not have a specific goal beyond an increase in knowledge.

An example of this type of reader goal, where the goal is accomplished after using the website or in conjunction with other information, would be reading websites and books about design principles in order to apply the information later when designing a website. The connection between the reading and the ultimate application of the knowledge is too distant to have a meaningful and identifiable cause-effect relationship.

In a Reading to Learn to Use Later goal, as shown in the figure, the reader reads information from many different locations, of which your content might be only one source. While similar to the Reading to Be Reminded interaction, in this type the reader seeks more information from the content: the content is new, and the reader might consult multiple sources to accumulate the information required to reach the learning goal.

[Figure: Reading to learn]

It is difficult to measure how well readers are accomplishing their ultimate learning goal when their interaction with the website may be one step of many and they might not use the information until much later. However, it is reasonable to collect information about the immediate experience. For example, the content could encourage readers to interact with the page in ways that both provide feedback to the reader and collect information about the reader’s experience, such as quizzes, links, or affordances such as prompts to share the content with a social network.
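For instance, a short inline quiz can serve both purposes at once: it gives the reader immediate feedback and sends the content owner a signal about comprehension. Here is a minimal, hypothetical sketch in TypeScript; the question format, the `/feedback` endpoint, and the event shape are all assumptions to adapt to your own site and analytics pipeline.

```typescript
// A minimal sketch: an inline quiz that doubles as a feedback instrument.
// The question fields and the "/feedback" endpoint are hypothetical.

interface QuizQuestion {
  id: string;
  prompt: string;
  options: string[];
  correctIndex: number;
}

function renderQuiz(container: HTMLElement, question: QuizQuestion): void {
  const prompt = document.createElement("p");
  prompt.textContent = question.prompt;
  container.appendChild(prompt);

  question.options.forEach((option, index) => {
    const button = document.createElement("button");
    button.textContent = option;
    button.addEventListener("click", () => {
      const correct = index === question.correctIndex;
      // Immediate feedback for the reader...
      button.after(document.createTextNode(correct ? " Correct" : " Try again"));
      // ...and a lightweight signal for the content owner.
      navigator.sendBeacon("/feedback", JSON.stringify({
        questionId: question.id,
        chosen: index,
        correct,
        page: location.pathname,
        timestamp: Date.now(),
      }));
    });
    container.appendChild(button);
  });
}
```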


[1] Sticht, T.G., Fox, L.C., Hauke, R.N., Zapf, D.W.: The Role of Reading in the Navy. DTIC Document (1977)

Reader goals – Reading to do a task later

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reading to Do a Task Outside the Website Later

When reading to do a task outside the website, readers accomplish their goals off the website and after they’ve read the content that provides the prerequisite learning for the task. Some examples of such interactions include:

  • The United States Internal Revenue Service’s instructions on filling out Form 1040 [1].
  • The Aircraft Owners and Pilots Association (AOPA)’s information about performing crosswind landings [2].
  • An article about how to do well in a job interview [3].

The figure shows the relationship between the readers’ interaction with the content and the task they hope to accomplish.

[Figure: Reading to do a task outside the website later]

When authoring content for this type of interaction, the web usability goals include search-engine optimization. Aligning your terms and vocabulary with your audience’s is an important and easy way to make the content discoverable.
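As a rough illustration of what term alignment might look like in practice, here is a hypothetical sketch that scores a topic’s vocabulary against the search terms an audience actually uses (for example, from search logs). The tokenizer and the sample data are stand-ins, not a recommendation of a specific tool.

```typescript
// Hypothetical sketch: how much of the audience's search vocabulary
// does a topic actually use? Data and tokenizer are illustrative.

function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z0-9']+/g) ?? []);
}

function vocabularyCoverage(topicText: string, audienceQueries: string[]): number {
  const topicTerms = tokenize(topicText);
  const queryTerms = tokenize(audienceQueries.join(" "));
  let covered = 0;
  for (const term of queryTerms) {
    if (topicTerms.has(term)) covered++;
  }
  return queryTerms.size === 0 ? 0 : covered / queryTerms.size;
}

// A low ratio suggests the topic doesn't use the words readers
// search with, which hurts discoverability.
console.log(vocabularyCoverage(
  "How to land in a crosswind: crab angle and sideslip techniques",
  ["crosswind landing", "crab vs slip", "gusty landing technique"],
));
```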

Of course, providing meaningful, interesting, and helpful content is critical. In this type of interaction, understanding the nature and relationship of the content and the task is key to getting meaningful feedback on how well your content is doing in those categories. Because this type of interaction consists of two temporally separate events (reading/learning and doing), it might be more effective to assess them separately. For example, you could include affordances in the content that test the intermediate goal of learning before the task is attempted, and consider methods to collect and coordinate information about task completion.

Consider the case of a driver’s license test-preparation site. The site could include a quiz that lets the reader (and the site manager and stakeholders) gauge the reader’s learning and the content’s short-term effectiveness, perhaps also giving the reader feedback on areas that require additional study. The task, passing the written driver’s license test in this example, would occur later and be measured at the Department of Licensing. The two sets of results could then be correlated to evaluate how well the test-preparation site supported the actual task of passing the license exam, as in the sketch that follows.
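Here is a hedged sketch of what that correlation could look like, assuming readers opt in with some identifier that both data sets share; the score bands and field names are invented for illustration.

```typescript
// Hypothetical sketch: relate short-term quiz scores on the prep site
// to the later exam outcome. Assumes an opt-in shared identifier.

interface QuizResult { readerId: string; score: number }     // 0..100, from the site
interface ExamResult { readerId: string; passed: boolean }   // reported later

function passRateByScoreBand(quizzes: QuizResult[], exams: ExamResult[]): Map<string, number> {
  const outcome = new Map<string, boolean>(exams.map(e => [e.readerId, e.passed]));
  const bands = new Map<string, { passed: number; total: number }>();

  for (const quiz of quizzes) {
    const passed = outcome.get(quiz.readerId);
    if (passed === undefined) continue; // no exam result reported yet
    const band = quiz.score >= 80 ? "high" : quiz.score >= 50 ? "medium" : "low";
    const tally = bands.get(band) ?? { passed: 0, total: 0 };
    tally.total += 1;
    if (passed) tally.passed += 1;
    bands.set(band, tally);
  }

  // Pass rate per quiz-score band: if high scorers pass the real exam
  // more often, the prep content is plausibly doing its job.
  const rates = new Map<string, number>();
  for (const [band, tally] of bands) {
    rates.set(band, tally.passed / tally.total);
  }
  return rates;
}
```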

In this example, you could also ask the reader about satisfaction during and after their interaction with the content, as long as the questions did not interfere with the learning task.


[1] http://www.irs.gov/pub/irs-pdf/i1040gi.pdf

[2] http://flighttraining.aopa.org/students/solo/skills/crosswind.html

[3] http://mashable.com/2013/01/19/guide-to-job-interview/

Reader goals – Reading to do now

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reading to Do a Task Outside the Website Now

Readers who interact with a website to do a task outside the website seek to complete their task while consulting the website for the information they need. Examples of such websites include sites that describe how to repair a household appliance or how to cook a meal. The figure shows the interaction between the readers’ goal and their interaction with the content.

[Figure: Reading to do a task outside of the website now]

In this type of interaction, readers interact with the content after they decide to perform the task and end their interaction with the content when they feel confident enough to complete the task without additional information. At that point, they continue towards their goal without the content. Depending on the complexity and duration of the task, readers might return to the content several times during the task, but the key aspect of this interaction with the content is that it does not coincide with task completion.

This interaction can influence several aspects of the design. For example, readers might like to print the web content to save or refer to later, so it might be inconvenient to have the content spread over several web pages. At the same time, because readers might stop interacting with the content at any point, it could make sense to divide the content into individual pages of logical steps with natural breaks.

Because readers stop interacting with the content before they complete their task, asking for information about the task when they leave the content might be confusing because they haven’t finished it yet. On the other hand, asking about the content might be reasonable.

Tracking progress, success, and satisfaction for this type of interaction requires coordination with the content design. The task and subtask flows must be modeled in the content’s design so that the instrumentation used to collect data coordinates with the readers’ interaction. Because readers can leave before they read all of the content and still complete their task successfully, traditional web-based metrics such as average time-on-page and path are ambiguous with respect to the readers’ experiences. It is impossible, for example, to know whether readers exiting a procedure on the first step is good or bad without also knowing whether they are dissatisfied or unsuccessful with the content. Ideally, information about the experience is collected shortly after readers accomplish their goal, for example, by prompting them to post a review of their recipe on social media after they finish.
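As a rough sketch of what such coordination might look like, the snippet below logs step-level events against a task model so that an early exit can later be interpreted alongside goal-related feedback rather than read as raw abandonment. The event names, task identifier, and endpoint are assumptions.

```typescript
// Hypothetical sketch: step-level instrumentation for a multi-step
// procedure. Event names and the "/metrics/steps" endpoint are invented.

type StepEvent =
  | { kind: "step-viewed"; step: number }
  | { kind: "step-completed"; step: number }
  | { kind: "exited"; lastStep: number };

let currentStep = 1; // updated as the reader moves through the procedure

function logStepEvent(taskId: string, event: StepEvent): void {
  navigator.sendBeacon("/metrics/steps", JSON.stringify({
    taskId,
    page: location.pathname,
    timestamp: Date.now(),
    ...event,
  }));
}

// An exit on step 1 of a 10-step recipe is ambiguous on its own; paired
// with a later "How did your recipe turn out?" prompt, it becomes readable.
window.addEventListener("pagehide", () => {
  logStepEvent("bake-sourdough", { kind: "exited", lastStep: currentStep });
});
```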

Reader goals – Reading to be reminded

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reading to be Reminded

Reading to be reminded, or Reading to Do Lite, occurs when readers visit an informational site with the confidence that they already know most of what they need to know about a topic to complete their task, but they just need a refresher. Readers use the website as a form of offline information storage that they may use either online or elsewhere. By knowing the information is available online, readers are confident that they don’t need to remember the details, just where they can find them. Brandt et al. [1] noticed this pattern while observing software developers who “delegated their memory to the Web, spending tens of seconds to remind themselves of syntactic details of a concept they new [sic] well.”

The figure shows how interactions of this type might relate to a reader’s task.

[Figure: Reading to be reminded]

Because, as Redish [2] says, readers will read “until they’ve met their need,” readers will spend as little time in the site as they need interacting with the content. Once they have been reminded of the information they need, they will return to their original task.

Topic design principles that serve this interaction include making the content easy to find, navigate, and read. Visible headings and short “bites” and “snacks” of information [2] are well suited to such a goal. However, my research in developer documentation suggests that these guidelines depend on the specific context, which is a reminder to know your audience. Knowing your audience is also key to using the terms they will recognize.

Website-based metrics are not particularly helpful in determining the quality of the readers’ interactions. A good time-on-page value, for example, might be short, to the point of appearing to be a bounce, or it might be long. The number of times a page is viewed is similarly ambiguous when it comes to understanding the quality of the readers’ interactions.

At the same time, readers’ engagement and focus on their primary task (the one that sent them to this content) means that asking for qualitative information about their experience is likely to be seen as a distraction. Asking about the reader’s experience should be done soon after the interaction and with as brief a satisfaction questionnaire as possible, perhaps only one question, such as “Did this topic help you?”
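A minimal sketch of that one-question instrument might look like the following; the “/feedback/helped” endpoint and the wording are placeholders to adapt to your own content.

```typescript
// Minimal sketch: a one-question "Did this topic help you?" widget.
// The endpoint is hypothetical; keep the interruption as small as possible.

function askDidThisHelp(container: HTMLElement): void {
  const label = document.createElement("span");
  label.textContent = "Did this topic help you? ";
  container.appendChild(label);

  for (const answer of ["Yes", "No"]) {
    const button = document.createElement("button");
    button.textContent = answer;
    button.addEventListener("click", () => {
      navigator.sendBeacon("/feedback/helped", JSON.stringify({
        page: location.pathname,
        helped: answer === "Yes",
        timestamp: Date.now(),
      }));
      container.textContent = "Thanks!"; // stay out of the reader's way
    });
    container.appendChild(button);
  }
}
```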


[1] Brandt, J., Guo, P.J., Lewenstein, J., Dontcheva, M., Klemmer, S.R.: Two Studies of Opportunistic Programming: Interleaving Web Foraging, Learning, and Writing Code. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1589–1598. ACM (2009)

[2] Redish, J.: Letting Go of the Words: Writing Web Content that Works (2nd ed.). Morgan Kaufmann, Elsevier, Waltham, MA (2012)

Reader goals – Reading to do here

This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reading to Do Here

Reading to accomplish a task in the website, or Reading to Do Here, is characterized by readers interacting with a page in a website to accomplish a specific task through the site itself. Some familiar examples include registering for a library account, subscribing to an online newsletter, or renewing a business license.

Readers interact with the content or the site shortly after they decide to accomplish the task and they leave shortly after they finish. The figure illustrates the interaction in such a task.


[Figure: Reading to Do Here]

Readers who use content in this way will want to find the page that helps them accomplish the task as quickly as possible and then complete the task as efficiently as possible. While they will want to know that they have successfully completed the task before they leave the website, they generally won’t remember much about the experience afterward unless it was especially negative or positive.

The figure shows a very common type of web interaction, and many texts on web usability describe the design implications, which depend on the site, context, and audience.

Because the readers’ task is performed almost entirely in the context of the web interaction, measuring the success of the interaction is easily accomplished through the site without imposing on the reader. The web server can collect data concerning the time spent in the interaction; the rate of successful operations (e.g., registrations, applications, or whatever the interaction is designed to accomplish); and the path through the interaction (e.g., backtracks, sidetracks, and early exits). Requests for qualitative feedback should occur soon after the interaction, while readers still remember it.

While this interaction model is intended for informational sites, it also matches the interaction model of commercial sites, such as shopping or other e-commerce sites. As such, many of the analytics tools and instruments that work in those contexts will also work in this interaction model.
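For illustration, this hypothetical sketch summarizes a simple server-side event log into the measures described above: session counts, success rate, and mean time to complete. The event shape and step names are assumptions; most analytics tools provide an equivalent out of the box.

```typescript
// Hypothetical sketch: summarize server-side interaction events.
// "confirmation" stands in for whatever marks a successful completion.

interface InteractionEvent {
  sessionId: string;
  step: string;       // e.g., "form-start", "form-submit", "confirmation"
  timestamp: number;  // ms since epoch
}

function summarize(events: InteractionEvent[]) {
  const bySession = new Map<string, InteractionEvent[]>();
  for (const event of events) {
    const list = bySession.get(event.sessionId) ?? [];
    list.push(event);
    bySession.set(event.sessionId, list);
  }

  let completed = 0;
  let totalDurationMs = 0;
  for (const session of bySession.values()) {
    session.sort((a, b) => a.timestamp - b.timestamp);
    if (session.some(e => e.step === "confirmation")) {
      completed++;
      totalDurationMs += session[session.length - 1].timestamp - session[0].timestamp;
    }
  }

  return {
    sessions: bySession.size,
    successRate: completed / bySession.size,
    meanTimeToCompleteMs: completed ? totalDurationMs / completed : NaN,
  };
}
```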

What to measure?

One of the things I’ve learned from writing developer docs for more than 11 years is that measuring API documentation is difficult. Running the study for my dissertation gave me detailed insight into some of the reasons why.

The first challenge to overcome is answering the question, “What do you want to measure?” That question is followed immediately by, “…and under what conditions?” These are valid and essential, but not simple, questions. Stepping back from them, a higher-level question comes into view: “What’s the goal?” …of the topic? …of the content set? And then back to the original question: …of the measurement?

For my dissertation, I spent considerable effort scoping the experiment down to something manageable, measurable, and meaningful, ending up at the relevance decision. Clearly there is more to the API documentation experience than deciding whether a topic is relevant, but the relevance decision seemed to be the most easily identifiable, discrete event in the overall API reference topic experience. It’s a pivotal point in that experience, but by no means the only one.

The processing model I used was based on the TRACE model presented by Rouet (2006). Similar cognitive-processing models were also identified in other API documentation and software development research papers. In this model, the experiment focuses on step 6.

[Figure: The Task-based Relevance Assessment and Content Extraction (TRACE) model of document processing. Source: Rouet, J.-F. (2006). The Skills of Document Use: From Text Comprehension to Web-Based Learning (1st ed.). Lawrence Erlbaum Associates.]

Even in this context, my experiment studies a very small part of the overall cognitive processing of a document and an even smaller part of the overall task of information gathering to solve a larger problem or to answer a specific question.

To wrap this up, let me return to the original question…that is, what was the question?

  1. The goal of the topic is to provide information that is easily accessible to the reader.
  2. The easily-accessible goal is measured by the time it takes the reader to identify whether or not the topic provides the information they seek.
  3. The experiment simulates the reader’s task by providing the test participants with programming scenarios in which to evaluate the topics.
  4. The topic versions are varied randomly to reduce order effects and bias, and participants see only one version of each topic so that seeing other variations doesn’t bias their experience.

In this experiment, other elements of the TRACE model are managed by or excluded from the task.
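To make that design concrete, here is a hedged sketch of the assignment and timing logic such an experiment implies: each participant sees exactly one deterministically chosen version of each topic, and the time to the relevance decision is recorded. The names and the four-version setup are illustrative; this is not the study’s actual code.

```typescript
// Illustrative sketch of between-subjects version assignment and
// decision timing. The four versions and field names are assumptions.

const VERSIONS = ["A", "B", "C", "D"] as const;
type Version = (typeof VERSIONS)[number];

function assignVersion(participantId: string, topicId: string): Version {
  // Deterministic hash so a participant always sees the same version
  // of a given topic, while versions vary across participants.
  let hash = 0;
  for (const ch of participantId + topicId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return VERSIONS[hash % VERSIONS.length];
}

interface RelevanceDecision {
  participantId: string;
  topicId: string;
  version: Version;
  relevant: boolean;  // the participant's judgment
  decisionMs: number; // time from topic display to decision
}

function recordDecision(
  shownAt: number, // Date.now() when the topic was displayed
  partial: Omit<RelevanceDecision, "decisionMs">,
): RelevanceDecision {
  return { ...partial, decisionMs: Date.now() - shownAt };
}
```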

API reference topic study – thoughts

Last month, I published a summary of my dissertation study, and I want to share some of the thoughts the results provoked. My first thought was that my experiment was broken. I had four distinctly different versions of each topic, yet saw no significant difference between them in the time participants took to determine the relevance of the topic to the task scenario. Based on all the literature about how people read on the web and the importance of headings and in-page navigation cues in web documents, I expected to see at least some difference. But no.

The other finding that surprised me was the average length of time participants spent evaluating the topics: whether the topic was relevant or not, they reviewed it for an average of about 44 seconds before deciding. This was interesting for a couple of reasons.

  1. In web time, 44 seconds is an eternity–long enough to read the topic completely, if not several times. Farhad Manjoo wrote a great article about how people read Slate articles online, which supports the widely held notion that people don’t read online. However, API reference topics appear to be different from Slate articles and other web content, which is probably a good thing for both audiences.
  2. The average time spent reading a reference topic to determine its relevance was the same whether the topic was relevant to the scenario or not. I would have expected them to differ, with the non-relevant topics taking longer than the relevant ones, on the assumption that readers would spend more time looking for an answer. But no. In both cases, participants took about 44 seconds to decide whether the topic applied.

While these findings are interesting and bear further investigation, they point out the importance of readers’ contexts and tasks when considering page content and design. In this case, changing one aspect of a document’s design can improve one metric (e.g., information details and decision speed) at the cost of degrading others (credibility and appearance).

The challenges then become:

  1. Finding ways to understand the audience and their tasks better to know what’s important to them
  2. Finding ways to measure the success of the content in helping to accomplish those tasks

I’m taking a stab at those in the paper I’ll be presenting at the HCII 2015 conference next month.

Checklist for technical writing

[Figure: Devin Hunt’s design hierarchy]

Devin Hunt posted this figure from “Universal Principles of Design,” an adaptation of Maslow’s Hierarchy of Needs for design. It seemed like it could also apply to technical writing. Working up from the bottom…

Functionality

As with a product, technical content must work. The challenge is knowing what that actually means and how to measure it. Unfortunately, for a lot of content, this is fuzzy. I’m presenting a paper next month that should help provide a framework for defining this, but, as with Maslow’s triangle, you must do this before you can hope to accomplish the rest.

For technical content, like any product, you must know your audience’s needs to know what “works” means. At the very least, the content should support the user’s usage scenarios, such as getting started or onboarding, learning common use cases, and having reference information to support infrequent, but important, usage or application questions. What this looks like is specific to the documentation and product.

Reliability

Once you know what “works” means, you can tell whether the content works and whether it does so consistently. Again, this requires knowledge of the audience, not unlike product design.

This is tough to differentiate from functionality, except that it adds the dimension of providing the functionality over time. Measuring it is a matter of tracking the functionality metrics over time, as in the sketch below.
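As a small illustration, a rolling mean over whatever functionality metric you settled on is one simple way to watch for reliability problems; the window size and data here are arbitrary.

```typescript
// Illustrative sketch: a rolling mean over a functionality metric
// (e.g., daily task-success rates from a feedback instrument).

function rollingMean(values: number[], window = 7): number[] {
  const means: number[] = [];
  for (let i = 0; i + window <= values.length; i++) {
    const slice = values.slice(i, i + window);
    means.push(slice.reduce((sum, v) => sum + v, 0) / window);
  }
  return means;
}

// A sustained drop in the rolling mean flags a reliability problem
// even when any single day's value looks like noise.
console.log(rollingMean([0.81, 0.79, 0.84, 0.80, 0.78, 0.83, 0.82, 0.64, 0.61]));
```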

Usability

Once you know what working content looks like and that it works consistently, you can make sure it does so in a way that is as effortless as possible.

Separating usability from functionality is tough in the content case. If content is not usable, does it provide functionality? If you look closely, you can separate them. For example, a content set can have all the elements that a user requires, yet they can be difficult to find or navigate. Likewise, the content might all exist but be accessible only in a way that is inconvenient or disruptive to the user. As with product development, understanding the audience is essential, as is user testing to evaluate this.

Proficiency

Can readers become expert at using the documentation? One could ask whether they should become experts, but in the case of a complex product with a diverse set of features and capabilities, it’s not hard to imagine a correspondingly large set of documentation to help users develop expertise.

What does this look like in documentation? At the very least, the terms used in the documentation should correspond to the audience’s vocabulary to make searching for new topics easier.

Creativity

Not every product supports creativity, nor does every documentation set. However, those that do make the user feel empowered and are delightful to use. A noble, albeit difficult, goal to achieve, but something worthy of consideration.

This might take the form of community engagement in forums, or ongoing updates and tips to increase the value of the documentation and the product to the audience.