Videos in technical communication

The subject of videos frequently comes up in conversations about technical communication, even when talking about API documentation. On the one hand, they can add some zing to your technical content (and what technical content can’t use a boost in the zing department?). On the other hand, they can produce a negligible, or even negative, return on the cost of producing them.

Video genres

To make this discussion more concrete, I consider these genres:

  • How-to videos
  • Can-do videos
  • Meet-me videos

How-to videos

These videos describe, and usually demonstrate, a task. Ideally, they describe the end-state (goal), beginning state (you and your problem), and the steps required to take the viewer from beginning to end.

The video on how to repair eyeglasses is still my favorite example of this genre.

Depending on the nature of the task, the viewers’ goals are often reading (viewing) to do now or reading (viewing) to do a task later.

Can-do videos

Many products, software and APIs included, have more features than meet the eye, or more applications than the viewer might realize. The line between promotional and educational can-do videos is fuzzy. While technically user education, they can seem promotional because they are promoting a capability of the product. The difference is in the tone and the call-to-action.

Until I produced a video about a Microsoft API, I didn’t think it was possible to make a compelling video about an API, especially one with no user interface. It turns out that it is possible. It just takes imagination (and a lot of effort).

The viewers’ goal for these videos is, invariably, reading (viewing) to do a task later, if only because, before watching the video, they didn’t know they could do the task at all.

Meet-me videos

These are the videos in which a member of the product or development team provides a behind-the-scenes look at the product, its development, or its manufacture. When these videos provide information about the internal architecture and implementation that the viewer can apply, they can be interesting and educational.

There are many bad examples of this genre, which makes the standout ones really stand out. The TED talk by Tony Fadell is a great meet-me video because in it we learn more about him in a way that is very accessible (as in, “you could do what I do, too”).

The viewers’ goal for these videos is often reading (viewing) to learn, because, unlike how-to and can-do videos, it’s hard for the viewer to know, in advance, what the video will deliver in terms of knowledge or entertainment.

Which one to choose?

As always, it depends. I think the audience-market-product framework applies to videos as it does to any other type of content. If you understand what resonates with the audience and how to present that in the prevailing market, the best type of video for that situation should be clear. Understanding the readers’ goals, of course, helps bring the answer into focus.

Audience, Market, Product

In a podcast interview I did with Tom Johnson, I mentioned this framework as a way to evaluate technical documentation requirements. The components of audience, market, and product aren’t anything new, nor is considering them in documentation planning. What’s been missing, however, is an effective way to understand them that actually informs the documentation.

This framework is my latest iteration on how to apply the 12 cognitive dimensions of API usability to technical documentation. These dimensions are, by themselves, very difficult to apply for various reasons. Still, I think the notion of identifying the components and elements of an interaction can be useful, provided the method is usable. So, I’ve taken a step back from the level of detail of the 12 dimensions to these three.

In this framework, it’s essential to consider not just the documentation but the entire customer experience in which the documentation will reside in order to assess the requirements correctly. I’m still thinking out loud, because I think there’s some value in lingering on the question(s) before diving into the solution process.

So to review the framework’s components…

Audience

These are the people who will (or who you expect will) read the content. Content includes anything you write for someone else to read. The boundaries of this depend on a lot of local variables, but it should include all the content of the entire customer experience. Your audience might be segmented into groups such as business or purchase decision makers, direct users, indirect users, support staff, and developers. You should know how they all interact with the entire customer experience.

Market

For this analysis, the market is the space in which your company or product is acquired. It could be an open-source product that offers a service or benefit similar to others. It could be downloaded from an app store. It could be something sold door-to-door. How your product appears in the space it shares with other similar products influences your content priorities. The more you know about the relationship between your product, its competitors, and its customers, the better you can assess those influences.

Product

Finally, there’s the product itself. How does it work? What does it do? How is it designed? What are its key features and benefits to the customer? What are its challenges? Knowing how the product’s features interact with the customer (i.e., the audience) has a significant influence on the documentation.

And so…

And so, that’s where it begins. I’m still formulating the questions, and I think the questions are the key to bringing this down from a theoretical notion to something that can be applied by practitioners.

It all starts with knowledge (as opposed to assumption and conjecture) and that usually comes from research. With regard to research, I found these articles to be interesting:

Next, I’ll look at the questions that are specific to each component.

Best practice…for you?

Last week, I saw a post on LinkedIn about a “new” finding (from 2012) that “New research shows correlation between difficult to read fonts and content recall.” First, kudos for not confusing correlation and causation (although the study was experimental and did demonstrate a causal relationship), but the source article is an example of inappropriate generalization. To the point of this post, it also underscores the context-sensitive nature of content and how such advice and best practices should be tested in each specific context.

Hard to read, easy to recall?

The LinkedIn post refers to an article in the March 2012 issue of the Harvard Business Review. The HBR article starts out by overgeneralizing, summarizing the finding of a small experiment as, “People recall what they’ve read better when it’s printed in smaller, less legible type.” This research was also picked up by Malcolm Gladwell’s David and Goliath, which has the effect of making it almost as true as the law of gravity.

Towards the end of the HBR article, the researcher tries to rein in the overgeneralizations by saying (emphasis mine), “Much of our research was done at a high-performing high school…It’s not clear how generalizable our findings are to low-performing schools or unmotivated students.” …or perhaps people who are not even students? Again, kudos for trying. Further complicating the finding stated by the HBR article is that the study’s findings have not been reliably replicated in subsequent studies, with other populations, or in larger groups. I’m not discounting the researcher’s efforts; in fact, I agree with his observation that the conclusions don’t seem to be generalizable beyond the experiment’s scope.

Context is a high-order bit

All this reinforces the notion that when studying content and communication, context is a high-order bit [1]. Because it is a high-order bit, ignoring it can have profound implications for the results. Any “best practice” or otherwise generalized advice should not be considered without including its contexts: the context in which it was derived and the context into which it will be applied.

This also reinforces the need to design content for testing, and then to actually test and analyze it.



[1] In binary numbers, the high-order bit influences the value more than all of the lower-order bits put together.
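The claim that a high-order bit outweighs all of the lower-order bits combined can be checked directly. Here is a minimal Python sketch; the 8-bit example is arbitrary, and the relationship holds for any width:

```python
# With n lower-order bits, their maximum combined value is 2**n - 1,
# which is always less than the high-order bit's value of 2**n.
n = 7  # number of lower-order bits in an 8-bit value

high_order_bit = 1 << n        # 0b10000000 == 128
all_lower_bits = (1 << n) - 1  # 0b01111111 == 127 (every lower bit set)

# The single high-order bit outweighs all the lower bits together.
assert high_order_bit > all_lower_bits
print(high_order_bit, all_lower_bits)  # 128 127
```

The same asymmetry is the point of the analogy: get the context (the high-order bit) wrong, and no amount of getting the details (the lower-order bits) right can make up for it.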

Studies show…

In the quest to do more with less, one method I’ve seen used to get the job done more quickly is to rely on best practices and studies. It’s not that referring to best practices or studies is bad, but they should be the starting point for decisions, not the final word. Why?

Your context is unique

This was made obvious in my dissertation study, in which the effect of applying best practices depended on what was being measured (i.e., what mattered). In the API reference topics I studied, using (or not using) the headings and design elements suggested by the prevailing best practices made no difference in reading performance, but it made a significant difference in how readers perceived the topics.

Those results applied to the context of my experiment and they might apply to other, similar contexts, but you’d have to test them to know for sure. Does it matter? You tell me. That, too, depends on the context.

A study showed…

A report on the effect that design variations had on a news site’s home page came out recently, showing that a modern interface had better engagement than a more traditional image-plus-text interface. However, readers of the latter interface had better comprehension of the articles presented.

Since it relates design, comprehension, and engagement, I thought it was quite interesting. I skimmed the actual study, which seemed reasonable. I’m preparing myself, however, for what the provocative headline of the blog article is likely to produce: its use in the inevitable “studies show…” argument. It has all the components of great “studies show” ammo: it refers to modern design (timely), it has mixed results (so you can quote the result that suits your argument), and it has a catchy headline (so you don’t need to read the article or the report).

Remember, your context is unique

Starting from other studies and “best practices” is great. But, because your context is, invariably, unique, you’ll still need to test and validate.

When it comes to best practices and applying other studies, trust but verify.

Reader goals – Overview

The reader goals in this series describe the different interactions readers have with informational content, to help content developers and site managers provide the content that best accommodates their audiences’ goals. The underlying assumptions behind these interactions are twofold:

  • Readers come to informational content with some goal in mind.
  • The readers’ goals might not coincide with their content experience.

Understanding readers’ goals is critical to providing the content and experience that helps readers achieve their goals. Understanding the relationship between reader goals and content also informs the feedback that the reader can provide and the best time to collect that feedback.

Informational content

The interactions described in this series are based on (and primarily intended for) informational content that has no specific commercial goal. They might also apply to sites with commercial goals, but they were developed without such goals in mind.

Audience analysis

The reader-content relationships described in this series are best discovered through audience research. Having these models in mind can help inform and direct your research. These models can also help identify and characterize the patterns you observe in your audience research.

At the same time, as a writer, I realize that it isn’t always possible to conduct audience research before the content is required. In those cases, these models provide a reasonable starting point from which to later collect data that can be used to refine your model and content. By taking what you know about your audience, you can select an interaction model that fits what you know and use that model as the basis for your initial draft of the content and its metrics.

Feedback and metrics

A key part of these interactions is to help identify what type of feedback can be collected and the best time to collect it. From the readers’ perspective, the content is the means to accomplish a goal; therefore, goal-related feedback is the most representative measure of content effectiveness.

When readers’ goals coincide with completing the content, as in the Reading to do here case, collecting goal-related feedback at the end of the content makes perfect sense. However, we found that much of the content found in informational sites has a different relationship with readers’ goals. Recognizing this and altering the feedback instruments to match can improve the quality of the feedback on the content.

Finally, the interaction descriptions in this series are somewhat vague when it comes to specific instrumentation and metrics. This is for two reasons. First, the best instrument to use is very context specific. Second, because of this, we haven’t studied this aspect enough to make general recommendations. However, we’re working on it.


This series looks at the different site interactions that readers have with informational sites and is adapted from the full paper on the topic that I wrote with Jan Spyridakis and presented at HCII 2015: Using Readers’ and Organizations’ Goals to Guide Assessment of Success in Information Websites. See the entire series

Reader goals – Reading to learn


Reading to Learn (to use Later or to Apply with Other Information)

Reading to Learn to Use Later, without a particular or immediate task in mind, is similar to what Sticht described as Reading to Learn [1]. The critical distinctions between this and Reading to do now and Reading to do a task later are:

  • The reading task in your content is a subtask of the reader’s ultimate goal of learning a new concept or skill.
  • The reader does not have a specific goal beyond an increase in knowledge.

An example of this type of reader goal, where the goal is accomplished after using the website or in conjunction with other information, would be reading websites and books about design principles in order to use the information later when designing a website. The connection between the reading and the ultimate application of the knowledge is too distant to have a meaningful and identifiable cause-effect relationship.

In a Reading to Learn to Use Later goal, as shown in the figure, the reader reads information from many different locations, of which your content might be only one source. While similar to the Reading to Be Reminded interaction type, in this interaction the reader is seeking more information from the content. The content is new to the reader, and the reader might consult multiple sources to accumulate the information required to reach the learning goal.

[Figure: Reading to learn]

It is difficult to measure how well readers are accomplishing their ultimate learning goal when their interaction with the website may be one step of many and they might not use the information until much later. However, it is reasonable to collect information about the immediate experience. For example, the content could encourage readers to interact with the page, both to provide feedback to the reader and to collect information about the readers’ experience. The content could also include quizzes and affordances such as prompts to share the content with a social network.


[1] Sticht, T.G., Fox, L.C., Hauke, R.N., Zapf, D.W.: The Role of Reading in the Navy. DTIC Document (1977)

Reader goals – Reading to do a task later


Reading to Do a Task Outside the Website Later

When reading to do a task outside the website, readers accomplish their goals away from the website and after they’ve read the content that provides the prerequisite learning for the task. Some examples of such interactions include:

  • The United States Internal Revenue Service’s instructions on filling out the Form 1040[1].
  • The Aircraft Owners and Pilots Association (AOPA)’s information about performing crosswind landings [2].
  • An article about how to do well in a job interview [3].

The figure shows the relationship between the readers’ interaction with the content and the task they hope to accomplish.

[Figure: Reading to do a task outside the website later]

When authoring content for this type of interaction, the web usability goals include search-engine optimization. Aligning terms and vocabulary with what readers use is an important and straightforward way to make the content easy for the reader to discover.

Of course, providing meaningful, interesting, and helpful content is critical. In this type of interaction, understanding the nature and relationship of the content and the task is a key element in getting meaningful feedback on how well your content is doing in those categories. Because this type of interaction consists of two temporally separate events, reading/learning and doing, it might be more effective to assess them separately. For example, you could include affordances in the content that test the intermediate goal of learning before the task is attempted, and consider using methods to collect and coordinate information about task completion.

Consider the case of a driver’s license test-preparation site. The site could include a quiz to help the reader (and the site manager and stakeholders) determine the readers’ learning and the content’s effectiveness in the short term, perhaps also providing feedback to the reader on areas that require additional study. The task, passing the written driver’s license test in this example, would occur later and be measured at the Department of Licensing. The two experiences could then be related somehow to evaluate the effectiveness of the test-preparation site against the actual task of passing the license exam.

In this example, readers could also be asked about satisfaction during and after their interaction with the content to understand how they feel about it, as long as the questions did not interfere with the learning task.
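To make the idea of relating the two experiences concrete, here is a hypothetical Python sketch. The anonymous IDs, quiz scores, and exam outcomes are all invented for illustration; in practice, joining these data sets would require an opt-in identifier and appropriate privacy safeguards:

```python
# Hypothetical data: quiz scores from the test-prep site and later
# written-exam outcomes, keyed by an (assumed) opt-in anonymous ID.
quiz_scores = {"a1": 95, "a2": 60, "a3": 82}          # site quiz, percent
exam_results = {"a1": True, "a2": False, "a3": True}  # passed the written test?

# Split quiz scores by the reader's eventual exam outcome.
passed = [quiz_scores[i] for i in quiz_scores if exam_results.get(i) is True]
failed = [quiz_scores[i] for i in quiz_scores if exam_results.get(i) is False]

def average(scores):
    """Mean quiz score for a group, or None if the group is empty."""
    return sum(scores) / len(scores) if scores else None

# If high quiz scores track exam passes, the short-term measure
# (the quiz) is a useful proxy for the long-term goal (the exam).
print(average(passed), average(failed))  # 88.5 60.0
```

The interesting signal is the gap between the two averages: when it shrinks toward zero, the quiz is no longer predicting the real-world outcome, which suggests the content (or the quiz) needs rework.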


[1] http://www.irs.gov/pub/irs-pdf/i1040gi.pdf

[2] http://flighttraining.aopa.org/students/solo/skills/crosswind.html

[3] http://mashable.com/2013/01/19/guide-to-job-interview/

Reader goals – Reading to do now


Reading to Do a Task Outside the Website Now

Readers who interact with a website to do a task outside the website seek to complete their task while reading the website for the information they need. Examples of such websites include sites that describe how to repair a household appliance or how to cook a meal. The figure shows the interaction between the readers’ goal and their interaction with the content.

[Figure: Reading to do a task outside of the website now]

In this type of interaction, readers interact with the content after they decide to perform the task and end their interaction with the content when they feel confident enough to complete the task without additional information. At that point, they continue towards their goal without the content. Depending on the complexity and duration of the task, readers might return to the content several times during the task, but the key aspect of this interaction with the content is that it does not coincide with task completion.

This interaction can influence several aspects of the design. For example, readers might like to print the web content to save or refer to later, so it might be inconvenient to have the web content spread over several web pages. However, because readers might stop interacting with the content at any point, the content could be divided into individual pages of logical steps with natural breaks.

Because readers stop interacting with the content before they complete their task, asking for information about their task when they leave the content might be confusing because they haven’t finished it yet. On the other hand, asking about the content might be reasonable.

Tracking progress, success, and satisfaction for this type of interaction requires coordination with the content design. The task and subtask flows must be modeled in the content’s design so that the instrumentation used to collect data about the interaction coordinates with the readers’ interaction. Because readers can leave the content before they read all of it and still complete their task successfully, traditional web-based metrics such as average time-on-page and path are ambiguous with respect to the readers’ experiences. It is impossible, for example, to know whether readers exiting a procedure on the first step is good or bad without knowing whether they are also dissatisfied or unsuccessful with the content. Ideally, collecting information about their experience will come shortly after they accomplish their goal, for example, when they post a review of their recipe on social media after they finish.
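The ambiguity of average time-on-page can be shown with a small Python sketch. The session data below is entirely invented: it simply constructs two groups of readers with opposite experiences that produce identical averages:

```python
# Hypothetical sessions: a short visit can mean instant success or an
# immediate bounce in frustration; a long visit can mean careful reading
# or a prolonged struggle. The metric alone cannot distinguish them.
sessions = [
    {"time_on_page_s": 15,  "outcome": "succeeded"},  # found the step instantly
    {"time_on_page_s": 15,  "outcome": "gave_up"},    # bounced, frustrated
    {"time_on_page_s": 300, "outcome": "succeeded"},  # read carefully, then did the task
    {"time_on_page_s": 300, "outcome": "gave_up"},    # struggled, then abandoned
]

def average_time(group):
    """Mean time-on-page, in seconds, for a list of sessions."""
    return sum(s["time_on_page_s"] for s in group) / len(group)

succeeded = [s for s in sessions if s["outcome"] == "succeeded"]
gave_up = [s for s in sessions if s["outcome"] == "gave_up"]

# Identical averages for opposite experiences.
print(average_time(succeeded), average_time(gave_up))  # 157.5 157.5
```

Only by joining the time data with an outcome signal (such as the post-task feedback described above) does the metric become interpretable.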

Reader goals – Reading to be reminded


Reading to be Reminded

Reading to be reminded, or Reading to Do Lite, occurs when readers visit an informational site with the confidence that they already know most of what they need to know about a topic to complete their task, but they just need a refresher. Readers use the website as a form of offline information storage that they may use either online or elsewhere. By knowing the information is available online, readers are confident that they don’t need to remember the details, just where they can find them. Brandt et al. [1] noticed this pattern while observing software developers who “delegated their memory to the Web, spending tens of seconds to remind themselves of syntactic details of a concept they new [sic] well.”

The figure shows how interactions of this type might relate to a reader’s task.

[Figure: Reading to be reminded]

Because, as Redish [2] says, readers will read “until they’ve met their need,” readers will spend as little time on the site as they need interacting with the content. Once they have been reminded of the information they need, they will return to their original task.

Topic design principles that serve this interaction include making the content easy to find, navigate, and read. Visible headings and short “bites” and “snacks” of information [2] are well suited to such a goal. However, my research in developer documentation suggests that these guidelines depend on the specific context, which is itself a reminder to know your audience. Knowing your audience is also key to using the terms they will recognize.

Website-based metrics are not particularly helpful in determining the quality of the readers’ interactions. A good time-on-page value, for example, might be short, to the point of appearing to be a bounce, or it might be long. The number of times a page is viewed also has an ambiguous meaning when it comes to understanding the quality of the readers’ interactions.

At the same time, the readers’ engagement and focus on their primary task (the one that sent them to this content) means that asking for qualitative information about their experience is likely to be seen as a distraction. Asking about the reader’s experience should be done soon after the interaction, with as brief a satisfaction questionnaire as possible, perhaps only one question, such as “Did this topic help you?”


[1] Brandt, J., Guo, P.J., Lewenstein, J., Dontcheva, M., Klemmer, S.R.: Two Studies of Opportunistic Programming: Interleaving Web Foraging, Learning, and Writing Code. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 1589–1598 (2009)

[2] Redish, J.: Letting Go of the Words: Writing Web Content that Works (2nd ed.). Morgan Kaufmann, Elsevier, Waltham, MA (2012)