In my previous post, I presented some experiences with testing and the resulting epiphanies. In this post, I talk more about the process I applied.
The process is simple, yet that’s what makes it difficult. The key to success is to take it slow.
Start with something simple (and then simplify it). Your first questions will invariably be too big to answer all at once, so think, “baby steps.”
Instead of asking, “How can we improve our documents?” I asked, “What do users think of our table of contents (ToC)?” Most users don’t give much thought to how we could improve our docs, unless the docs are annoyingly bad. They do use the ToC, as we found out, but not in a way that we could count.
Test with whoever you can get to sit with you. Try to ask people who are close to your target audience, if you can, but anyone who is not you, or in your group, is better than you at helping you learn things that will answer your question.
Listen with a curious mind. After coming up with an answerable question, this is the next hardest thing to do—especially if people are reviewing something that you had a hand in writing or making.
Your participants will invariably misinterpret things and miss the “obvious.” You’ll need to suffer through this without [too much] prompting or cringing. Just remind yourself those moments are where the learning and discovery happen (after the injuries to egos and knees heal, anyway).
When the participant asks for help, such as, “Where’s the button or link to do ‘X’?”, a trick I learned from more experienced usability testers is to ask them, “Where do you think it should be?” That way you learn something about the user experience, rather than just finishing the task without learning anything. If they’re still stumped, you can help them along, but only after you’ve learned something. Remember, you’re there to learn.
Act, but don’t overreact. As with taking the baby steps of coming up with an answerable question, take baby steps when applying what you learn. Remember to stay within the limitations of your method. You’re testing the few people who are available to you and who might not represent your actual audience. Even if you can interview or observe several people, they are still somewhat cherry-picked or, more precisely, convenience sampled. So, tread carefully with your conclusions.
- Apply your findings.
- Ask another question.
- Test again.
- Repeat the process.
How this worked in the past
The preceding describes the process I applied to the stories in the previous post.
In the piClinic example, I had a lot of data confirming my understanding of the clinic-visit process (admission, diagnosis, discharge), but not much about the rest of the patient record life cycle. So, I worked on visiting a wider range of users and focused on learning more about how they worked in general, and about the environment in which the system would be used. After conducting that research, I was able to recognize how adding reports would add value for the users, which would make the effort of learning and using the piClinic worth their while.
In the ToC example, we were curious about how the ToC was used, but had no data about its use—just assumptions and conjecture. To learn more, I invited a new engineer to let me observe him as he navigated our documentation to learn the system. I didn’t have a large population (just one participant, to be honest), so I couldn’t conclude that everyone, or honestly anyone but this person, wouldn’t click on the ToC. But that wasn’t the goal. I didn’t want to prove we were right; I wanted to learn about this user’s experience. With that approach, we got a new view of using the ToC that we hadn’t considered: that it was used visually, not interactively. Based on this, we decided not to instrument the ToC because doing so wouldn’t produce the data that would tell us what we were hoping to learn. Even if some people clicked on the ToC entries, we wouldn’t be able to tell how much of the audience that represented, and there were more productive research avenues to pursue.
In both of these cases, the cost of collecting the data was minimal, and the insights it produced helped improve the user experience.