Unqualified best practices are just slogans

I had the pleasure of joining Tom Johnson for another podcast, and one of the topics we touched on was so-called best practices. Today, I stumbled across this post in a thread about high-tech job interviews:

On a personal note, it was actually a series of such experiences that convinced me to take my current job in academia.

One of the replies linked to this post: Best practices considered harmfull [sic], which summed it up as “Work out what your best practice is, work out how you can improve yourself.”

Unfortunately, by the time I got to this point in my feed, my blog reflex had been triggered, and here we are.

If we have to find our own best practices, what’s the point of having best practices?

Good question.


Best practice…for you?

Last week, I saw a post on LinkedIn about a “new” finding (from 2012) that “New research shows correlation between difficult to read fonts and content recall.” First, kudos for not confusing correlation and causation (although the study was experimental and did demonstrate a causal relationship), but the source article is an example of inappropriate generalization. To the point of this post, it also underscores the context-sensitive nature of content and shows why similar advice and best practices should be tested in each specific context.

Hard to read, easy to recall?

The LinkedIn post refers to an article in the March 2012 issue of the Harvard Business Review. The HBR article starts out overgeneralizing by summarizing the finding of a small experiment as, “People recall what they’ve read better when it’s printed in smaller, less legible type.” This research was also picked up by Malcolm Gladwell’s David and Goliath, which has the effect of making it almost as true as the law of gravity.

Towards the end of the HBR article, the researcher tries to rein in the overgeneralizations by saying (emphasis mine), “Much of our research was done at a high-performing high school…It’s not clear how generalizable our findings are to low-performing schools or unmotivated students.” …or perhaps people who are not even students? Again, kudos for trying. Further complicating the finding stated in the HBR article is that the study’s findings have not been reliably replicated in subsequent studies, other populations, or larger groups. I’m not discounting the researcher’s efforts; in fact, I agree with his observation that the conclusions don’t seem to be generalizable beyond the experiment’s scope.

Context is a high-order bit

All this reinforces the notion that when studying content and communication, context is a high-order bit¹. Because it is a high-order bit, ignoring it can have a profound effect on the results. Any “best practice” or otherwise generalized advice should not be considered without including its contexts: the context in which it was derived and the context into which it will be applied.

This also reinforces the need to design content for testing, and then to test and analyze it.



1. In binary numbers, the high-order bit influences the result more than all of the lower-order bits put together.
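To make the arithmetic concrete: in an 8-bit value, the high-order bit contributes 2^7 = 128, while the seven lower-order bits together contribute at most 2^6 + 2^5 + … + 2^0 = 127, so the high bit alone outweighs all of the bits below it combined.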

Studies show…

In the quest to do more with less, one method I’ve seen used to get the job done more quickly is to rely on best practices and studies. It’s not that referring to best practices or studies is bad, but they should be the starting point for decisions, not the final word. Why?

Your context is unique

This was made obvious in my dissertation study, in which the effect of applying best practices depended on what was being measured (i.e., what mattered). In the API reference topics I studied, using the headings and design elements suggested by the prevailing best practices made no difference in reading performance, but it made a significant difference in how readers perceived the topics.

Those results applied to the context of my experiment and they might apply to other, similar contexts, but you’d have to test them to know for sure. Does it matter? You tell me. That, too, depends on the context.

A study showed…

A report came out recently on the effect that design variations had on a news site’s home page, showing that a modern interface produced better engagement than a more traditional image+text interface. However, readers of the latter interface had better comprehension of the articles presented.

Since it relates design, comprehension, and engagement, I thought it was quite interesting. I skimmed the actual study, which seemed reasonable. I’m preparing myself, however, for what the provocative headline of the blog article is likely to produce: the moment when it gets used in the inevitable “studies show…” argument. It has all the components of great “studies show” ammo: it refers to modern design (timely), it has mixed results (so you can quote whichever result suits your argument), and it has a catchy headline (so you don’t need to read the article or the report).

Remember, your context is unique

Starting from other studies and “best practices” is great. But because your context is invariably unique, you’ll still need to test and validate.

When it comes to applying best practices and other studies, trust but verify.