Long or short (posts)?

As one of my blog goals, I limited each post to no more than 500 words. My intention was to make them easy to read and write. It seemed like a reasonable goal at the time. Since then, however, I’ve read some posts that suggest this might be counterproductive. Some of the advantages of long-form posts I gleaned from what I read (cited below) include:

  • Long-form posts rank higher in search results
  • Long-form posts are shared more often
  • Long-form posts are seen as more professional

The data

According to one site, I picked just about the worst post length possible for my goal. That site shows posts of 500-800 words as the least shared. Longer content reads as analytical, shorter content is snappy, and 500-800 words is, well, just not very successful, socially.


Like so many decisions, it made sense at the time. In light of the new data, I need to reconsider that goal.

Go shorter

That’s one option. For Twitter, that seems reasonable (and technically enforced). For a blog, that seems somewhat vacuous (I suppose I should confirm this perception with some data, first). In the meantime, I’m not ready to adopt that goal for my blog.

Go longer

I resisted going longer for a couple of reasons:

  1. Longer form takes longer to write. I was originally prioritizing frequency over volume as a way to get into the habit of writing often. Perhaps I’m past that and ready to move up to longer topics.
  2. Longer form takes longer to read. I was shooting for a two-minute read, but the data say audiences want either more or less. Who am I to tell the audience they’re mistaken?


I think I’ll compromise and update my goal to fewer, longer posts.

This one, however, will be short.


Coming back to the blog

I’ve not done so well with my “four posts/month” blogging goal, lately. I finally pushed out my post on Giving criticism after it sat in my draft folder for the past month and a half. But, I’m back to talk about what I’ve been up to.


My paper with Jan Spyridakis on the documentation feedback tool we developed for my dissertation experiment was due last week, so we were working on that throughout April. I’m looking forward to presenting it this September at SIGDOC 2016. I think this tool has a lot of potential to help writers learn more about how readers interact with their content, and I’m looking forward to hearing about other applications that people will find for it.


Jan & I had our paper on the benefits (and the necessity) of testing best practices in context accepted to the PCS 2016 conference in October. This paper reviews the benefits and perils of applying best practices and examines how best practices and their application can be improved to everyone’s benefit. The paper was also due towards the end of April. While I haven’t been blogging, Jan & I were very busy whipping these two papers into shape.

Mercer University

I accepted a faculty position in Mercer University’s Technical Communication department. Mercer is located in the state of Georgia and I’ll be teaching at the Macon campus. This will be a big change in career, residence, climate, industry, and pretty much any other aspect you can imagine. After visiting Macon, Mercer, and the TC department, I was hooked. I’m looking forward to the new adventure that starts this fall.

Actually, the adventure started in April, as well, as my wife and I began scoping the logistics of the move. Exciting times.


The 2016 STC Summit starts next week. I submitted a presentation proposal that was not accepted. That was a little disappointing, but given all that I have on my list of things to do this month, it may have been a blessing in disguise. The silver lining is that my presentation will be offered as a webinar this December. See more about the webinar at: Plan Your API Documentation by Understanding the Audience, Market, and Product.

Giving criticism

It’s both easier and harder than you think, but here are a few points to make it constructive.

Wait ’til you’re asked

Sure, you can always offer unsolicited feedback, but waiting until you’re asked gives you the advantage of knowing what they want and that they’re ready to receive it. Someone who hasn’t asked for feedback is probably not ready to hear it.

If, for some reason, you feel compelled to offer unsolicited feedback, make sure that you’re doing it for their benefit and not yours.

Understand the context

Even if you’ve been asked, understand whether someone is asking for feedback or affirmation. I tend to ask this directly, which comes off as abrupt, especially when many people don’t know for sure. A smoother way is to gain some context by asking more about the project to understand their motivations for asking.

Providing a critique when someone is seeking affirmation stings, no matter how kind and constructive you are. At the same time, an affirmation seems shallow when someone is seeking constructive criticism.

Know the goal

“What do you think?” is a common way to ask for feedback. If someone asks you this, ask them to be more specific. To give effective feedback, it helps to know how they plan to use it and how much they can change as a result. The best feedback is something that can be applied.

Be constructive and specific

If you see something that you think could be improved, be specific. It takes more time to consider and articulate, but is much more informative than vague observations and opinions.

Cite your sources

If your feedback is based on research, such as recent customer feedback, survey data, or some other research, cite it! Maybe you read or learned something the presenter hasn’t (or vice versa!).

Feelings come from you

Sometimes you’ll see something that you can’t quite articulate. In that case, you have two options.

  1. You can wait until you figure it out.
    Sometimes it just takes time to bring your thoughts and feelings together into a coherent sentence. In that case, wait.
  2. Other times, expressing how you feel is the whole point of the exercise, but speak for yourself. Unless you’ve surveyed the audience, don’t speak for them. “This makes me feel good” or “I like how you’ve combined the text with the image” are perfectly fine. “It looks annoying” really means that you find it annoying, which might bear pursuing, but it’s framed as “everyone will find it annoying,” which is not necessarily the case (unless you have some research on the subject, in which case, see the previous point on sources).

If done right, providing feedback and criticism can be a win-win interaction.

Asking for criticism

Today’s Twitter gem was a Medium post from Mike Monteiro about the place of politeness in criticism, basically saying that politeness has no place in a design critique. Perhaps, but I think respect certainly has a place.

I agree with his premise that it’s “Better to get your nose bloodied in a critique of your peers, than to be slaughtered in a client’s conference room.” I disagree that a bloody nose is necessary “in a critique of [by] your peers.”

I’ll admit that I’ve delivered some of the aforementioned bloody noses, a practice that I’m working hard to reform. And, I’ve received a few, as well. In every case, the bloody-nose experience wasn’t necessary, wasn’t constructive, and, invariably, was the result of just doing it wrong.

If it hurts, you’re doing it wrong, or you’re doing the wrong thing (or, you’re just out of shape).

So, how do you get constructive and effective criticism? He mentions some steps in his article that I think bear repeating.

Get it early and get it often

Waiting until the last minute to get criticism is almost always asking for trouble. It’s unlikely that the designer will have time to apply any of the suggestions, so they will be frustrating at best and demoralizing at worst. In any event, it’s a waste of everyone’s time, and it contributes to the embarrassing experience in front of the client that he describes in the article.

It’s more constructive and effective to get frequent, small-scale, actionable feedback throughout the process than to wait. This is not an “either-or” choice, but a continuum. Nevertheless, lean towards the more frequent end of the spectrum whenever you can.

Yes, we’re busy, but what goes around, comes around, and we’re all in this together.

Know (and state) the design scope and goal

The goal of the design might not be obvious. Neither might the scope of your involvement (and your span of control). To keep the criticism focused, keep the goals and scope of the design project visible.

Start by saying, for example, “I’d like you to review my redesign of the [xyz] home page to make it more accessible to an older audience. The changes include making the type easier to read, making the call to action more visible, and clarifying the client’s value proposition. I’d like you to help me find aspects of the design that could be improved to better meet those goals.”

With that, you’ve taken 60 seconds to focus the review.

The quality of answers is proportional to the quality of the questions.

If you find that you’re not getting the feedback you want, maybe you didn’t ask for the feedback you wanted. Don’t assume that everyone reviewing your work will know the goals of your design.

Thank them

A friend told me that “Feedback is a gift.” So, like anytime you receive a gift from someone, thank them!

Next up, how to give a helpful and respectful critique.

Time to get writing…

I suppose that’s true most of the time. However, last week, I received acceptance notices from the SIGDOC 2016 and the IEEE PROCOMM 2016 conferences for my paper proposals.

These are two of my favorite conferences. They’re big enough to attract a diverse collection of speakers and topics, while not so big that you can’t meet and talk with everyone.

Their venues are also interesting: SIGDOC will be just outside of Washington, D.C., so I’m going to have to hop on the Metro and visit the sights. PROCOMM 2016 will be in Austin, TX. That’ll be a first for me. I’ve been to many towns in Texas, over the years, but not Austin.


At SIGDOC, I’ll present a look at one of the remote user-research tools I developed for my dissertation research to collect targeted feedback about readers’ experiences with a document. The goals of readers of informational content often fall outside of the web experience. This makes it difficult to know how well your content helped readers accomplish their goals through their web experience alone, let alone know which part of a large document they found helpful. The tool I’ll be presenting provides more detailed information than the traditional “thumbs-up/down” or “one-to-five” feedback responses seen on many websites, while being less intrusive than the typical online questionnaire.


At PROCOMM 2016, I’ll focus on the findings from my dissertation research that show best practices don’t always produce the best results, unfortunately. While it’s good (if not vital) to know the best practices, it’s equally (if not more) important to know what each practice is best at. It’s also vital to know what your audience finds important so that you can give them what they think is best.

All this is obvious, right? Context is king! Yet, without testing and validation, you’ll never know whether you’re delivering what the audience wants and needs. To that end, testing needs to become a best practice that’s practiced more often. My presentation will talk about some of the obstacles to this and how we can overcome them. It’s not easy, but it’s not as hard as it might sound.

Agile documentation in my email

Shortly after writing a few posts about Agile documentation, I received an invitation to this seminar in Atlanta…

Applying Agile Project Management to Information Development

I don’t know if they have some search-engine magic going on (like Amazon) or (more likely) it was just a coincidence, since they were announcing their seminar a couple of months before the event.

It’s great to see that Agile project management is getting some attention in the documentation realm. At the same time, it’s important to understand that Agile isn’t something you can just bring back from a seminar or learn from a blog post.

When I first jumped into an Agile development shop as a tech writer with a solid background in plan-driven project management (a.k.a. Waterfall), I talked to a few Agile coaches and consultants. They all said a 5-year time frame was a reasonable expectation for moving from a plan-driven to a change-driven methodology. It might happen more quickly, or it might never happen, no matter how hard you try, because it requires changing not only how people view projects but, literally, how they think about the world. That was 5 years of constant coaching and encouragement, by the way.

But, in any case, it’s good to see it’s getting some attention. You can’t get to the end of a 5-year transition if you never start.

Agile documentation in the blogosphere

This is a tough topic to research. Using the words “agile” and “documentation” in the same search query returns a mixed bag of results. Some cite the Agile Manifesto’s value of “Working software over comprehensive documentation” as meaning that Agile products need little to no documentation. A more realistic and practical interpretation of that line is to treat documentation as any other component of the project.

A project or product should not have features that do not add value. It should not have code that doesn’t add value. Nor should it have documentation that doesn’t add value.

I’ll go along with that, but when it comes to reading what others have to say about technical writing, searching for “agile technical writing” returned this collection (in no particular order):

That’s quite a list and a range of perspectives. They range in age from 2 to 12 years old, with an average of 6.3 years.

Some early observations:

  • The earlier ones spent more time explaining Agile than the more recent articles did.
  • They all offered tips for writers in one form or another.
  • About half were written as first-person experience reports; the other half took more of a third-person, instructional format.
  • Most were written in a narrative format, with a few as just bullets or bullets with some details.
  • Only a few cited additional references (although most had embedded links to related topics).

I’m not sure what conclusions to draw from this, yet. For perspective, I wanted to do a similar survey of academic literature, but searching for “Agile technical writing” didn’t produce much to read. Hmmmm….

Agile documentation in the real world

In Agile documentation in practice, I described what worked for me. It works, when it works, but it’s not without its challenges. If you can get the following right, the process should be smoother.


First and foremost, it works best when everyone is on the same page. When stakeholders have different ideas of quality, content, scheduling, iterations, or priority, things become difficult. This should come as no surprise, but it’s worth repeating because without a common understanding of the ground rules by all the stakeholders, those disagreements complicate all of the other areas.


In Agile documentation in practice, I mentioned how the traditional writing process was not what worked for me in an Agile shop. When I matched my writing process to the coding process as much as possible, things became easier. Once I was able to plan and describe my process in terms that made sense to Agile software developers—something that I couldn’t do until I embraced the process for myself—it was easier for them to understand what I was doing (and when I was doing it). Reporting progress clearly and frequently is an essential part of a successful process (Agile or otherwise).


A big part of the process consists of the tools used to execute it. Agile is designed for change: changes in requirements, changes in features (content, for documentation), changes in audience, to name a few. The tools used to manage the workload and the content must accommodate those changes as easily as possible.

Which tools are best? That depends on your process (and is worthy of a topic or two in itself). Ideally, the process will define the tools, not the other way around. In reality, however, this is often difficult to apply in practice. At least, with a clear process, you’ll know what your tools will need to do.


Collecting and analyzing data must be integral to the process. That includes audience requirements, the product road map, and the market climate at the high level, and direct customer feedback, usage and satisfaction metrics, and production metrics at the detailed end, to name a few. Without constant calibration, the writing process becomes increasingly disconnected from reality—and remember, reality is a moving target, which is why you need to keep your data up to date.


If you’re an Agile shop, this should be a problem you can handle, because the sprint-planning process takes it into account. Each sprint has its velocity (capacity) for development tasks, and it will also have a velocity for technical writing. This always raises the question:

What is the right ratio of developers to writers?

It depends, of course. Even with all of the above, you might not have enough writers to produce everything you want to deliver, but you’ll have the process to do the best you can with what you have and the data to know whether you need more or fewer writers.
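To make the arithmetic behind that question concrete, here’s a minimal sketch; the velocities and the documentation effort per development point are hypothetical numbers, not measurements from any real team.

```python
# A rough, hypothetical sketch of the staffing arithmetic behind the
# developer-to-writer ratio question. All numbers below are made up.

def writers_needed(dev_velocity: float,
                   doc_points_per_dev_point: float,
                   writer_velocity: float) -> float:
    """Estimate how many writers a sprint's development output implies."""
    doc_demand = dev_velocity * doc_points_per_dev_point  # doc work generated per sprint
    return doc_demand / writer_velocity                   # writers needed to keep pace

# Example: 80 dev points per sprint, each generating ~0.15 points of doc work,
# with each writer completing ~10 doc points per sprint.
print(round(writers_needed(80, 0.15, 10), 2))  # -> 1.2 writers
```

If your actual backlog data says the answer is bigger than the writing staff you have, that’s your case for either more writers or a shorter list of deliverables.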

Agile documentation in practice

The last couple of posts seem to state the obvious. They are, basically, variations of “know your reader/customer.” So, how hard could it be?

In my experience, there’s never time to do all of what is suggested. As a professional writer, the “topics-to-write” list has always been longer than the time available to write them—often by an order of magnitude.

Exactly the situation Agile was designed for.

I would think that writers would flock to Agile, if it’ll solve the problem of “so many docs, yet so little time.” And, they might, if given the chance.

Yet, in an unscientific survey of documentation-planning methodologies, the ones I found all look like they are based on a plan-driven (a.k.a. waterfall) project-management model:

Plan, Write, Revise, Edit, Publish

…as though publish were something final. In Agile documentation, that might describe the process of publishing a single topic, but it doesn’t scale up. If we call this sequence write-a-topic, then at the larger scale, the documentation process looks more like:

Plan, Write-a-topic, Review, Repeat

Stopping every so often to step back and look at the view (i.e. take care of more strategic planning).

What works for me

I try to organize my documentation work as much like the software-development process as possible, but it took some deprogramming to break away from the waterfall method into which I’d been indoctrinated.

It works best when I can define my documentation projects as epics, sprints, and stories.

An epic is a large-scale project or piece of functionality that spans multiple sprints and consists of many stories. For example: produce the documentation for a new feature. While the developers design the feature, I’m working with them to design the documentation—keeping the Agile methodology in mind. I won’t be able to write it all at once, so I work with the product owner and the development team to figure out what functionality will be available when (in sequence, if not by date), and what topics the customer will need and in what order (they can’t read it all at once any more than I can write it all at once).

With that information, I define the stories and prioritize them with the product owner such that if I have to stop at any point along the way (e.g. to start on the next project), I’ll know that the most valuable topics have been written and the rest can wait (or they really weren’t as valuable or important as I thought they were).
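Here’s a rough sketch of what that structure can look like in code; the classes, the example feature, and the topic titles are hypothetical, included only to illustrate the idea of prioritized stories inside a documentation epic.

```python
# A minimal, hypothetical sketch of a documentation epic made of
# prioritized stories -- not a description of any particular backlog tool.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    priority: int          # 1 = most valuable to the customer
    done: bool = False

@dataclass
class Epic:
    name: str
    stories: list = field(default_factory=list)

    def next_story(self):
        """Return the highest-priority unfinished story, or None."""
        todo = [s for s in self.stories if not s.done]
        return min(todo, key=lambda s: s.priority) if todo else None

# Example: a documentation epic for a hypothetical "export" feature.
epic = Epic("Document the export feature", [
    Story("Getting started: export your first report", priority=1),
    Story("Reference: export parameters", priority=2),
    Story("How-to: schedule recurring exports", priority=3),
])
print(epic.next_story().title)  # the most valuable topic still to write
```

The point of the structure is the ordering: stop after any story and the most valuable topics are already written.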

The sprint planning and reviews keep the priorities in focus, but that often means that the epics are not done linearly. One documentation epic was about 2 months of work, but took 4-5 months to complete because of intervening priorities. Fortunately, I designed the content such that it was always publishable (shippable, in Agile terms) and the overall content set just kept getting better as I wrote it. Eventually, it was finished—as was all the intervening, higher-value content.

As Hannibal of the A-Team would say, “I love it when a plan comes together.”

Agile documentation customers

I continue the reflection I started in my last post, reviewing Scott Ambler’s article from 2002, Can Documentation Be Agile?

To no one’s surprise, the customer emerged as a critical element of Agile documentation. The points in Scott’s article refer to the Customer as though the customer is singular. Users, the user, etc. are used similarly. In reality, however (with any luck), there is rarely only one customer. So, how can we understand all those people?

Find the center of mass

In physics, large bodies are modeled around a center of gravity or center of mass. This notion facilitates the study of very large objects, such as planet- or galaxy-sized objects in astrophysics, by modeling each one as a single point with the mass of the entire object.
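For reference, the center of mass is just the mass-weighted average of the positions of all the pieces; the formula is included here only to make the analogy concrete.

```latex
\mathbf{R}_{\mathrm{cm}} = \frac{1}{M}\sum_{i} m_i\,\mathbf{r}_i,
\qquad M = \sum_{i} m_i
```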

Customers can be modeled in a similar fashion. Many customers can be (and frequently are) modeled as a few personas or market segments. This is common in fields that deal with large numbers of people, such as marketing and advertising. In these cases, populations are described by what they have in common and treated as a single persona. Does that mean every person in a segment’s population is the same? Of course not. But a well-designed persona or segment describes the common attributes of the group sufficiently for its intended use, just as knowing the location of an object’s center of mass lets you understand its properties and behaviors when it is acted upon by external forces.

One drawback to this method is that when you miscalculate an object’s center of mass, it won’t behave as you expect it to. Its behavior will be consistent with its actual center of mass (whether you know where that is or not). Where I’ve seen personas and segments go awry in technical writing is, inevitably, when they were poorly defined or not applied at all.

Base personas on research

Just as an object’s center of mass is found through measurement, a population’s or segment’s properties are also determined through measurements and analysis (a.k.a. research).

If you want to treat a population as a single entity, you must measure it precisely.

But measuring customers is not easy. So what’s a writer to do?


Successive approximation is a method for improving and refining a measurement, and it is an important part of Agile. Measure, test, observe, refine, repeat (or pick your favorite variation of that cycle). With this approach, you can continuously improve your customer model to the point where you can treat the customer as a single entity. There are several ways to implement successive approximation, but the key is assuming from the start that you will have to evaluate and iterate, and baking those steps into every process.
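In code, the loop is almost trivial. Here’s a minimal sketch; measure() and the canned sample data are placeholders for whatever research method (surveys, usability tests, analytics) and acceptance criteria a real team would use.

```python
# A minimal sketch of a successive-approximation loop for a customer model.
# measure() and the sample values below are placeholders for real research.
from statistics import mean

def refine_estimate(measure, rounds: int = 5) -> float:
    """Repeatedly measure an attribute (e.g., task time or a satisfaction
    score) and fold each new observation into a running estimate."""
    observations = []
    estimate = 0.0
    for _ in range(rounds):
        observations.append(measure())  # measure / test / observe
        estimate = mean(observations)   # refine the model, then repeat
    return estimate

# Example with canned observations standing in for real measurements.
samples = iter([4.0, 3.5, 3.8, 3.6, 3.7])
print(refine_estimate(lambda: next(samples)))  # -> 3.72
```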

So, one of the first steps toward having agile documentation is having the ability to iterate (cheaply and quickly) and the ability to measure and understand your customers.

But, we all know that. Now, to figure out how to make that part of the process.