Better to light a candle

My write-up about the Hawaii false alarm left me a little unsatisfied because it didn't offer much in the way of actionable responses. Fortunately, better minds than mine came to the rescue today.

"What Healthcare.gov has to do with the Hawaii false alarm — and what to do about it" showed up in my Twitter feed from Code for America and offered some more immediate actions for those who want to help. In that article, Erie Meyer suggests:

  1. Usability Test Your Most Important Services
  2. Adopt the Digital Services Playbook
  3. Get Help

Usability Test Your Most Important Services

While that might seem obvious to many, apparently enthusiastic government lawyers have prevented usability testing in the past, according to an earlier post. Hopefully, the air is clearing on this.

Recently, our Tech Comm students at Mercer University have been usability testing the Department of Homeland Security’s website to help improve it. Every little bit helps.

Adopt the Digital Services Playbook

I hadn't seen this before, but the U.S. Digital Service's Digital Services Playbook is an excellent and easy-to-read resource for how to design a good service.

Everyone planning to launch a service or website should review it, whether they are a government organization or not. The U.S. Government offers a lot of great resources for digital and content design (honest!) at no charge.

Get Help

Erie’s article concludes with a long list of resources for people tasked with procuring government systems. Designers and developers can use those same resources to find out where they can help out and put their expertise and passion to work.

Hidden in this list is the disappointing observation that "Government contractors can do incredible work, but the work is typically not set up for success." Yet she continues with some supportive advice: "The key to getting the best out of a contract is to include usability testing and the digital service playbook in the statement of work, but also in breaking down the contract into the smallest pieces possible. 18F is doing groundbreaking work on this front and is happy to help" (emphasis mine).

Their problems are really our responsibility

In the end, it's really up to all of us as taxpayers. Erie quotes a tweet to that effect.

I feel better knowing that there are so many ways to actually make the situation better. Hopefully, these improvements, once adopted, will get as much press (or even a tenth of the press) as the Hawaii false alarm did, and it will become easier and easier to improve these vital services.

Hawaii’s alert was not the fault of a designer or operator

OK, enough with the unsolicited design updates to the Hawaii civil defense notification system, please! While the UI design of the operator's interface could be improved considerably, that's not the root cause of the problem.

For those who have been in a cave for the past week, a false alarm sounded through Hawaii last weekend, which attracted no shortage of design critique and righteous condemnation of an unquestionably atrocious user interface. The fires of that outrage only grew stronger when a screenshot purporting to show the UI was published (it later turned out that this was NOT the actual UI, but a mockup).

Jared Spool describes how such an interface came into being. Prof. Robertson of the University of Hawaii also offered this detailed outline of what likely happened. Since this post was first published, Vox has posted an interview with a subject-matter expert explaining how the system works in the national context.

The error was initially attributed to "human (operator) error," to which the UI design community (as represented on Twitter) responded that the system designers bear some (if not most) of the responsibility by designing (or allowing) an interface that made such an error so easy.

The article linked in the NNgroup tweet observes that, “People should not be blamed for errors caused by poorly designed systems.” I hope the appropriate authorities consider this when they determine the fate and future career of the operator who made the fateful click.

While NNgroup's article is titled "What the Erroneous Hawaiian Missile Alert Can Teach Us About Error Prevention," I would argue that it is preaching to the choir and not teaching its readers anything they don't already know. The article describes a list of design patterns that can make errors like the one in Hawaii less probable, usability 101 stuff, yet it misses the elephant in the room: the overall system in which the UI exists. If context is important, and it is, then the context of the whole system, not just that one interaction, must be considered before identifying any errors to fix or placing any blame.

Here’s this armchair quarterback’s view.

Giving criticism

It's both easier and harder than you think, but here are a few points to make it constructive.

Wait 'til you're asked

Sure, you can always offer unsolicited feedback, but waiting until you're asked gives you the advantage of knowing what they want and that they are ready to receive it. Someone who hasn't asked for feedback is probably not ready to hear it.

If, for some reason, you feel compelled to offer unsolicited feedback, make sure that you’re doing it for their benefit and not yours.

Understand the context

Even if you've been asked, understand whether someone is asking for feedback or for affirmation. I tend to ask this directly, which comes off as abrupt, especially when many people don't know for sure. A smoother way is to gain some context by asking more about the project to understand their motivation for asking.

Providing a critique when someone is seeking affirmation stings, no matter how kind and constructive you are. At the same time, an affirmation seems shallow when someone is seeking constructive criticism.

Know the goal

“What do you think?” is a common way to ask for feedback. If someone asks you this, ask them to be more specific. To give effective feedback, it helps to know how they plan to use it and how much they can change as a result. The best feedback is something that can be applied.

Be constructive and specific

If you see something that you think could be improved, be specific. It takes more time to consider and articulate, but is much more informative than vague observations and opinions.

Cite your sources

If your feedback is based on research, such as recent customer feedback, survey data, or some other research, cite it! Maybe you read or learned something the presenter hasn’t (or vice versa!).

Feelings come from you

Sometimes, you'll see something that you can't articulate. In that case, there are two options.

  1. You can wait until you figure it out.
    Sometimes it just takes time to bring your thoughts and feelings together into a coherent sentence. In that case, wait.
  2. Other times, expressing how you feel is the whole point of the exercise, but speak for yourself. Unless you've surveyed the audience, don't speak for them. "This makes me feel good" or "I like how you've combined the text with the image" are perfectly fine. "It looks annoying" really means that you find it annoying, which might bear pursuing, but it's framed as "everyone will find it annoying," which is not the case (unless you have some research on the subject, in which case, see the previous point about citing sources).

If done right, providing feedback and criticism can be a win-win interaction.

Asking for criticism

Today's Twitter gem was a Medium post from Mike Monteiro about the place for politeness in criticism, basically saying that politeness has no place in a design critique. Perhaps, but I think respect certainly has a place.

I agree with his premise that it’s “Better to get your nose bloodied in a critique of your peers, than to be slaughtered in a client’s conference room.” I disagree that a bloody nose is necessary “in a critique of [by] your peers.”

I'll admit that I've delivered some of the aforementioned bloody noses, a practice that I'm working hard to reform. And I've received a few, as well. In every case, the bloody-nose experience wasn't necessary, wasn't constructive, and, invariably, was the result of just doing it wrong.

If it hurts, you’re doing it wrong, or you’re doing the wrong thing (or, you’re just out of shape).

So, how do you get constructive and effective criticism? He mentions some steps in his article that I think bear repeating.

Get it early and get it often

Waiting until the last minute to get criticism is almost always asking for trouble. First, it's unlikely that the designer will have time to apply any of the suggestions, so they will just be frustrating at best and demoralizing at worst. In any event, it's a waste of everyone's time, and it contributes to the embarrassing experience in front of the client that he describes in the article.

It's more constructive and effective to get frequent, small-scale, actionable feedback throughout the process than to wait. This is not an either-or choice, but a continuum. Nevertheless, lean towards the more frequent end of the spectrum whenever you can.

Yes, we’re busy, but what goes around, comes around, and we’re all in this together.

Know (and state) the design scope and goal

The goal of the design might not be obvious. Likewise the scope of your involvement (and span of control) might not be obvious. To keep the criticism focused, keep the goals and scope of the design project visible.

Start by saying, for example, "I'd like you to review my redesign of the [xyz] home page to make it more accessible to an older audience. The changes include making the type easier to read, making the call to action more visible, and clarifying the client's value proposition. I'd like you to help me find aspects of the design that could be improved to meet those goals better."

With that, you've taken 60 seconds to focus the review.

The quality of answers is proportional to the quality of the questions.

If you find that you’re not getting the feedback you want, maybe you didn’t ask for the feedback you wanted. Don’t assume that everyone reviewing your work will know the goals of your design.

Thank them

A friend told me that “Feedback is a gift.” So, like anytime you receive a gift from someone, thank them!

Next up, how to give a helpful and respectful critique.

The post-hackathon hangover

Court Whisperer screen shot

Here it is, Wednesday, just four days after the hackathon, and my post-hackathon hangover is finally abating. Don't get me wrong, the hackathon was a great party. Like so many great parties, however, there's often a morning after to get past.

Taking stock

I’ve already chronicled the experience sufficiently, so I’ll take a moment to indulge my post-hackathon hangover. This reminds me of my timer project. I’m seeing a pattern…

A demo is not a prototype

As exciting as the hackathon was, the result was barely a prototype and, in all honesty, it was barely a demo. (It was a great pitch, though!) So, for all the accolades, there are a lot of problems that remain to be solved before the project is close to anything that could be deployed. Been there. Done that. Quite a few times, now.

That said (as the hangover fog begins to clear with the light of a new day…several days later), there were some valuable lessons to hang on to from the experience.

Less really is more

On top of our team's focus and cooperation, I would attribute much of our success in the hackathon to keeping the sample app and corresponding demo focused on their core values. Throughout the event, we explored various corners of additional features and context, which is good, but we also stayed focused on the goal and used those explorations to shape the final product rather than be included in it. Remember, the product of a hackathon is a pitch.

Less is more, until it’s less; and that’s the sweet spot.

I'm here to tell you that it is VERY DIFFICULT to throw stuff overboard to keep the ship afloat (one look into my garage will prove that point), but sinking due to an excess of good (and great) ideas doesn't help anyone (between you and me, building a larger garage doesn't always help, either).

Keep your eye on the prize.

There’s a need

The positive reception to the idea we pitched shows that it has some legs. The audience immediately recognized the need and saw the value in the solution we presented.

The importance and significance of that cannot be overstated. In other words:

Having a recognizable value IS HUGE!

There’s an opportunity

The post-hangover question to ask is, "How do we realize the opportunity?" The solution will cost something; it's not clear how much, though. The pitch and the overwhelmingly positive response from the audience will surely help appeal to and attract additional supporters: developers, designers, beta-testers, and the like. There might also be some interest in, and a market for, a similar product in commercial applications. So…

What’s next?

That remains to be seen, but, for now, we have a website.

Some kinda fun.

My first hackathon experience

Hacking at the hackathon

This past weekend (Nov. 6-7, 2015), I attended the Seattle Social Justice Hackathon sponsored by the Seattle University Law School. I detailed my experience in a longer post, but here are the highlights.

The theme of the hackathon, making justice more accessible, appealed to my latent activist persona (who doesn't get out as much as he should), and I felt that bringing together a collection of designers, developers, lawyers, and social activists would, if nothing else, make for an interesting evening.

I wasn’t disappointed.

Our challenge

The problem I was drawn to was finding a way to help self-represented litigants (SRLs), a.k.a. pro se litigants. There's a saying that "a lawyer who represents him/herself has a fool for a client." If that's true for lawyers, what about someone with no legal training? I know how difficult that situation is because I've been there and done that. I would learn that my experience was not uncommon, that an estimated 36 million people become pro se litigants each year in the U.S., and that they need help in many ways.

We formed a well-rounded team of six to tackle the problem: myself, the project owner/sponsor, a UX designer, and three developers. We dove into the problem and came up with a plausible technical solution in pretty short order, so the developers got started putting that together. As they worked, the sponsor, UX designer, and I started to understand the problem space in more detail.

The Court Whisperer is born

Our prize-winning app

Normally, I wouldn't recommend starting development before understanding the problem, but we understood enough of it and limited the scope of the technical solution to the part we understood well, so it worked out OK. Our understanding of the problem's context evolved throughout the course of the event, hitting an abrupt pivot when a subject-matter expert (a law professor, no less) pointed out that an app that offered forms could be considered to be providing legal advice. Without missing a beat, we changed the context of our solution from an app that offers forms to an app that opens forms and lets the user enter data by speaking or typing while on their phone.

Fortunately, this didn’t alter the scope of the technical solution and so the developers barely noticed the change.

Dodging that bullet, we continued. I was the team's PowerPoint wrangler, and after several sessions with "pitch coaches," I made a presentation with the fewest words ever (for me): seven (with a vocabulary of only four). This event was a first in more ways than one.

We won first place!

Finishing the presentation, including the demo, with something like 57 seconds to spare, we impressed the audience with our story of how the Court Whisperer would help the estimated 36 million SRLs who interact with the courts each year in the U.S.

Apparently we also impressed the judges, who gave our team first place; we'll proceed, along with two other teams, to another competition next January.

Aircraft headset adapter – reflection

The finished adapter in service.

Reflecting on my adapter project, I can't help but compare it with the timer project, even if that's not fair on many levels.

The project went pretty much as I expected. The circuit was simple and tested. There weren’t many known unknowns, but there are always the unknown unknowns.

On top of all this, I started the adapter project with a few more years of experience than I started the timer project.

It’s amazing how far in advance you can see the problems when you know what you’re looking for.

Unknown unknowns

They are what keep things interesting. In this case, it was the white Cat-5 Ethernet cable I used to connect to the radio. This cable has the double advantage of being inexpensive and having the connector already attached. It has eight conductors arranged as four twisted pairs. The catch is that the pairs make sense for the signals they are designed to carry, not for my application. Bottom line: the pairs are not connected in 1, 2, 3, 4… order, but as 3, 4, 5, 6, 1, 8, 2, 7. Not the order I expected (even after researching the cable).
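To make that concrete, here is a minimal sketch (in Python, purely illustrative, with names I made up) that records how the conductor order I expected maps onto the order this cable actually presented, using the sequence quoted above. Whether your cable matches depends on how the plug was crimped (T568A vs. T568B), so verify with a continuity tester before soldering.

    # Hypothetical mapping: the conductor order I expected (1..8) vs. the
    # order the pairs actually landed on this cable (from the paragraph above).
    expected = [1, 2, 3, 4, 5, 6, 7, 8]
    actual = [3, 4, 5, 6, 1, 8, 2, 7]

    # signal n (my numbering) ends up on actual[n - 1]
    signal_to_pin = dict(zip(expected, actual))

    for signal, pin in sorted(signal_to_pin.items()):
        print(f"signal {signal} -> conductor {pin}")

A table like that, checked against a multimeter before cutting anything, is the cheap insurance the next paragraph alludes to.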

"Measure twice, cut once" is how the carpenter's saying goes. For this project, it might have been better phrased as "measure twice, cut once, and then test two or three times."

Fortunately, no harm came to the project or the radio to which I connected and tested it (i.e. I was very lucky).

The biggest challenge was in the mechanical design, not the electrical design. I spent most of the time finding components and fitting them into as small a box as I could. Arranging the components on the circuit board so that the microphone-level adjustment potentiometer lined up beneath the headphone jack was another mechanical challenge, as was doing all of this before drilling holes in the rather expensive ($17) case.

Unfair comparisons

How did this project compare to the timer project?

  • Complexity: This had far fewer components and a much simpler function than the timer.
  • Tools: This project was designed and tested on a computer and built on tried-and-tested circuits, whereas the timer was built and its components fitted without any computer assistance.
  • Information: I was able to find electrical and mechanical information online to support the computer-based design and testing for this project; while similar information was available for the components used in the timer, it had to be applied manually (i.e., copied from a book).
  • Experience: Like I said, this wasn’t really a fair comparison. This was built with the help of many years of project experience.
  • Tooling: In this case, the comparison is actually fair in that both projects were fabricated with simple hand tools. It would have been nice to make a printed circuit board, make the holes with a drill press, and use something besides plumbers’ tape, but it wasn’t necessary.

So far, the adapter has been working as designed for the past two years, and still looks pretty good (even if I do say so, myself).

Aircraft headset to ham radio adapter

Aircraft headphone adapter mounted to ham radio

Catching up on long-overdue blog posts, here's a project I did a couple of years ago to adapt an aircraft headset to my Yaesu FT-857D ham radio. Aircraft headphones are optimized for the high-noise environment of an airplane: the earmuffs keep the ambient sound (noise) out and the microphone is designed to let only the pilot's voice in. While I haven't used my ham radio in a plane, the headset works well in other noisy environments.

OK, but why an adapter?

A little history

Aircraft communication systems had their formative years in the 1930s and 1940s, so much of the communication technology used in today's aircraft was designed to meet technical requirements established back in the '30s.

For this project, the microphone technology standards are what drive the requirements. The electrical standard for aircraft microphones is based on the carbon microphones used back in those formative years. Besides being available in the 1930s, carbon mikes are naturally noise cancelling and work well in an airplane. They also need some electrical current to work. This adapter provides the current necessary to make an airplane-compatible microphone work with a ham radio, and it adapts the connectors, since the headset's connectors are not compatible with those used by the ham radio.
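As a rough, back-of-the-envelope illustration of that bias requirement (the numbers below are assumptions for the sketch, not the values from my circuit), Ohm's law is all you need to size the series resistor that feeds current to the mic element:

    # Hypothetical bias calculation; the voltage and current values are
    # assumed for illustration and are NOT the values used in this adapter.
    v_supply = 8.0    # volts assumed available for mic bias
    v_mic = 1.5       # volts assumed across the mic element itself
    i_bias = 0.005    # amps (5 mA) of assumed bias current

    r_series = (v_supply - v_mic) / i_bias
    print(f"series bias resistor ~ {r_series:.0f} ohms")   # ~1300 ohms

In practice, a circuit like this typically also needs a capacitor to keep that DC bias out of the radio's microphone input; the schematics in the package mentioned below have the actual values.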

The project

The design project was more mechanical than electrical. The headphones (speakers) in the headset are electrically compatible, so only a connector adapter is necessary. The electrical circuit is a composite of several I found on the web and in QST, the ARRL's magazine, and requires only a few components. Nevertheless, the hard work was in getting it all to fit in a box. Unlike in my timer project, looks and durability were important design criteria here.

Some of the requirements:

  • The high-order bit: it had to adapt an aircraft headset to the ham radio.
  • The adapter case had to support the stiff headset connectors.
  • The case had to be as small as possible, but large enough for all the connections and cables.
  • The cables and connectors had to survive multiple connections and disconnections.
  • The case had to be removable but, when mounted to the radio, mounted securely.

The results

The end result came out rather well and has survived two years of domestic and overseas deployments. With any luck, it’ll survive as long as the timer project I described a while back.

HeadsetAdapterBox contains the mechanical drawings, circuit schematics, and parts list, if you're interested in building one. (Some assembly required; actually, ALL assembly is required!)

The inner workings of the finished adapter
My favorite design detail: positioning the microphone gain control (the blue square with a white circle, located in the center of the image) so that it can be adjusted without opening the case; it is accessible by inserting a long, insulated adjustment screwdriver through the headphone jack (the connector to the left of the white cable). Of course, after all this, it didn't need adjusting.
The finished adapter mounted and ready for service.

Update

This topic has received a lot of traffic lately, so I thought I'd add some additional links. I've not been able to find links to the ARRL references I mention above; when I do, I'll add them here. Also, I don't want to give the impression that I invented any of this. The circuit is very common. Mine derives from Aviation Headsets for Ham Radio, which in turn derives from Aviation Headset Connected to FT-897, posted a few years earlier. As I recall, one reference I found for this application was from the 1990s. I adapted these to my particular application, and, if anything, my contribution was more in the mechanical design than in the electrical circuitry. Along those lines, I will say that after the past few years of use, the industrial Velcro attachment method has proven to be quite reliable.

If I had to do it again, I would add an external speaker jack and switch. Plugging the adapter into the radio's headset jack disables the radio's speaker. That's great when you're operating by yourself, but when there are people huddled around you who also want to listen (and you don't want to give up the convenience and clarity of the headset), not being able to also send the audio to a speaker can make things awkward.

For your convenience, and to save you from rummaging through Google, here are some more links on the subject. You’ll notice they have a lot in common, with variations to accommodate their individual applications. Some of the pages were posted since my article, and they are in no particular order. They are here only to help you make the best adapter for your application. Enjoy!

Best practice…for you?

Last week, I saw a post on LinkedIn about a "new" finding (from 2012) that "New research shows correlation between difficult to read fonts and content recall." First, kudos for not confusing correlation and causation (although the study was experimental and did demonstrate a causal relationship), but the source article is an example of inappropriate generalization. To the point of this post, it also underscores the context-sensitive nature of content and why similar advice and best practices should be tested in each specific context.

Hard to read, easy to recall?

The LinkedIn post refers to an article in the March 2012 issue of the Harvard Business Review. The HBR article starts out overgeneralizing by summarizing the finding of a small experiment as, “People recall what they’ve read better when it’s printed in smaller, less legible type.” This research was also picked up by Malcolm Gladwell’s David and Goliath, which has the effect of making it almost as true as the law of gravity.

Towards the end of the HBR article, the researcher tries to rein in the overgeneralization by saying (emphasis mine), "Much of our research was done at a high-performing high school…It's not clear how generalizable our findings are to low-performing schools or unmotivated students." …or perhaps to people who are not even students? Again, kudos for trying. Further complicating the finding stated in the HBR article, the study's results have not been reliably replicated in subsequent studies, other populations, or larger groups. I'm not discounting the researcher's efforts; in fact, I agree with his observation that the conclusions don't seem to be generalizable beyond the experiment's scope.

Context is a high-order bit

All this reinforces the notion that when studying content and communication, context is a high-order bit¹. Because it is a high-order bit, ignoring it can have profound implications for the results. Any "best practice" or otherwise generalized advice should not be considered without its contexts: the context in which it was derived and the context into which it will be applied.

This also reinforces the need to design content for testing–and to then test and analyze it.



1. In binary numbers, a high-order bit influences the result more than any and all of the other lower-order bits put together.
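A quick way to see that (generic binary arithmetic, nothing specific to this post): in an 8-bit number, the top bit is worth 128, while all seven lower bits together are worth only 127.

    # The high-order bit of an 8-bit value outweighs all the lower bits combined.
    high_bit = 0b10000000          # 128
    all_lower_bits = 0b01111111    # 64 + 32 + 16 + 8 + 4 + 2 + 1 = 127
    print(high_bit > all_lower_bits)   # True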

What I learned by building a project from scratch – Epilogue

v1.0 of my camera self-timer

My last two posts described a project that I took from conception to working prototype.

Post 1     Post 2

Finding it in my garage gave me the chance to reflect on the design process through the lens of historical recollection. While historical recollection isn’t the most accurate process, given how memory works (or doesn’t), the self-timer I found in the garage is really just an artifact to focus my reflection.

Reflection

The first thing I realized was what I didn't realize at the time. What I do recall from the time is how each subsequent step (idea, design, working demo, prototype, and beyond) seemed to require about an order of magnitude more effort than the one before:

  1. The idea? About 5 seconds.
  2. The sketch? 5 minutes.
  3. The design? 5-8 hours.
  4. The working demo? A couple of days.
  5. The prototype in the photo? About another week.

That ratio has been reinforced in my subsequent experience as a software developer and even as a writer. What I didn't appreciate at the time (or for a long time after this project) is how often I would let myself be convinced that this wasn't going to be the case in this project (whatever "this project" was at the time).

Only experience would reveal that.

Lessons

Looking back on this project as a way to learn the lessons I listed in the earlier posts, I wonder if I could have learned those lessons at the time–while building the project. Maybe. What I realize now is that I didn’t recognize the significance of what I learned during the project.

Feature creep? It happens all the time, and avoiding it requires constant vigilance and focus on the goal.

The illusion of a prototype/demo being the final project? Another element that has a profound impact on the perception of the project, one that can work for or against you, depending on the circumstances.

“…but it looked good on paper…” Definitely a lesson on the importance of understanding how implementation details can pull the finished project away from the original design.

The problem with these lessons is that their impact is often felt long after the pivotal event. Feature creep? It always looks like a good idea when nothing costs anything (i.e., it's just another line on the drawing or another bullet point in the spec). However, when the release is delayed due to integration issues that result from that extra bullet point, the fingers point at implementation, testing, etc., everywhere but back to the moment in the conference room when the bullet point was added, further masking its effect.

Agile methodologies

Since this project, I've learned a lot more about projects and project management. Looking back on it now gives me the perspective to appreciate how much Agile methodologies do to shorten the distance between each of the events listed above and their effects.

Maybe that’s the lesson to take away from the reflection?