Tuesday, June 27, 2017

Information is gushing toward your brain like a fire hose aimed at a teacup. (Scott Adams)

I think we can all identify with this one. I mean, honestly, how many emails do you get in a day?

The real question, though, is whether you can identify with your users in this regard. How complex is your website, or your app, or your software? Does your system simply add to that flood of information? Are you part of the problem? Or have you made an effort to be part of the solution?

Now, as usability engineers, we have no shortage of opportunities to help out our company’s users in this regard. We also, however, need to remember our own users – the project teams we run tests for. In particular, there are a number of things we can do to make our primary deliverable to them – our reports – more effective. 

Our main goal here should be to take the fire hose of information that is the result of a usability test and turn it into something more like a teacup’s worth. Now, the first thing to realize about this is that it is not easy. It can be very tempting to simply repackage that fire hose’s worth of stuff. It takes a lot of work to figure out what the flood of information actually means and boil it down to something a lot more consumable.

One thing I’ve found very useful is something an art director at a former employer shared once in a presentation about how to create better presentations. I forget the exact numbers, but it was something along the lines of 10-20-30 – meaning 10 pages, 20 words per page, and 30-pt font. I do pretty well with the last two, but found that I did have to double that first one. That said, when I subtract all the title pages, appendices, and background info, it actually does come pretty close to the 10.

Here are some other things you can do to make your reports more easily processed:

  • Include an executive summary
  • Separate out methodological stuff and boil it down as much as you can (other than other usability engineers, very few people seem to care)
  • For results, try to include only one idea per slide
  • Use a simple, distinct, consistent layout – make it easy for readers to identify your finding, evidence (quotes, pix), and solution

Whatever you do, just be sure you can never be accused of “do as I say, not as I do.”


Scott Adams, creator of Dilbert, always had a particular appreciation for usability problems

Friday, May 12, 2017

A foolish consistency is the hobgoblin of little minds. (Ralph Waldo Emerson)

Now, this one is another maxim for more advanced clients (as is this one). The challenge for less experienced clients is to recognize that consistency is actually quite a good thing. They’re the ones more likely to have a blue button on one page and an orange one on another, to have title case for one field and sentence case on the next, to have the call to action at the bottom left on one screen and the bottom right on the following one …

Unfortunately, once they really learn the value of consistency, these folks now have to start dealing with exceptions to the rule. Note, though, that this is a good problem for you to have. A rigidly consistent, even overly consistent, system is a ton better than a system that is just total chaos.

As an example of the former, I just had to deal with a consistency problem involving menus. My company offers multiple products – bank accounts, mortgages, car loans, etc. The menu for each of these is typically a list of the actual products, plus some subsidiary and educational material, with these three areas marked out as sub-menus.

We are currently getting into an additional product area, and came up with a menu that includes 1 actual product, 1 bit of subsidiary material, and 1 tutorial. Needless to say, this is quite different from our other product areas, which typically include multiple examples of each type. Even so, some of the team still wanted to use the same organizing scheme – which basically amounted to three one-item lists.
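
If it helps to see that trade-off spelled out, here is a minimal sketch of one way to express the idea – all the names, types, and labels below are hypothetical, not our actual navigation code: keep a sub-menu heading only when it actually groups more than one item, and otherwise let the lone item stand on its own.

    // A minimal sketch only – group names, labels, and the one-item threshold
    // are hypothetical, not our actual navigation code.
    type MenuItem = { label: string; href: string };
    type MenuGroup = { heading: string; items: MenuItem[] };

    // Keep a sub-menu heading only when it actually groups more than one item;
    // otherwise let the lone item stand on its own in a flat list.
    function buildMenu(groups: MenuGroup[]): (MenuItem | MenuGroup)[] {
      return groups.flatMap<MenuItem | MenuGroup>((group) =>
        group.items.length > 1 ? [group] : group.items
      );
    }

    // A new product area with one item per category renders as a flat,
    // three-item menu instead of three one-item lists.
    const newProductArea: MenuGroup[] = [
      { heading: "Products", items: [{ label: "Starter account", href: "/starter" }] },
      { heading: "Tools", items: [{ label: "Rate calculator", href: "/calculator" }] },
      { heading: "Learn", items: [{ label: "Getting started", href: "/tutorial" }] },
    ];
    console.log(buildMenu(newProductArea)); // three plain items, no sub-menu headings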

Another common problem involves parallel lists – i.e., lists whose items all take the same form. I’m a huge fan of these, but sometimes parallelism just doesn’t work. For example, you might have a menu with a bunch of tasks. It makes a ton of sense, then, to make these all start with action verbs – “edit,” “delete,” “create,” etc. Sometimes, though, these menus will also include some support material. But changing that from, say, “help” to something like “Getting Help” or “Understanding how to use …” is just wordy, unnecessary, and possibly confusing.

So, here are some questions you can ask the overly consistent to get them to focus, not just on following the rules, but on knowing when to break them as well:

  • Are these things truly the same? 
  • Is there a good reason to make this one stick out a little more? 
  • How will the user perceive them?
  • What do we accomplish by making these consistent?
  • What confusion, awkwardness, or other unintended consequences might we cause as well?

By the way, though, do not use this quote! I have a funny feeling that might not go over that well.


I’m thinking this one might have been Photoshopped

Thursday, May 11, 2017

Biased questions lead to biased answers. (Anonymous)

To be honest, I think – at least when it comes to usability testing – all questions lead to biased answers. 

As I was just telling one of our new usability engineers the other day, a usability test is not an interview. It’s supposed to be task-based. And what that means is that, other than prepping the user for think-aloud and giving them a scenario, you need never say another word. It’s all about the user completing the task and vocalizing what they are doing and thinking. The perfect test is one where you just sit there and take notes.

Now, though I do get one of those every once in a while, imperfect tests are much more common. Users only rarely make it that easy for me. Most of the time, I have to work a little bit harder to earn my pay.

Indeed, there is no shortage of times when you have to say something. Most often, the user simply fails to do the think-aloud. I think all usability engineers are familiar with the classic “What are you thinking?” just for those situations. Variations on this include “What just happened?” “What’s going on?” “Is that what you expected?” and so on. And, once users do start talking, a simple “uh-huh” is usually enough to keep them going.

Even if users are doing the think-aloud, though, sometimes that’s not enough. Frequently, they may leave thoughts hanging. They might say, “That’s a nice …” or “I don’t know if …” or “How did that …?”  Because we humans hate to leave anything incomplete, simply repeating what they just said will usually prompt them to complete the thought.

You can use a similar trick to get users to elaborate on vague comments like, “That’s not good,” or “I don’t like that.” Simply repeating their phrase will get them to almost magically add the “why” (and in a way that sounds a lot less accusatory than simply asking them why).

All in all, the fewer questions the better. And, if you do have to throw in a few, make it so they don't even sound like questions.


A lot of this advice came from an excellent talk Judy Ramey (at the University of Washington) gave at a UXPA conference many years ago

Wednesday, May 3, 2017

It’s not your job to guarantee a happy ending. (Philip Hodgson)

I like being a critic. It’s kind of fun pointing out everyone else’s foibles. And let me tell you, being a usability engineer is a great way to do that. Other people’s errors are on parade before you in test after test after test. Great fun!

Seriously, though, I’m actually not really a sadist. In fact, I’m known in particular for making a special effort to be positive when reporting results. I figure I’m just accounting for basic human nature here – nobody wants to hear their baby is ugly. And if you do need to broach the subject of some possible unsightliness, well, it’s generally a good idea to have some sugar to go along with that medicine. In general, I’d much rather be, not the scold who everyone hates and ignores, but the nice guy who actually has some valuable advice that might be worth listening to every once in a while.

Now, it is pretty rare that I get ugly babies with absolutely no redeeming qualities. Almost everything has something good to be said for it. So, that part of it isn't really that hard.

That said, there are times when you do have to communicate that, yes, we do indeed have a problem here, Houston. But even in those situations, there are still some things you can do to soften the blow.

One of these is to make sure that your client is aware that there might be a problem as testing progresses. In other words, don’t wait until the report out to bring attention to serious issues. No one likes surprises like that.

So, one thing you’ll want to do is make sure you debrief after every test. You can also send out topline reports – not full reports, but quick summaries – at the end of each day. Finally, you can also get your team to see if they can come up with a solution to any serious issues, say, midway through the test. (In fact, overcoming a problem can feel like an even more positive experience than simply having no issues pop up.)

Another thing I’ve found helpful is to allow the client to vent a little. I just try to put myself in their shoes (I know I’m a venter myself), and try not to take it too personally. Easy to say, but it really does take a little practice to get comfortable with.

Along similar lines, you’re also going to have to make sure that your data is pretty well buttoned-up. They say the best defense is a good offense, and I’ve seen plenty of clients who really go on the offensive when they hear something they don’t want to hear. In those situations, once again, I counsel remaining cool and calm as you fend off attacks on your methodology, users, prototype, personal integrity, whatever.

A final thing you can do is pick your battles. It’s pretty rare for me to fall on my sword over an issue. And that probably is something that just comes with experience. After doing this for 30 years, I know that it’s rare for something to be a real show-stopper. But there definitely have been some cases, over the years, where data from tests I ran caused products to be pulled, releases to be moved out, or projects to be shut down.

Just be a little mindful about how to communicate results like those.


Philip is the owner of Blueprint Usability

Tuesday, April 18, 2017

Whereof one cannot speak, thereof one must be silent. (Ludwig Wittgenstein)

How tempting it can be though.

Imagine you’re in a report-out, and your client is eating up everything you have to say. You’ve really got them in the palm of your hand.  Why not just share that pet theory you had? Now, you can really only tie it to that one user. And their quote was actually pretty non-committal at that. But, heck, why not? You’re on a roll! Share this gem, and they’ll really love you.

On the other hand, there are much less positive scenarios as well. For example, one of your clients might have a great question, but – unfortunately – you don’t really have any data. Maybe it never occurred to you to collect it, and so it never made it into your test script. Perhaps you neglected to probe when the situation came up. Maybe you didn’t recruit enough of that particular type of user to really be able to say.

In fact, that last scenario is something I face all the time. Everyone needs to remember that we are dealing with qualitative data here – and the small number of people that qualitative research typically involves. Now, those numbers might be fine to show some usability mishap (a mislabeled button, a hidden link, a missing step), but when it comes to things that are more preferential, it can be hard to really say when all you’ve got is a couple of data points.

Another issue that sometimes comes up is when it’s just six of one, half a dozen of the other. In other words, there’s data for one side of an argument, and there’s data for the other. Now, you’ve most likely got a client with strong feelings in one direction (heck, you might have some yourself). So, they’ll probably feel just a little bit cheated: “Hey, I paid you all this money. I expect to see some real results. How am I gonna make a decision now?”

Basically, all it really comes down to is how comfortable you are saying, “I don’t know.” Interestingly, though, I’ve found that that will actually generate more respect for you in the long run.


And, yes, I know I’m taking this quote totally out of context  ;^)

Wednesday, March 1, 2017

The perfect is the enemy of the good. (Voltaire)

Usability is not physics. It is not a pure science. 

Heck, even though I call myself a “usability engineer,” I know what I do is honestly pretty iffy engineering. And I should know – I put up with real engineering school for two years before calling it quits.

What usability does, however, have in common with “real” engineering is a focus on practical solutions and on real data. Now, there was a time when that data was pretty darn hard even for usability – basically, error rates and time on task. Usability engineers found, though, that that hard data was lacking an important component. That data really didn’t tell anyone why users failed, why things took so long, what project teams could do to make things better. Softer, more qualitative data, however, did.

So, you may run across clients who still insist on that hard data, especially if they have a quant background (for example, from business or marketing). In that case, you have to sell the richness of qualitative over the certainty of quantitative. And for some clients, you will definitely have to overcome the idea that qualitative data is less pure, less perfect. In those situations, I like to emphasize what we do with the data – in particular, that soft data can be a lot more actionable than hard. (It also typically eliminates the conjecture that comes once teams have gathered their hard data and then have to interpret what it means and come up with solutions.)

A similar issue usability engineers have to deal with has a lot to do with the numbers thing. I cover that in “Not everything that counts can be counted, and not everything that can be counted counts” (which is a quote from Albert Einstein).

Finally, there is the issue of the perfect test. And I’ve talked about that before in, “The test must go on” (I’ve got Laura Klein down for that one). 

Ultimately, the final decision can come down to running an imperfect test or never getting around to running that perfect one. And we all know that there's nothing perfect about the latter.

Usability is really the art of the possible.  We do what we can.  Like I tell my clients, give me what you’ve got, and I’ll be sure to get you something of value in return.


But then again, there’s this!

Thursday, February 9, 2017

Unfortunately, it's hard to differentiate in the software between someone who wants to go on a voyage of discovery, and someone who just wants to open a file. (Peter Flynn)

Now, what’s sad here is that I can almost guarantee that your design team (or marketing partners or senior execs) will err on the side of the former. It can sometimes be very hard for them to realize that this thing they’ve worked on, thought about, and agonized over for months, if not years, is really just a means to an end, a tool that users employ with little actual regard for the tool itself. 

Unless, that is, the tool was designed for some other purpose than to help those users achieve their goals … If, for example, it was designed with someone’s portfolio in mind, or to impress the division manager, or to get recognized in some magazine or on some website. Now, this will draw some attention to your tool. Unfortunately, at least when you’re talking about your users, that will almost always be attention of the negative kind. 

In general, users want tools that don’t draw attention to themselves. To them, your UI would be best if it were totally transparent, even invisible. 

And if your UI needs lots of training, that’s even worse. Note that that includes traditional kinds of training like manuals and videos, and more up-to-date, subtle means like coach marks and what’s-new content.

Now, of course, there are certain user types who do like to go exploring. These users are often techy types, and sometimes really do want to learn the tool and develop a mastery of it. Good for them! I’m not sure we need to develop our system around them though. Perhaps if we just offered some option so that they could go on that voyage without forcing everyone else to. Maybe a link to a tutorial, maybe an expert version of the software …

The important thing, though, is to concentrate on the user and their goals, instead of on the tool. 


Peter is at University College Cork, one of the better schools for usability in Europe