Wednesday, December 13, 2017

If you have no critics, you’ll likely have no success. (Malcolm X)

I never thought I’d be quoting Malcolm X in a blog about usability. In fact, I originally had no idea I was quoting him. I had found this quote in a fortune cookie, of all places. It was only when I Googled the quote that I found that it was from the father of Black radicalism.

It is, however, a quote that can be applied to many different fields. Within usability, I see it applying in a number of situations.

Most obviously, this sounds like something I’d share with my project teams. I’ve found, over the years, that good designers can’t wait to get real user feedback. They tend to have thicker skins, and can roll with the punches. And I like to point that out, and to congratulate them on their ability to do so.

Needless to say, we can also turn the tables on ourselves. Once again, I’ve found that the better usability engineers are the ones who are always learning things and looking for a better way to do their jobs. They tend to practice what they preach, and let humility be their guiding principle. But this probably just comes from being a social scientist. I’ve found that, in every test I run, I learn something new – whether that’s about computers, people, or myself.

Finally, I think this maxim applies to acceptance of usability in general. Having been in this profession for 30 years allows me to take the long, historical view. In particular, I remember way back when the resistance came from the techies (a time which Alan Cooper so brilliantly captured in the title of his book The Inmates Are Running the Asylum). 

Next, it seemed to come from graphic designers. Though they did us a service by adding more pleasure and delight to the user experience, they tended to divorce those qualities from more practical considerations like usability and company profit. Jakob Nielsen, for some reason, seemed to bear the brunt of a lot of their disapproval.

Lately, it seems resistance is coming from the business side. In particular, I worry that all a site is these days is a sales or purchase funnel, where users are hustled along without any time to explore or ask questions – to me, at least, the online equivalent of a used car lot. Business does have the numbers on their side – with analytics, big data, and A/B testing – but I do worry that they sometimes may be missing the forest for the trees.

And then there's Agile ... Not only does this seem like the return of the techies, but it seems like, this time, they've teamed up with the business side against us.

Ah well, it’s always something, or somebody, isn't it? I actually think the dialectical nature of all this is good for usability. It shows that we can adapt, incorporate other viewpoints, and even act as a mediator sometimes.

 



And you thought I was making that up

Thursday, November 9, 2017

Web users ultimately want to get at data quickly and easily. They don’t care as much about attractive sites and pretty design. (Tim Berners-Lee)

Wow! I never thought I’d be disagreeing with Sir Tim Berners-Lee. But there you go …

Now, there was a time when I would have heartily agreed with him. And my guess is that this quote is probably from a long time ago as well.

Yup, it’s hard to believe, but there was a time when information was paramount. The Internet was simply a place where you went for data, plain and simple. I think the idea was that having all the information in the world at your fingertips was enough. Anything that could possibly get in the way – and that included design and aesthetics – did indeed get in the way, and really shouldn’t be there.

(You can still see that approach in sites like Craigslist, the Drudge Report, and even Jakob Nielsen’s Alertbox. In fact, Jakob just put out an article on what he calls “brutalist” web design – borrowing the term from architecture – discussing this very style.)

Fortunately or unfortunately, something called eCommerce then occurred. Yup, companies started using the Internet to sell things. (It wasn’t just random people and organizations dumping data there anymore.) Further, those companies wanted to use all the methods at their disposal to convince you to buy their stuff. 

At the same time, web coders started introducing all sorts of techniques that could make websites less like pure HTML and more along the lines of print or TV or whatever the client desired. Finally, there were also studies showing that users preferred more aesthetically pleasing designs. In fact, in comparative tests of the same site in a “brutalist” style and a more aesthetically pleasing one, users actually judged the more aesthetically pleasing version to be more usable as well.

The age of the graphic designer was at hand (and, honestly, we’ve never really looked back). Unfortunately, some of those graphic designers got a little carried away. Given free rein, these folks went a little overboard and started making sites that were not only attractive but “different,” “fresh,” and even “challenging” as well. In other words, the design had ceased to help the user and the business achieve their goals and instead interfered with them, becoming something of an end in itself.

Now, here’s the thing … Usability and aesthetics need not be in conflict. In fact, a really great design will have them working hand in hand, seamlessly, to help the user and the business (if not necessarily the graphic designer) meet their respective goals as efficiently and effectively as possible.


Hard to believe, but I think that screen was Photoshopped!

Monday, October 9, 2017

The most valuable question could be the one that’s not asked. (Anonymous)

And that’s why I’m not so crazy about surveys.  Sure, you can include an Other field and, yes, you can test your survey out beforehand.  Heck, you can even do some qualitative research (e.g., a focus group) upfront to help guide the creation of your survey.  But there will still be plenty of things you miss, for example:
  • Respondents may not be able to express their issues in words.  Believe me, “This sucks!” and “I hate this!”  are not going to provide you with a lot of actionable data.
  • They may also have gotten so used to work-arounds that they might not think to ever bring up the issue the work-around is for.  Along those lines, they may also have no idea that something could even be better.  
  • More importantly, they may simply fall prey to the limits of human memory, not remembering, for example, that terrible issue that came up last month and that they’ve successfully managed to totally repress since then.  
  • Finally, you simply cannot assume – no matter how diligent you’ve been – that you have encapsulated their whole universe.  Unless you are a user yourself – and your team, further, represents every persona for that user out there – there will still be plenty of things you just can’t possibly foresee.

How to get around this problem?  Well, why not go straight to the users themselves?  In particular, why not let the user tell you – or, even better, demonstrate – how they feel, think, and behave?  That’s where in-depth interviews, usability tests, and ethnography come in.

Yes, I do typically have a list of things I want to cover when I do these kinds of studies.  What I’ve found, though, is that, if it matters to the user, they’ll be certain to let you know.  And this will come, not from your asking about it, but from them volunteering – either free-form or prompted by some specific task they’re doing.

Now, if something does not come up – and my team really wants some feedback on it – I will probe.  I always tell my team, though, that this is really an instance of a bird in the hand being worth two in the bush.  In other words, if the user brings up or uncovers something on their own, that’s going to be a lot more valuable than if I force them to come up with a response.  

In fact, I usually tell my clients that there is a definite hierarchy of feedback, based on how the feedback came about:
  1. Independently, through a task
  2. Independently, through interviewing
  3. Through probing, but only at the highest level (e.g., “How did that go?”)
  4. Through probing a little deeper (“Any thoughts on page X?”)
  5. Through direct questioning (“What did you think of field Y?”)

Note that I would never go any further than that last instance.  In other words, “Did you understand what that was for?” “Is that link prominent enough?” and “Is that the best place for that button?” are simply leading the witness, and I won’t stand for them.

Now, do surveys have a purpose?  Why, of course they do.  If it’s numbers you’re after, surveys are up there with web analytics and A/B testing.  Note, though, that quantitative methods may all lack two very important things:
  • The ability to follow up
  • The ability to get at the real reason why

And that’s why I usually recommend triangulation – quantitative and qualitative, surveys and tests, analytics and interviews …  And, believe me, that should about cover it all.




Thursday, September 7, 2017

A wealth of information creates a poverty of attention. (Herb Simon)

Information density is one of my favorite issues. I’m pretty sure it crops up on every test I run.

And the particular problem I run into is typically the one that Simon points out here. If I had a dollar for every time I’ve written that n users couldn’t find x, and that we should “consider simplifying the UI,” I’d be a (very slightly) richer man. 

It’s also pretty obvious where it comes from, too. Marketeers and execs are famous for wanting to provide as many features as possible, while also making sure that everything is just one click away. Faced with this kind of “logic”, I try to point out the total fallacy of the old one-click rule and the primacy of information scent, but it can sometimes be a struggle.

I also like to point out that simpler interfaces are just easier to use. I think, these days at least, most people do get that. I only need to point to Google, or apps, or Nest, or even Tinder to make my point.

I also, though, like to get them to think about the user’s experience as a whole. And that means not just whether users see each marketer or exec’s particular pet feature. 

I try to get across that a little here, a little there amounts to something I like to call “additive design,” a guaranteed way to get a system or site or app to sink under its own weight. I try to get them to consider that, as Shlomo Benartzi has written, “every new feature comes with a cost.”

Finally, I also like to tie in what I call the simple, basic “yuck factor.” My users are great at providing me with ammunition for that. If I had a dollar for every quote where a user basically says “TMI!” …  Even better, I’ve found that these quotes are typically some of the most pithy ones I get.

What I really try to get across to my clients in this instance, though, is that this visceral reaction on the user’s part has some very real consequences. In addition to making things harder to find, it oftentimes makes the user not even want to try, and just give up. And that – and clutter in general – can have a major negative impact on brand perception. And we all know how worked up marketeers and execs can get about that.


Herb Simon was quite the guy. A Nobel Prize winner, he also won the Turing Award, and made major contributions to psychology and sociology as well. He’s also the father of the “think-aloud protocol,” the basis of pretty much every usability test that's run. I was lucky enough to have met him in grad school.

Wednesday, August 30, 2017

For every complex problem there is an answer that is clear, simple, and wrong (H.L. Mencken)

A/B testing sounds like a pretty darn good idea.  In fact, on the face of it, it seems to have some real advantages over good, old-fashioned usability testing:

  • Numbers – If you’re working on a well-trafficked site and on a well-trafficked part of that site, you will probably be getting results in the thousands. Now, compare that to the 7 to 12 users you might get on a usability test. (For a sense of what those numbers actually buy you, see the sketch right after this list.)
  • Genuineness – A typical beef with usability testing is that it’s artificial – the users are in a lab, being observed, and typically doing some role playing.  In A/B testing, though, they are actually using your site, to do things they want to do, and without ever knowing they are being “tested.” 
  • Relevance – A/B results are typically tied to specific metrics.  These, in turn, are tied to specific business goals – account openings, purchases, downloads.  In other words, this stuff can have a genuine effect on the bottom line.
  • Concreteness – There’s a joke among usability engineers that the profession’s motto should be, “It depends.”  We sometimes don’t realize how frustrating this can actually be, though, for our clients who are more business- or numbers-oriented. A/B testing, on the other hand, is definitely something that speaks their language. 
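
As an aside, it’s worth seeing what those impressive numbers actually involve. Below is a minimal sketch – in Python, with completely made-up conversion counts, and a function name of my own invention – of the standard two-proportion z-test that sits behind a typical A/B comparison. Note that even with 5,000 visitors per arm, a full percentage point of lift can still fall short of significance at the usual 0.05 level:

    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Z statistic and two-sided p-value for the difference
        between two observed conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        # Pooled rate under the null hypothesis of no difference
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value via the normal CDF (math.erf)
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical week of traffic: 5,000 visitors per arm, 8% vs. 9% conversion
    z, p = two_proportion_z(400, 5000, 450, 5000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 1.79, p ≈ 0.074 – not significant

So yes, the numbers are a genuine advantage – but they aren’t a blank check.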

At the same time, however, A/B testing also has plenty of possible issues:

  • Comparative only – A/B testing is a comparative method only.  In other words, you need 2 things.  But what if you only have 1?  Does that mean you’ll have to come up with a straw man in that situation?  And what if the real solution happens to be C, which was never tested at all?  (Usability testing doesn’t have that liability, and can also be used for comparative testing as well.)
  • Finished product – Though A/B testing is great for getting feedback in real situations, that also means you can’t try something out before it gets to a polished, ready-for-prime-time state.  (In usability testing, you can get feedback simply using pieces of paper.)
  • Emphasis on details – A/B testing tends to focus on smaller issues, like button size or color or location.  Great stuff to know, but a pretty incomplete picture.  Who knows, maybe there were things that weren’t even considered that could bump up conversion even more.  How would you ever know?  (Usability testing looks at whatever causes users problems, whatever that might happen to be.)
  • Cumulative effect – Because A/B testing often means tweak this here, tweak that there, attention isn’t always focused on the overall or cumulative effect.  Yes, that marketing tile was effective on the home page in week 1.  And, yes, that call-to-action that was added in week 6 worked well too.  Does that mean, though, that we can keep adding whatever we want as the weeks go on?  I am actually familiar with a site that did just that.  And, as it is right now, it’s also just about ready to sink under its own weight.

  • Short-term focus – As illustrated above, A/B testing is often very much about short-term effects.  Now, that fits in very well with an Agile methodology (which often relies on A/B), but that approach might also backfire in the long run.  How, for example, will that cluttered homepage impact conversion down the road, or overall?  
  • Scalability – Along the same lines …  So, increasing the size of a button bumped conversion up by 2%?  That’s great!  So, why not just bump it up again?  In other words, how can we tell when we’ve passed the line of too much of a good thing?  Heck, why should we even really care?
  • Neutral results – A lot of the press around A/B testing tends to focus on dramatic results.  From what I understand from people who are really doing it, though, typical results tend to be more along the lines of an “eh” and a shrug of the shoulders.  Now, was all that effort and expense really worth it for that 0.01% move on the needle?  (For a sense of just how much traffic it takes to even detect a move that small, see the sketch after this list.)  Even worse, what if both designs tested equally badly?  What other data would you have in that situation to come up with something new and different?
  • Effect on standards – One particular kind of dramatic result seems to be when the findings break conventional thinking, or go against established standards.  Now, that’s pretty fascinating, but what exactly do you do with it?  Does that invalidate the standard?  Is there something about this particular instance that would be a good example of “it depends”?  Do we need to test every instance of this particular design (which is what I’m thinking the vendor might suggest)?

  • What happens next – A/B testing focuses solely on conversion.  As I mentioned above, that can be a good thing.  At the same time, though, conversion doesn’t come close to describing the totality of the user’s experience.  What if the user successfully signed up for … the wrong account?  What if the user successfully purchased the product … but did so offline?  What if they downloaded that newsletter … but never read it?  What if they signed up for your service ... but it was such a painful process that they really don't want to come back? Unfortunately, siloed marketeers often don’t care.  Just getting the user “through the funnel” is typically their only concern.   How the whole experience might affect, say, brand is something else entirely.  To quote Jared Spool, “conversion ≠ delight.”
  • Unmeasurables – Those brand concerns above hint at this one. Note, though, that unmeasurables can be a lot less lofty as well.  I, for example, do a lot of work with online banking systems.  Now, these people are already customers.  What key business metrics might they align with?  Actually, they typically have their own, very diverse goals.  They might be all about checking their balance, or looking for fraudulent transactions, or paying their bills on time.  All we really can do is support them.  Indeed, there are tons of sites out there that are more informational or that involve user goals that are vaguer or might not directly align with business KPIs.
  • Why – Perhaps the most important drawback with A/B testing, though, is a lack of understanding why users preferred A over B.  I mean, wouldn’t it be nice to learn something from this exercise so that you can apply it, say, in other situations?  Honestly, isn’t understanding why something happened a necessary precursor to understanding how to improve it?  Unfortunately, A/B testing is all a bit of a black box.  
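
To put that “neutral results” point in perspective, here’s a rough sketch of the usual normal-approximation sample-size formula – again in Python, and again with hypothetical numbers (an 8% baseline conversion rate, a two-sided test at alpha = 0.05, 80% power; the function name is mine, purely for illustration). Reliably detecting a 0.01-percentage-point move turns out to take on the order of a hundred million visitors per arm:

    import math

    def sample_size_per_arm(p_base, lift):
        """Approximate visitors per arm needed to detect a lift
        from p_base to p_base + lift (alpha = 0.05, power = 0.80)."""
        z_alpha, z_beta = 1.96, 0.84   # two-sided 0.05 / 80% power
        p_new = p_base + lift
        variance = p_base * (1 - p_base) + p_new * (1 - p_new)
        return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

    # Detecting a 0.01-percentage-point lift on an 8% baseline:
    print(sample_size_per_arm(0.08, 0.0001))  # ≈ 115 million visitors per arm
    # Versus a full percentage point:
    print(sample_size_per_arm(0.08, 0.01))    # ≈ 12,200 visitors per arm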

A/B testing is basically only binary feedback.  You essentially get a thumbs-up or thumbs-down.  But perhaps there’s more to it than that.  Perhaps it does after all depend …


H.L. Mencken working on some early man-machine issues

Tuesday, June 27, 2017

Information is gushing toward your brain like a fire hose aimed at a teacup. (Scott Adams)

I think we can all identify with this one. I mean, honestly, how many emails do you get in a day?

The real question, though, is whether you can identify with your users in this regard. How complex is your website, or your app, or your software?  Does your system simply add to that information? Are you part of the problem? Or have you made an effort to be part of the solution? 

Now, as usability engineers, we have no shortage of opportunities to help out our company’s users in this regard. We also, however, need to remember our own users – the project teams we run tests for. In particular, there are a number of things we can do to make our primary deliverable to them – our reports – more effective. 

Our main goal here should be to take the fire hose of information that is the result of a usability test and turn it into something more like a teacup’s worth. Now, the first thing to realize about this is that it is not easy. It can be very tempting to simply repackage that fire hose’s worth of stuff. It takes a lot of work to figure out what the flood of information actually means and boil it down to something a lot more consumable.

One thing I’ve found very useful is something an art director at a former employer once shared in a presentation about how to create better presentations. I forget the exact numbers, but it was something along the lines of 10-20-30 – meaning 10 pages, 20 words per page, and 30-pt font. I do pretty well with the last two, but found that I did have to double that first one. That said, when I subtract all the title pages, appendices, and background info, it actually does come pretty close to the 10.

Here are some other things you can do to make your reports more easily processed:

  • Include an executive summary
  • Separate out methodological stuff and boil it down as much as you can (other than other usability engineers, very few people seem to care)
  • For results, try to include only one idea per slide
  • Use a simple, distinct, consistent layout – make it easy for readers to identify your finding, evidence (quotes, pix), and solution

Whatever you do, just be sure you can never be accused of “do as I say, not as I do.”


Scott Adams, creator of Dilbert, always had a particular appreciation for usability problems

Friday, May 12, 2017

A foolish consistency is the hobgoblin of little minds. (Ralph Waldo Emerson)

Now, this one is another maxim for more advanced clients (as is this one). The challenge for less experienced clients is to recognize that consistency is actually quite a good thing. They’re the ones more likely to have a blue button on one page and an orange one on another, to have title case for one field and sentence case on the next, to have the call to action at the bottom left on one screen and the bottom right on the following one …

Unfortunately, once they really learn the value of consistency, these folks now have to start dealing with exceptions to the rule. Note, though, that this is a good problem for you to have. A rigidly consistent, even overly consistent, system is a ton better than a system that is just total chaos.

As an example of the former, I just had to deal with a consistency problem involving menus. My company offers multiple products – bank accounts, mortgages, car loans, etc. The menu for each of these is typically a list of the actual products, but also includes some subsidiary and educational material as well, with these three areas marked out as sub-menus.

We are currently getting into an additional product area, and came up with a menu that includes 1 actual product, 1 bit of subsidiary material, and 1 tutorial. Needless to say, this is quite different from our other product areas, which typically include multiple examples of each type. Even so, some of the team wanted to still use the same organizing scheme – which basically amounted to three one-item lists.

Another common problem is parallel lists – i.e., lists whose items are all similar. I’m a huge fan of these, but sometimes they just don’t work. For example, you might have a menu with a bunch of tasks. It makes a ton of sense, then, to make these all start with action verbs – “edit,” “delete,” “create,” etc. Sometimes, though, these menus will also include some support material. But changing that from, say, “help” to something like “Getting Help” or “Understanding how to use …” is just wordy, unnecessary, and possibly confusing.

So, here are some things you can ask the overly consistent to get them to focus on, not just following the rules, but knowing when to break them as well:

  • Are these things truly the same? 
  • Is there a good reason to make this one stick out a little more? 
  • How will the user perceive them?
  • What do we accomplish by making these consistent?
  • What confusion, awkwardness, or other unintended consequences might we cause as well?

By the way, though, do not use this quote! I have a funny feeling that might not go over that well.


I’m thinking this one might have been Photoshopped

Thursday, May 11, 2017

Biased questions lead to biased answers. (Anonymous)

To be honest, I think – at least when it comes to usability testing – all questions lead to biased answers. 

As I was just telling one of our new usability engineers the other day, a usability test is not an interview. It’s supposed to be task-based. And what that means is that, other than prepping the user for think-aloud and giving them a scenario, you need never say another word. It’s all about the user completing the task and vocalizing what they are doing and thinking. The perfect test is one where you just sit there and take notes.

Now, though I do get one of those every once in a while, imperfect tests are much more common. Users only rarely make it that easy for me. Most of the time, I have to work a little bit harder to earn my pay.

Indeed, there is no shortage of times when you have to say something. Most often, the user may simply fail to do the think-aloud. I think all usability engineers are familiar with the classic “What are you thinking?” just for those situations. Variations on this include “What just happened?” “What’s going on?” “Is that what you expected?” and so on. And, once users do start talking, a simple “uh-huh” is usually enough to keep them going.

Even if users are doing the think-aloud, though, sometimes that’s not enough. Frequently, they may leave thoughts hanging. They might say, “That’s a nice …” or “I don’t know if …” or “How did that …?”  Because we humans hate to leave anything incomplete, simply repeating what they just said will usually prompt them to complete the thought.

You can use a similar trick to get users to elaborate on vague comments like, “That’s not good,” or “I don’t like that.” Simply repeating their phrase will get them to almost magically add the “why” (and in a way that sounds a lot less accusatory than simply asking them why).

All in all, the fewer questions, the better. And, if you do have to throw in a few, make it so they don’t even sound like questions.


A lot of this advice came from an excellent talk Judy Ramey (at the University of Washington) gave at a UXPA conference many years ago

Wednesday, May 3, 2017

It’s not your job to guarantee a happy ending. (Philip Hodgson)

I like being a critic. It’s kind of fun pointing out everyone else’s foibles. And let me tell you, being a usability engineer is a great way to do that. Other people’s errors are on parade before you in test after test after test. Great fun!

Seriously, though, I’m actually not really a sadist. In fact, I’m known in particular for making a special effort to be positive when reporting results. I figure I’m just accounting for basic human nature here – nobody wants to hear that their baby is ugly. And if you do need to broach the subject of some possible unsightliness, well, it’s generally a good idea to have some sugar to go along with that medicine. In general, I’d much rather be, not the scold who everyone hates and ignores, but the nice guy who actually has some valuable advice that might be worth listening to every once in a while.

Now, it is pretty rare that I get ugly babies with absolutely no redeeming qualities. Almost everything has something good to be said for it. So, that part of it isn't really that hard.

That said, there are times when you do have to communicate that, yes, we do indeed have a problem here, Houston. But even in those situations, there are still some things you can do to soften the blow.

One of these is to make sure that your client is aware that there might be a problem as testing progresses. In other words, don’t wait until the report-out to bring attention to serious issues. No one likes surprises like that.

So, one thing you’ll want to do is make sure you debrief after every test. You can also send out topline reports – not full reports, but quick summaries – at the end of each day. Finally, you can also get your team to see if they can come up with a solution to any serious issues, say, midway through the test. (In fact, overcoming a problem can feel like an even more positive experience than simply having no issues pop up.)

Another thing I’ve found helpful is to allow the client to vent a little. I just try to put myself in their shoes (I know I’m a venter myself), and try not to take it too personally. Easy to say, but it really does take a little practice to get comfortable with.

Along similar lines, you’re going to have to also make sure that your data is pretty well buttoned-up. They say the best defense is a good offense, and I’ve seen plenty of clients who really go on the offensive when they hear something they don’t want to. In those situations, once again, I counsel remaining cool and calm as you fend off attacks on your methodology, users, prototype, personal integrity, whatever.

A final thing you can do is pick your battles. It’s pretty rare for me to fall on my sword over an issue, and that probably just comes with experience. After doing this for 30 years, I know that it’s rare for something to be a real show-stopper. But there definitely have been some cases, over the years, where data from tests I ran caused products to be pulled, releases to be moved out, or projects to be shut down.

Just be a little mindful about how to communicate results like those.


Philip is the owner of Blueprint Usability

Tuesday, April 18, 2017

Whereof one cannot speak, thereof one must be silent. (Ludwig Wittgenstein)

How tempting it can be though.

Imagine you’re in a report-out, and your client is eating up everything you have to say. You’ve really got them in the palm of your hand.  Why not just share that pet theory you had? Now, you can really only tie it to that one user. And their quote was actually pretty non-committal at that. But, heck, why not? You’re on a roll! Share this gem, and they’ll really love you.

On the other hand, there are much less positive scenarios as well. For example, one of your clients might have a great question, but – unfortunately – you don’t really have any data. Maybe it never occurred to you to collect it, and it thus never made it into your test script. Perhaps you neglected to probe when the situation came up. Maybe you didn’t recruit enough of that particular type of user to really be able to say.

In fact, that last scenario is something I face all the time. Everyone needs to remember that we are dealing with qualitative data here – and the small number of people that qualitative research typically involves. Now, those numbers might be fine to show some usability mishap (a mislabeled button, a hidden link, a missing step), but when it comes to things that are more preferential, it can be hard to really say when all you’ve got is a couple of data points.

Another issue that sometimes comes up is when it’s just six of one, half a dozen of the other. In other words, there’s data for one side of an argument, and there’s data for the other. Now, you’ve most likely got a client with strong feelings in one direction (heck, you might even have some yourself). So, they’ll probably feel just a little bit cheated: “Hey, I paid you all this money. I expect to see some real results. How am I gonna make a decision now?”

Basically, all it really comes down to is how comfortable you are saying, “I don’t know.” Interestingly, though, I’ve found that that will actually generate more respect for you in the long run.


And, yes, I know I’m taking this quote totally out of context  ;^)

Wednesday, March 1, 2017

The perfect is the enemy of the good. (Voltaire)

Usability is not physics. It is not a pure science. 

Heck, even though I call myself a “usability engineer,” I know what I do is honestly pretty iffy engineering. And I should know – I put up with real engineering school for two years before calling it quits.

What usability does, however, have in common with “real” engineering is a focus on practical solutions and on real data. Now, there was a time when even usability’s data was pretty darn hard – basically, error rates and time on task. Usability engineers found, though, that that hard data was lacking an important component. It really didn’t tell anyone why users failed, why things took so long, or what project teams could do to make things better. Softer, more qualitative data, however, did.

So, you may run across clients who still insist on that hard data, especially if they have a quant background (for example, from business or marketing). In that case, you have to sell the richness of qualitative over the certainty of quantitative. And for some clients, you will definitely have to overcome the idea that qualitative data is less pure, less perfect. In those situations, I like to emphasize what we do with the data – in particular, that soft data can be a lot more actionable than hard. (It also typically eliminates the conjecture that actually comes when teams move on from gathering their hard data and then try to interpret what it means and how to come up with solutions.)

A similar issue usability engineers have to deal with has a lot to do with the numbers thing. I cover that in “Not everything that counts can be counted, and not everything that can be counted counts” (which is a quote from Albert Einstein).

Finally, there is the issue of the perfect test. And I’ve talked about that before in, “The test must go on” (I’ve got Laura Klein down for that one). 

Ultimately, the final decision can come down to running an imperfect test or never getting around to running that perfect one. And we all know that there's nothing perfect about the latter.

Usability is really the art of the possible.  We do what we can.  Like I tell my clients, give me what you’ve got, and I’ll be sure to get you something of value in return.




But then again, there’s this!

Thursday, February 9, 2017

Unfortunately, it's hard to differentiate in the software between someone who wants to go on a voyage of discovery, and someone who just wants to open a file. (Peter Flynn)

Now, what’s sad here is that I can almost guarantee that your design team (or marketing partners or senior execs) will err on the side of the former. It can sometimes be very hard for them to realize that this thing they’ve worked on, thought about, and agonized over for months, if not years, is really just a means to an end, a tool that users employ with little actual regard for the tool itself. 

Unless, that is, the tool was designed for some other purpose than to help those users achieve their goals … If, for example, it was designed with someone’s portfolio in mind, or to impress the division manager, or to get recognized in some magazine or on some website. Now, this will draw some attention to your tool. Unfortunately, at least when you’re talking about your users, that will almost always be attention of the negative kind. 

In general, users want tools that don’t draw attention to themselves. To them, your UI would be best if it were totally transparent, even invisible. 

And if your UI needs lots of training, that’s even worse. Note that that includes traditional kinds of training like manuals and videos, and more up-to-date, subtle means like coach marks and what’s-new content.

Now, of course, there are certain user types who do like to go exploring. These users are often techy types, and sometimes really do want to learn the tool and develop a mastery of it. Good for them! I’m not sure we need to develop our system around them, though. Perhaps we could just offer some option so that they can go on that voyage without forcing everyone else to – maybe a link to a tutorial, maybe an expert version of the software …

The important thing, though, is to concentrate on the user and their goals, instead of on the tool. 


Peter is at University College Cork, one of the better schools for usability in Europe

Friday, January 27, 2017

Any intelligent fool can invent further complications, but it takes a genius to retain, or recapture, simplicity. (E.F. Schumacher)

At work, most people tend to get rewarded for mastering the complex. Think of some internal system that only Joe knows how to use, some bureaucratic process that only Cheryl can understand, some hardware that only Trey can fix. Honestly, I’m pretty sure it’s behind why lawyers, accountants, and engineers all make the big bucks.

Unfortunately, for us UX types, it’s just not enough. Sure, the developers can get away with mastering C++; the lawyers with Reg this and Reg that; and project management with some unwieldy, macro-infested, homegrown spreadsheet horror. For us, though, we typically have to take all that complexity and turn it into something that our users can deal with and make sense of.

Thus, we often act as translators. So, not only do we need to learn that difficult source language of technology and bureaucracy and regulation, but we also have to translate all that into the target language of our users. 

Our effort is two-fold. First, we need to master the complex. Then, we need to turn that complexity into simplicity. 

Over the years, I’ve noticed that some UXers are great with that first part, but not with the second. To me, they’ve always seemed like frustrated techies (wolves in sheep’s clothing, if you will). Consequently, their designs can often be great for themselves – and other techies – but maybe not so much for everybody else.

On the other hand, it’s hard to be a graphic designer without mastering Photoshop, or an IA without being an Axure wizard, or a writer without knowing your content management system inside and out. What happens when you don’t?  Well, you might very well come up with user-friendly solutions, but you might also have a hard time translating those solutions into something workable. Heck, you might not even be able to fully grasp the complexity of the problem you’re trying to solve from the get-go, leaving out important pieces and ultimately making your solution harder, not easier, to use.

Face it, UX is one of those both-sides-of-the-brain disciplines. If your brain is structured that way, you’ll get a major kick out of both understanding the complex and then turning it into something simple. If not, though, I can guarantee that at least one side of that equation is going to bug the heck out of ya.


E.F. Schumacher was an economist and statistician, 
but was also the author of Small Is Beautiful

Thursday, January 5, 2017

Instead of assuming that people are dumb, ignorant, and making mistakes, assume they are smart, doing their best, and that you lack context. (Nicholas Zakas)

Actually, I sometimes like to think that it’s the designer (or developer, or client, or HIPPO) who is dumb & ignorant. Needless to say, I also keep that strictly to myself.

Those are definitely my thoughts, though, whenever I hear someone put forth the traditional “Where do you get these idiots from?” (or something along those lines). How I actually do respond is to point out that these are our users, that we used a 10-page screener and paid a recruiting agency $1000 to get ahold of them, and that not everyone out there is as smart and tech-savvy as you guys.

So, that usually takes care of the “smart” part. As for the “doing their best,” we sometimes do have users who are just there for the money, but that’s extremely rare. It’s usually totally obvious to anyone observing that 99 out of 100 users are taking things seriously and are genuinely engaged.

Now, as for “context” … Hopefully, the design team had some exposure to that beforehand. Personas, journey maps, and all that other great upfront research can give the team some real feel for their users – what they do and don’t know, what they like and don’t like, what their goals and fears are – and how to design something just for them.

Even if there has been that exposure, though, I try to push testing as an excellent way to get even more context. Even the best upfront research can be incomplete, or neglected, or misapplied. Testing, though, is the chance to really check things out, to get that final word. The more sophisticated teams I work with have no problems understanding that, and often see testing in this regard as simply fine-tuning.

It’s those teams who don’t do any up-front work, and who can be totally blind-sided by things that happen in the lab, that I really worry about. Hopefully, though, these teams can use that experience to learn to empathize with their users a little more – heck, maybe even do a little of that up-front research and avoid those uncomfortable situations in the first place.


Just in case you were wondering what a HIPPO is