Wednesday, December 13, 2017

If you have no critics, you’ll likely have no success. (Malcolm X)

I never thought I’d be quoting Malcolm X in a blog about usability. In fact, I originally had no idea I was quoting him. I came across the quote in a fortune cookie, of all places. It was only when I Googled it that I discovered it came from the father of Black radicalism.

It is, however, a quote that can be applied to many different fields. Within usability, I see it applying in a number of situations.

Most obviously, this sounds like something I’d share with my project teams. I’ve found, over the years, that good designers can’t wait to get real user feedback. They tend to have thicker skins, and can roll with the punches. And I like to point out, and congratulate them on, their ability to do so.

Needless to say, we can also turn the tables on ourselves. Once again, I’ve found that the better usability engineers are the ones who are always learning things and looking for a better way to do their jobs. They tend to practice what they preach, and let humility be their guiding principle. But this probably just comes from being a social scientist. I’ve found that, in every test I run, I learn something new – whether that’s about computers, people, or myself.

Finally, I think this maxim applies to acceptance of usability in general. Having been in this profession for 30 years allows me to take the long, historical view. In particular, I remember way back when the resistance came from the techies (a time which Alan Cooper so brilliantly captured in the title of his book The Inmates Are Running the Asylum). 

Next, it seemed to come from graphic designers. Though they did us a service by adding more pleasure and delight to the user experience, they tended to divorce those considerations from more practical considerations like usability and company profit. Jakob Nielsen, for some reason, seemed to be the brunt of a lot of their disapproval.

Lately, it seems resistance is coming from the business side. In particular, I worry that all a site amounts to these days is a sales or purchase funnel, where users are hustled along without any time to explore or ask questions – to me, at least, the online equivalent of a used car lot. Business does have the numbers on their side – with analytics, big data, and A/B testing – but I do worry that they sometimes may be missing the forest for the trees.

And then there's Agile ... Not only does this seem like the return of the techies, but it seems like, this time, they've teamed up with the business side against us.

Ah well, it’s always something, or somebody, isn't it? I actually think the dialectical nature of all this is good for usability. It shows that we can adapt, incorporate other viewpoints, and even act as a mediator sometimes.

And you thought I was making that up

Thursday, November 9, 2017

Web users ultimately want to get at data quickly and easily. They don’t care as much about attractive sites and pretty design. (Tim Berners-Lee)

Wow! I never thought I’d be disagreeing with Sir Tim Berners-Lee. But there you go …

Now, there was a time when I would have heartily agreed with him. And my guess is that this quote is probably from a long time ago as well.

Yup, it’s hard to believe, but there was a time when information was paramount. The Internet was a place you went for data, plain and simple. I think the idea was that having all the information in the world at your fingertips was enough. Anything that could possibly get in the way – and that included design and aesthetics – did indeed get in the way, and really shouldn’t be there.

(You can still see that approach in sites like Craigslist, the Drudge Report, and even Jakob Nielsen’s Alertbox. In fact, Jakob just put out an article on what he calls “brutalist” web design – borrowing the term from architecture – discussing this very style.)

Fortunately or unfortunately, something called eCommerce then occurred. Yup, companies started using the Internet to sell things. (It wasn’t just random people and organizations dumping data there anymore.) Further, those companies wanted to use all the methods at their disposal to convince you to buy their stuff. 

At the same time, web coders started introducing all sorts of techniques that could make websites less like pure HTML and more along the lines of print or TV or whatever the client desired. Finally, there were also studies showing that users preferred more aesthetically pleasing designs. In fact, in comparative tests of the same site in a “brutalist” style and a more aesthetically pleasing one, users rated the more attractive version as more usable as well (the so-called aesthetic-usability effect).

The age of the graphic designer was at hand (and, honestly, we’ve never really looked back). Unfortunately, some of those graphic designers got a little carried away. Given free rein, these folks went a little overboard, and started making sites that were not only attractive, but “different,” “fresh,” and even “challenging” as well. In other words, the design had ceased to help the user and the business achieve their goals; it interfered with them, becoming something of an end in itself.

Now, here’s the thing … Usability and aesthetics need not be in conflict. In fact, a really great design will have them working hand in hand, seamlessly, to help the user and the business (if not necessarily the graphic designer) meet their respective goals as efficiently and effectively as possible.


Hard to believe, but I think that screen was Photoshopped!

Monday, October 9, 2017

The most valuable question could be the one that’s not asked. (Anonymous)

And that’s why I’m not so crazy about surveys.  Sure, you can include an “Other” field and, yes, you can test your survey out beforehand.  Heck, you can even do some qualitative research (e.g., a focus group) upfront to help guide the creation of your survey.  But there will still be plenty of things you miss, for example:
  • Respondents may not be able to express their issues in words.  Believe me, “This sucks!” and “I hate this!”  are not going to provide you with a lot of actionable data.
  • They may also have gotten so used to work-arounds that they might never think to bring up the issue the work-around is for.  Along those lines, they may also have no idea that something could even be better.
  • More importantly, they may simply fall prey to the limits of human memory, not remembering, for example, that terrible issue that came up last month and that they’ve successfully managed to totally repress since then.  
  • Finally, you simply cannot assume – no matter how diligent you’ve been – that you have encapsulated their whole universe.  Unless you are a user yourself – and your team, further, represents every persona for that user out there – there will still be plenty of things you just can’t possibly foresee.

How to get around this problem?  Well, why not go straight to the users themselves?  In particular, why not let the user tell you or, even better, demonstrate how they feel, think, and behave?  That’s where in-depth interviews, usability tests, and ethnography come in.

Yes, I do typically have a list of things I want to cover when I do these kinds of studies.  What I’ve found, though, is that, if it matters to the user, they’ll be certain to let you know.  And this will come, not from your asking about it, but from them volunteering – either free-form or prompted by some specific task they’re doing.

Now, if something does not come up – and my team really wants some feedback on it – I will probe.  I always tell my team, though, that this is really an instance of a bird in the hand being worth two in the bush.  In other words, if the user brings up or uncovers something on their own, that’s going to be a lot more valuable than if I force them to come up with a response.  

In fact, I usually tell my clients that there is a definite hierarchy of feedback, based on how the feedback came about:
  1. Independently, through a task
  2. Independently, through interviewing
  3. Through probing, but only at the highest level (e.g., “How did that go?”)
  4. Through probing a little deeper (“Any thoughts on page X?”)
  5. Through direct questioning (“What did you think of field Y?”)

Note that I would never go any further than that last instance.  In other words, “Did you understand what that was for?” “Is that link prominent enough?” and “Is that the best place for that button?” are simply leading the witness, and I won’t stand for them.

Now, do surveys have a purpose?  Why, of course they do.  If it’s numbers you’re after, surveys are up there with web analytics and A/B testing.  Note, though, that quantitative methods all lack two very important things:
  • The ability to follow up
  • The ability to get at the real reason why

And that’s why I usually recommend triangulation – quantitative and qualitative, surveys and tests, analytics and interviews …  And, believe me, that should about cover it all.




Thursday, September 7, 2017

A wealth of information creates a poverty of attention. (Herb Simon)

Information density is one of my favorite issues. I’m pretty sure it crops up on every test I run.

And the particular problem I run into is typically the one that Simon points out here. If I had a dollar for every time I’ve written that n users couldn’t find x, and that we should “consider simplifying the UI,” I’d be a (very slightly) richer man. 

It’s pretty obvious where it comes from, too. Marketeers and execs are famous for wanting to provide as many features as possible, while also making sure that everything is just one click away. Faced with this kind of “logic”, I try to point out the total fallacy of the old one-click rule and the primacy of information scent, but it can sometimes be a struggle.

I also like to point out that simpler interfaces are just easier to use. I think, these days at least, most people do get that. I only need to point to Google, or apps, or Nest, or even Tinder to make my point.

I also, though, like to get them to think about the user’s experience as a whole. And that means not just whether users see each marketeer or exec’s particular pet feature.

I try to get across that a little here, a little there amounts to something I like to call “additive design,” a guaranteed way to get a system or site or app to sink under its own weight. I try to get them to consider that, as Shlomo Benartzi has written, “every new feature comes with a cost.”

Finally, I also like to tie in what I call the simple, basic “yuck factor.” My users are great at providing me with ammunition for that. If I had a dollar for every quote where a user basically says “TMI!” …  Even better, I’ve found that these quotes are typically some of the most pithy ones I get.

What I really try to get across to my clients in this instance, though, is that this visceral reaction on the user’s part has some very real consequences. In addition to making things harder to find, it oftentimes makes the user not even want to try, and just give up. And that – and clutter in general – can have a major negative impact on brand perception. And we all know how worked up marketeers and execs can get about that.


Herb Simon was quite the guy. A Nobel Prize winner, he also won the Turing Award, and made major contributions to psychology and sociology as well. He’s also the father of the “think-aloud protocol,” the basis of pretty much every usability test that's run. I was lucky enough to have met him in grad school.

Wednesday, August 30, 2017

For every complex problem there is an answer that is clear, simple, and wrong. (H.L. Mencken)

A/B testing sounds like a pretty darn good idea.  In fact, on the face of it, it seems to have some real advantages over good, old-fashioned usability testing:

  • Numbers – If you’re working on a well-trafficked site and on a well-trafficked part of that site, you will probably be getting results in the thousands. Now, compare that to the 7 to 12 participants you might get on a usability test. (The sketch after this list shows what those numbers actually buy you.)
  • Genuineness – A typical beef with usability testing is that it’s artificial – the users are in a lab, being observed, and typically doing some role playing.  In A/B testing, though, they are actually using your site, to do things they want to do, and without ever knowing they are being “tested.” 
  • Relevance – A/B results are typically tied to specific metrics.  These, in turn, are tied to specific business goals – account openings, purchases, downloads.  In other words, this stuff can have a genuine effect on the bottom line.
  • Concreteness – There’s a joke among usability engineers that the profession’s motto should be, “It depends.”  We sometimes don’t realize how frustrating this can actually be, though, for our clients who are more business- or numbers-oriented. A/B testing, on the other hand, is definitely something that speaks their language. 
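
To make that numbers advantage concrete, here’s a minimal sketch of the kind of significance test that sits underneath most A/B tools – a two-proportion z-test. It uses only the Python standard library, and the visitor counts and conversion rates are hypothetical, purely for illustration:

import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal CDF
    return p_a, p_b, z, p_value

# Hypothetical traffic: 20,000 visitors per variant, 4.0% vs. 4.6% conversion
p_a, p_b, z, p = two_proportion_z_test(800, 20000, 920, 20000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
# With traffic like this, even a 0.6-point lift is clearly significant (p < 0.01) –
# something 7 to 12 lab participants could never establish statistically.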

At the same time, however, A/B testing also has plenty of possible issues:

  • Comparative only – A/B testing is a comparative method only.  In other words, you need 2 things.  But what if you only have 1?  Does that mean you’ll have to come up with a straw man in that situation?  (Usability testing doesn’t have that liability, and can be used for comparative testing as well.)
  • Finished product – Though A/B testing is great for getting feedback in real situations, that also means you can’t try something out before it gets to a polished, ready-for-prime-time state.  (In usability testing, you can get feedback simply using pieces of paper.)
  • Emphasis on details – A/B testing tends to focus on smaller issues, like button size or color or location.  Great stuff to know, but a pretty incomplete picture.  Who knows, maybe there were things that weren’t even considered that could bump up conversion even more.  How would you ever know?  (Usability testing looks at whatever causes users problems, whatever that might happen to be.)
  • Cumulative effect – Because A/B testing often means tweak this here, tweak that there, attention isn’t always focused on the overall or cumulative effect.  Yes, that marketing tile was effective on the home page in week 1.  And, yes, that call-to-action that was added in week 6 worked well too. Does that mean, though, that we can keep adding whatever we want as the weeks go on?  I am actually familiar with a site that did just that.  And, as it is right now, it’s just about ready to sink under its own weight.

  • Short-term focus – As illustrated above, A/B testing is often very much about short-term effects.  Now, that fits in very well with an Agile methodology (which often relies on A/B), but that approach might also backfire in the long run.  How, for example, will that cluttered homepage impact conversion down the road, or overall?  
  • Scalability – Along the same lines …  So, increasing the size of a button bumped conversion up by 2%?  That’s great!  So, why not just bump it up again?  In other words, how can we tell when we’ve passed the line of too much of a good thing?  Heck, why should we even really care?
  • Neutral results – A lot of the press around A/B testing tends to focus around dramatic results.  From what I understand from people who are really doing it, though, typical results tend to be more along the lines of an “eh” and a shrug of the shoulders.  Now, was all that effort and expense really worth it for that 0.01% move on the needle?  (A sketch after this list puts some numbers on that.)  Even worse, what if both designs tested equally badly?  What other data would you have in that situation to come up with something new and different?
  • Effect on standards – One particular kind of dramatic result seems to be when the findings break conventional thinking, or go against established standards.  Now, that’s pretty fascinating, but what exactly do you do with it?  Does that invalidate the standard?  Is there something about this particular instance that would be a good example of “it depends”?  Do we need to test every instance of this particular design (which is what I’m thinking the vendor might suggest)?

  • What happens next – A/B testing focuses solely on conversion.  As I mentioned above, that can be a good thing.  At the same time, though, conversion doesn’t come close to describing the totality of the user’s experience.  What if the user successfully signed up for … the wrong account?  What if the user successfully purchased the product … but did so offline?  What if they downloaded that newsletter … but never read it?  What if they signed up for your service ... but it was such a painful process that they really don't want to come back? Unfortunately, siloed marketeers often don’t care.  Just getting the user “through the funnel” is typically their only concern.   How the whole experience might affect, say, brand is something else entirely.  To quote Jared Spool, “conversion ≠ delight.”
  • Unmeasurables – Those brand concerns above hint at this one. Note, though, that unmeasurables can be a lot less lofty as well.  I, for example, do a lot of work with online banking systems.  Now, these people are already customers.  What key business metrics might they align with?  Actually, they typically have their own, very diverse goals.  They might be all about checking their balance, or looking for fraudulent transactions, or paying their bills on time.  All we really can do is support them.  Indeed, there are tons of sites out there that are more informational or that involve user goals that are vaguer or might not directly align with business KPIs.
  • Why – Perhaps the most important drawback with A/B testing, though, is a lack of understanding why users preferred A over B.  I mean, wouldn’t it be nice to learn something from this exercise so that you can apply it, say, in other situations?  Honestly, isn’t understanding why something happened a necessary precursor to understanding how to improve it?  Unfortunately, A/B testing is all a bit of a black box.  
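
On the “move on the needle” point above, a rough worked example shows why tiny lifts are so expensive to even detect. This is the standard sample-size approximation for comparing two proportions; the 4% baseline conversion rate is an assumption, not anyone’s real data:

import math

def samples_per_variant(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users per variant needed to detect an absolute conversion
    lift (two-proportion test, alpha = 0.05 two-sided, power = 0.80)."""
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / lift ** 2
    return math.ceil(n)

# Assumed 4% baseline; lifts of 1, 0.1, and 0.01 percentage points
for lift in (0.01, 0.001, 0.0001):
    print(f"lift of {lift:.2%}: ~{samples_per_variant(0.04, lift):,} users per variant")

At that assumed baseline, reliably detecting a 0.01-percentage-point lift works out to tens of millions of users per variant – which is the real force of the “was it worth it?” question.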

A/B testing is basically only binary feedback.  You essentially get a thumbs-up or thumbs-down.  But perhaps there’s more to it than that.  Perhaps it does after all depend …


H.L. Mencken working on some early man-machine issues

Tuesday, June 27, 2017

Information is gushing toward your brain like a fire hose aimed at a teacup. (Scott Adams)

I think we can all identify with this one. I mean, honestly, how many emails do you get in a day?

The real question, though, is whether you can identify with your users in this regard. How complex is your website, or your app, or your software?  Does your system simply add to that information? Are you part of the problem? Or have you made an effort to be part of the solution? 

Now, as usability engineers, we have no shortage of opportunities to help out our company’s users in this regard. We also, however, need to remember our own users – the project teams we run tests for. In particular, there are a number of things we can do to make our primary deliverable to them – our reports – more effective. 

Our main goal here should be to take the fire hose of information that is the result of a usability test and turn it into something more like a teacup’s worth. Now, the first thing to realize about this is that it is not easy. It can be very tempting to simply pass that fire hose of stuff along. It takes a lot of work to figure out what the flood of information actually means and boil it down to something a lot more consumable.

One thing I’ve found very useful is something an art director at a former employer once shared in a presentation about how to create better presentations. I forget the exact numbers, but it was something along the lines of 10-20-30 – meaning 10 pages, 20 words per page, and 30-pt font. I do pretty well with the last two, but found that I did have to double that first one. That said, when I subtract all the title pages, appendices, and background info, it actually does come pretty close to the 10.

Here are some other things you can do to make your reports more easily processed:

  • Include an executive summary
  • Separate out methodological stuff and boil it down as much as you can (other than other usability engineers, very few people seem to care)
  • For results, try to include only one idea per slide
  • Use a simple, distinct, consistent layout – make it easy for readers to identify your finding, evidence (quotes, pix), and solution

Whatever you do, just be sure you can never be accused of “do what I say, not what I do.”


Scott Adams, creator of Dilbert, always had a particular appreciation for usability problems

Friday, May 12, 2017

A foolish consistency is the hobgoblin of little minds. (Ralph Waldo Emerson)

Now, this one is another maxim for more advanced clients (as is this one). The challenge for less experienced clients is to recognize that consistency is actually quite a good thing. They’re the ones more likely to have a blue button on one page and an orange one on another, to have title case for one field and sentence case on the next, to have the call to action at the bottom left on one screen and the bottom right on the following one …

Unfortunately, once they really learn the value of consistency, these folks now have to start dealing with exceptions to the rule. Note, though, that this is a good problem for you to have. A rigidly consistent, even overly consistent, system is a ton better than a system that is just total chaos.

As an example of the former, I just had to deal with a consistency problem involving menus. My company offers multiple products – bank accounts, mortgages, car loans, etc. Our menu for each of these typically includes a list of the actual products, plus some subsidiary and educational material, with these three areas marked out as sub-menus.

We are currently getting into an additional product area, and came up with a menu that includes 1 actual product, 1 bit of subsidiary material, and 1 tutorial. Needless to say, this is quite different from our other product areas, which typically include multiple examples of each type. Even so, some of the team still wanted to use the same organizing scheme – which basically amounted to three one-item lists.
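
To make the trade-off concrete, here’s a minimal sketch of the two options – the rigidly consistent structure, with its three one-item sub-menus, versus a flattened one. The item names are invented for illustration, not our actual menus:

# Rigidly consistent: the same three sub-menus every other product area uses,
# even though each one here holds only a single item
consistent_menu = {
    "Products":  ["Example Loan"],       # hypothetical item names throughout
    "Resources": ["Rate Sheet"],
    "Education": ["How It Works"],
}

# Flattened: one three-item list, dropping sub-menu headings that
# no longer earn their keep
flattened_menu = ["Example Loan", "Rate Sheet", "How It Works"]

The questions at the end of this post are one way to decide between the two.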

Another common problem is parallel lists – i.e., lists whose items are all similar. I’m a huge fan of these, but sometimes it just doesn’t work. For example, you might have a menu with a bunch of tasks. It makes a ton of sense, then, to make these all start with action verbs – “edit,” “delete,” “create,” etc. Sometimes, though, these menus will also include some support material. But changing that from, say, “help” to something like “Getting Help” or “Understanding how to use …” is just wordy, unnecessary, and possibly confusing.

So, here are some things you can ask the overly consistent to get them to focus on, not just following the rules, but knowing when to break them as well:

  • Are these things truly the same? 
  • Is there a good reason to make this one stick out a little more? 
  • How will the user perceive them?
  • What do we accomplish by making these consistent?
  • What confusion, awkwardness, or other unintended consequences might we cause as well?

By the way, though, do not use this quote! I have a funny feeling that might not go over that well.


I’m thinking this one might have been Photoshopped