Tuesday, March 22, 2016

Don’t fear mistakes. There are none. (Miles Davis)

I feel for my UX teams – the IAs, the IDs, the writers, the graphic designers, even the coders. They put their hearts and souls into their designs. It must really be hard to watch someone rip those designs apart or fail miserably trying to actually use them. And that must be especially difficult when you have to simply sit there and watch, with a thick piece of one-way glass between you and that user.

Something similar happens during any usability test report-out. Reports tend to simply report the findings – and not point fingers – but it still must be hard to hear. 

Now, personally, I do try to do some things in my reports that can help the team feel a little better. Probably the most important is to make sure there are positive results (ones to celebrate) as well as negative ones (ones to fix). 

I do something similar during actual testing as well. Like most usability engineers (at least I would hope), I try to debrief after every user. These debriefs – at least for the first few users – tend to be a little awkward. One way I’ve found effective to break the ice is to offer a few observations about things that worked well. That usually gets the ball rolling, and the team will naturally move on to the things that didn’t work so well.

A lot, however, depends upon the individual. Some people are just a lot more sensitive than others. I find that sensitivity is especially common among newer team members and among those who are simply new to usability testing or UX in general. (Very experienced designers can’t wait to get into the lab. Their philosophy is generally, “Bring it on!”)

In fact, I’ve noticed that these newer team members often go through something not unlike Elisabeth Kübler-Ross’s Five Stages of Grief. The first stage, for example, is usually a combination of Kübler-Ross’s first three – denial, anger, and bargaining. Observers might ask, “Where do you get these users from?” Or they might focus on issues with the prototype, or pepper you with questions and carping about the methodology. The best way to handle observers at this stage is to keep them from over-reacting (even if that means allowing them to vent a little) and to make sure that they come to some more tests.

A second stage – after a number of users or even after a number of tests – is often something not unlike depression. When that happens, I try to be supportive. I might, for example, point out what worked well or some easy fixes. Finally, though, the observer reaches acceptance. And at that point, they are probably pretty well sold.

One thing that I like to tell observers wherever they are on their journey is that testing their stuff and incorporating the feedback is a real feather in their cap. I also go on to say that not everyone gets a chance to get their stuff tested, and that their willingness to do so really separates them from the average IA, ID, writer, graphic designer, or even coder.


One of my all-time faves

Tuesday, February 23, 2016

Just because no one complains doesn't mean all parachutes are perfect. (Benny Hill)

Does your organization make use of voice of the customer (VOC) systems? These typically combine traditional ways of getting customer feedback – surveys, “give feedback” links, and so on – and display it all in one system. They usually try to automate everything as well, using things like textual analysis and machine learning. They also tend to feature lots of cool graphs, a snazzy dashboard, and all sorts of bells and whistles.

Personally, I think they’re wonderful. Hey, it’s all feedback, right? 

At the same time, though, I have run into a number of people over the years who seem to rely on this kind of feedback almost exclusively. Now, the various methods that make up a VOC system do have a number of strengths (sheer numbers are always at the top of the list), but they have a number of drawbacks as well.

So, what are some of those problems? I see three issues on the user’s end:

Knowing – First of all, the user needs to know that there’s a problem. In the lab, I often see users who think they have completed a task, but who actually have some steps remaining. I also sometimes see users complete another task by mistake, but be totally unaware that everything isn’t just peachy-keen. 

Another issue is work-arounds, a special problem for experienced users. They may be so used to doing things a certain way that they may not even be aware their experience has some issues, let alone complain about it.

A special issue for surveys is human memory. There is often a major time lapse between when users have an experience and when they get surveyed. The chance of their remembering specific details is often very low.

Articulating – Second, the user has to articulate the problem. Note that this is not as trivial as it may seem. Believe me, I’ve been doing this for 30 years, and I still struggle to figure out exactly what went wrong in some instances during a test. Is this an example of poor cognitive modeling, Fitts’s Law, progressive disclosure, skeuomorphism? Now, imagine some grandma from Iowa trying to do something similar.

What you often get instead are very non-specific comments. Just as an example, it truly is amazing how many times over the years I’ve seen my particular favorite, “This sucks!” Not a lot of actionable information in that one, huh? (Just as an aside, one major strength of usability testing is that it allows follow-up on these vague responses.)

Caring – Finally, the user has to care enough to share what they think with you. And that’s where those traditionally low response rates come from. In fact, would you believe that some companies are actually happy if they get a rate of 0.5%? Wow, how representative can that be?

So, who does fill out these things then? Well, there is typically a fair amount of self-selection going on. You might get haters, or fan-boys, or the terminally cranky and hard-to-satisfy, or the eager-to-please.

And that too is another benefit of testing. Though a test almost always involves small numbers, you do know what every one of those users thinks or would do – even if they would never respond to a survey or click on “give feedback.”

A final problem with caring is what I would call a threshold issue. In other words, with a VOC system, you’re probably going to get a lot of highs and lows. If, however, something was not a huge disaster – or, alternatively, a life-changing experience – it’s probably not worth reporting.

In fact, you might well run into the death-by-a-thousand-cuts syndrome. Just imagine several million users who hit a couple of lower-level issues every time they log in to your system – but never anything big enough to actually complain about. Now, imagine another similar system that comes along and doesn’t have any of those low-level issues. Imagine, further, that all those users leave you for that system overnight. What would you then have in hand that would give you any idea why that happened (or – even better – that would have warned you something like that was about to happen)?

On the opposite end of the spectrum, you can get something akin to Benny Hill's parachutes. In fact, one of my favorite clips of all time came from a test I was doing on a production system. At the end of a particularly trying task, a survey popped up. If I remember correctly, the user said something along the lines of, "If they think I'm going to fill out their %#$@ survey after that @*#$% experience, they've got another &@^#$ thing coming."

In sum, VOC systems are wonderful, but they can involve their fair share of missed signals and false alarms. To make sure they are more than a glorified suggestion box, it can be helpful to triangulate their findings with other sources of data – web analytics, call center data, and ... even usability testing. 


Benny Hill, famous comedian, ladies’ man, and usability guru

Tuesday, February 2, 2016

Instructions must die. (Steve Krug)

I started out as a tech writer. I used to write manuals that would tell users how to use computer systems – hundreds and hundreds of pages of instructions.

In fact, that’s how I got into usability. The initial scenario would involve me going back to my developers and telling them that some particular feature took four or five pages to explain.

“Could we make that a little simpler? What if we moved that bit to the next page, and got rid of this thing here? It might make more sense that way too, right?”

Over the years, developers started bringing me things to look at before I wrote them up. From there, it was a small step to asking for my input upfront, to letting me design a few things on my own – to even doing a little usability testing.

Now, that was a long, long time ago (we’re talking the ‘80s here, folks). It’s kind of strange how that instruction thing is still around though. 

It’s been a long time since I saw – let alone worked on – a manual. What I’m talking about here, though, is something I often see in design or review meetings – basically, a knee-jerk reaction to issues with fields or forms or pages: “just throw some instructions in there.”

Those instructions can appear at the top of the page, to the right of a field, under the field, in the field, in help, in FAQs, wherever … The real problem, however, is that nobody ever reads them.

And even if they do, they’re really just one more thing to add to the user’s cognitive load. Why can’t the action just be obvious on its own?  Why do we even need instructions? 

In Don’t Make Me Think, Steve Krug concentrates on boiling the instructions down. There are plenty of instances, though, where doing a little thinking can eliminate instructions altogether. 

My favorite example is probably the date or telephone number or Social Security number fields that won’t accept delimiters (you know, the / or -). Just strip ‘em out on the back end, and you can kiss those instructions goodbye.
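
Just to sketch what that back-end stripping might look like – this is a hypothetical example, and the field rules here (9-digit SSN, 10-digit US phone) are illustrative, not from any particular system:

```typescript
// Hypothetical sketch: accept whatever delimiters the user types,
// then strip them out before validating.

/** Keep only the digits, so "123-45-6789" and "123 45 6789" both work. */
function normalizeDigits(raw: string): string {
  return raw.replace(/\D/g, "");
}

function isValidSsn(raw: string): boolean {
  return normalizeDigits(raw).length === 9;
}

function isValidUsPhone(raw: string): boolean {
  return normalizeDigits(raw).length === 10;
}

// "(555) 123-4567", "555.123.4567", and "5551234567" all pass,
// so the "no dashes or slashes, please" instruction can simply go away.
console.log(isValidSsn("123-45-6789"));        // true
console.log(isValidUsPhone("(555) 123-4567")); // true
```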



Thursday, January 14, 2016

Happy talk must die. (Steve Krug)

Sometimes, I joke with my writers that they must get paid by the word.

Actually, for some of them, that’s not that far from the truth. A lot of writers for the web wander over from more traditional, print-based media – newspapers, magazines, PR … And, in those fields, writers do have to deal with something called a “word count.” In other words, there is a set number of words they have to produce – even if they might not have all that much to say.

So, the first thing all these writers who are new to the web have to deal with is the fact that – I hate to break it to ya, fella, but – no one really wants to read your stuff. I’m sure that’s an incredible blow to the ego. Coming to terms with that fact, though, almost seems like a rite of passage.

There are two ways that last bit happens. Probably the most effective is to watch a few usability tests. That masterful opening paragraph that you spent hours on and are quite pleased with? Well, it looks like only 1 of 10 users actually read it. And, of the 9 who didn’t, did you hear their quotes? Did that one guy actually say, “Blah blah blah”? He did!

If that doesn’t work, I usually bring up all the research that shows that people really don’t like to read online, and why that is. My particular favorite is the seminal work that was done at the Nielsen Norman Group.

Nielsen Norman also points to some real solutions, though. Now, these are simple things such as:
  • Using lists
  • Bolding keywords
  • Breaking up paragraphs

What’s really great about these methods is that they support the way most people read on the web, something known as “scan and skim.”

The main point I try to get across, though, is this: those big, grey blobs that you were rewarded for in previous lives? I hate to break it to ya, but they just ain’t going to work here.


I’m pretty sure this is not what Steve was talking about
(Happy Talk is a song from South Pacific,
here covered by English Goth Punk band The Damned)

Monday, January 4, 2016

Things that behave differently should look different. (Alan Cooper)

There are two things going on here. First, there are affordances. That’s just a fancy way of saying that what a thing does should be obvious from the way it looks. An affordance is the little cue that tells you what a thing can do.

Think of a door. If it has a push plate, it tells you you have to push the door to open it. If, instead, it has something you can grab, that tells you you’re going to be pulling this one toward you. And if it has a fairly standard-looking handle, that means that you need to grab and turn it before you can do anything else.

In a more digital context, radio buttons say, “Click one of me.” Checkboxes say, “Click as many of me as you like.” Sliders say, “There’s more here.” Links say, “I’m going to take you to a new page.”

That last one is actually a good example of what Cooper is talking about here. A very traditional standard for links is blue and underlined. A lot of sites, however, get a little creative. They might, for example, ditch the underline, or use a different color, or both. In that situation, though, the link’s affordance is much easier to miss. A user might, for example, confuse a link with bolding, or a title, or nothing in particular, or what have you.

The other thing going on here is consistency. In fact, a typical corollary to this quote is, “Things that behave similarly should look similar.” 

Now, the whole point of consistency – and standards, which help deliver that consistency – is reducing cognitive load. In other words, don’t make me think! So, if users have already learned a particular affordance elsewhere – on the web, on your site, in life in general – they don’t need to learn something new. 

Just to make this concrete, I once tested a mobile app that had a lot of inconsistency. Interestingly, though, this was mainly an issue of location. For example, the Submit button was on the bottom right, the bottom left, the top right, and the top left. The app was also pretty inconsistent when it came to terminology. Submit, for example, might be “Submit,” or “Done,” or “Complete,” or just-see-your-thesaurus.
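
One simple way teams guard against that kind of drift – this is just a sketch, and all the names in it are hypothetical – is to define the primary action once, in a shared module, so individual screens can’t improvise:

```typescript
// Hypothetical sketch: one shared definition of the primary action,
// so no screen can invent "Done," "Complete," or a new corner for it.
export const PRIMARY_ACTION = {
  label: "Submit",           // one term, everywhere
  placement: "bottom-right", // one location, everywhere
} as const;

// A screen consumes the shared definition instead of hard-coding its own:
function renderCheckoutScreen(): string {
  return `[${PRIMARY_ACTION.placement}] ${PRIMARY_ACTION.label}`;
}

console.log(renderCheckoutScreen()); // [bottom-right] Submit
```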

So, it’s really not just about affordances. There are actually all sorts of ways to be inconsistent.


And if you have a door like this,

Wednesday, December 9, 2015

Fall in love with the problem, not the solution. (???)

I’ve actually seen this one in so many places, and attributed to so many people, that I’m a little leery of ascribing it to any one person. 

But what does it mean? Well, one thing that I see a lot these days is teams that are trying desperately to be innovative. Oftentimes, what this turns into is innovation just for innovation’s sake. They might, for example, focus solely on some piece of technology (the Apple Watch, say), or some new style (e.g., Flat Design), or some new function (multi-touch gestures?), or some new method (gamification, anyone?). Whether that actually does something for their users – whether it actually solves a real user problem – seems to kind of get lost in the shuffle.

What these designers don’t realize is that, one, users really don’t care. Users typically just want to get their task done. If that involves all sorts of wild and crazy stuff, fine. If it simply involves boring things like tabs and radio buttons, well, they’re fine with that too.

What these designers also don’t realize is that, if they would only focus on the user’s actual problem, they might end up being very innovative indeed. In fact, a typical follow-up to the above quote is “… and the rest will follow.” What designers really need to understand is that all that cool stuff that they often fall in love with is simply a means to an end. 

So, how to identify, and focus on, those user problems? Well, I’ve always been a big fan of ethnographic research (also known as field studies). This method looks at users in their own context (the office, a retail store, their car, the couch at home), doing their own tasks, with their own goals in mind. That way, you can identify what’s working, what’s not working, the pain points, the gaps (and that involves the user’s whole experience, not just their interaction with computers, by the way). Next, all you need to do is sit down and analyze all the data that results (good old-fashioned affinity diagramming is my favorite way to do this). You can then brainstorm – and innovate – ‘til the cows come home.


Though I really couldn’t find a source for this quote,
a lot out there seems to point to these guys
(I'm not surprised)

Thursday, December 3, 2015

The point of testing is not to prove or disprove something. It’s to inform your judgement. (Steve Krug)

Unfortunately, there are a lot of people out there who want that proof. And the way they typically look for it is through numbers.

So, first, you’ve got your academics. They tend to be “purists,” insisting on statistical significance and p-values and stuff like that. Next, you’ve got your marketing types. They’re into stats too. Finally, you’ve got your business folks. Once again, numbers types.

So, the first thing I have to do is share that I’m actually not going to be giving anyone any real numbers (or at least not the kind of numbers they’ll be expecting). Then, I have to convince them that that’s not necessarily a bad thing. Finally, I have to break it to them that, yes, they will actually have to make some tough decisions (but much less tough than if they had nothing else to go on).

In accomplishing all that, the first thing I talk about is that usability testing necessarily means qualitative data. Now, these folks typically have some familiarity with that – e.g., through focus groups – so I always make sure to reference those. From there, I go on to talk about trading numbers for richness. In particular, I like to point out that one great thing about a usability test is that you don’t have to guess at, or impute, the reasons behind user behavior. Users will tell you exactly why they didn’t click on that button, why they preferred design A to design B, why they abandoned their shopping cart … And that can be pretty valuable stuff in coming up with buttons that they will click on, designs that they will want to use, and shopping carts that they won’t abandon …

Another thing that I point out is that usability testing focuses less on user preference and more on whether something works or not. Note, though, that this does not mean QA. A system can be completely bug-free but still be impossible to use. Misnaming things, putting them in the wrong menus, burying them in footers, and so on can be just as effective at stopping a user in their tracks as a 404 page.

(And, yes, you really do need numbers for preference issues. Think of what goes into deciding whether a feature should be added to your software. How many people would want it? How many of your main user base? How badly? Usability testing really should come after that decision, and focus on whether users can actually use the feature.)

Finally, though, I simply state that I am not calling the shots here. All I am doing is providing information. Executives may have very good business reasons behind somewhat dicey design decisions. All I want to do is make sure they know all the implications of those decisions. And what I’ve often found is that executives may not even be aware that those design decisions may result in a somewhat dicey user experience, or how dicey that experience may be. But after doing testing, well, they really don't have any excuses now, do they?

Steve’s alter ego