Wednesday, December 9, 2015

Fall in love with the problem, not the solution. (???)

I’ve actually seen this one in so many places, and attributed to so many people, that I’m a little leery of ascribing it to any one person. 

But what does it mean? Well, one thing that I see a lot these days is teams that are trying desperately to be innovative. Oftentimes, this turns into being innovative just for innovation’s sake. They might, for example, focus solely on some piece of technology (the Apple Watch, say), or some new style (e.g., Flat Design), or some new function (multi-touch gestures?), or some new method (gamification, anyone?). Whether that actually does something for their users, whether that actually solves a real user problem, seems to get lost in the shuffle.

What these designers don’t realize is that, one, users really don’t care. Users typically just want to get their task done. If that involves all sorts of wild and crazy stuff, fine. If it simply involves boring things like tabs and radio buttons, well, they’re fine with that too.

What these designers also don’t realize is that, if they would only focus on the user’s actual problem, they might end up being very innovative indeed. In fact, a typical follow-up to the above quote is “… and the rest will follow.” What designers really need to understand is that all that cool stuff that they often fall in love with is simply a means to an end. 

So, how to identify, and focus on, those user problems? Well, I’ve always been a big fan of ethnographic research (also known as field studies). This method looks at users in their own context (the office, a retail store, their car, the couch at home), doing their own tasks, with their own goals in mind. That way, you can identify what’s working, what’s not working, the pain points, the gaps (and that involves the user’s whole experience, not just their interaction with computers, by the way). Next, all you need to do is sit down and analyze all the data that results (good old-fashioned affinity diagramming is my favorite way to do this). You can then brainstorm – and innovate – ‘til the cows come home.


Though I really couldn’t find a source for this quote,
a lot out there seems to point to these guys
(I'm not surprised)

Thursday, December 3, 2015

The point of testing is not to prove or disprove something. It’s to inform your judgement. (Steve Krug)

Unfortunately, there are a lot of people out there who want that proof. And the way they typically look for it is through numbers.

So, first, you’ve got your academics. They will be “purists,” and will insist on statistical significance and p-values and stuff like that. Next, you’ve got your marketing types. They’re into stats too. Finally, you’ve got your business folks. Once again, numbers types. 

So, the first thing I have to do is share that I’m actually not going to be giving anyone any real numbers (or at least not the kind of numbers they’ll be expecting). Then, I have to convince them that that’s not necessarily a bad thing. Finally, I have to break it to them that, yes, they will actually have to make some tough decisions (but much less tough than if they had nothing else to go on).

In accomplishing all that, the first thing I talk about is that usability testing necessarily means qualitative data. Now, these folks typically have some familiarity with that – e.g., through focus groups – so I always make sure to reference those. From there, I then go on to talk about trading numbers for richness. In particular, I like to point out that one great thing about a usability test is that you don’t have to guess at, or infer, the reasons behind user behavior. Users will tell you exactly why they didn’t click on that button, why they preferred design A to design B, why they abandoned their shopping cart … And that can be pretty valuable stuff in coming up with buttons that they will click on, designs that they will want to use, and shopping carts that they won’t abandon ...

Another thing that I point out is that usability testing focuses less on user preference, and more on whether something works or not. Note, though, that this does not mean QA. A system can be completely bug-free but still be impossible to use. Misnaming things, putting them in the wrong menus, burying them in footers, and so on can be just as effective in stopping a user in their tracks as a 404 page.

(And, yes, you really do need numbers for preference issues. Think of what goes into deciding whether a feature should be added to your software. How many people would want it? How many of your main user base? How badly? Usability testing really should come after that decision, and focus on whether users can actually use the feature.)

Finally, though, I simply state that I am not calling the shots here. All I am doing is providing information. Executives may have very good business reasons behind somewhat dicey design decisions. All I want to do is make sure they know all the implications of those decisions. And what I’ve often found is that executives may not even be aware that those design decisions may result in a somewhat dicey user experience, or how dicey that experience may be. But after doing testing, well, they really don't have any excuses now, do they?

Steve’s alter ego 

Friday, November 20, 2015

Every new feature comes with a cost. (Shlomo Benartzi)

One of my favorite bugaboos is something I like to call “additive design.” This is where your original design – whether for a website, an app, some software, or whatever – starts out nice and clean and simple.  Gradually, however, things get added to it, and added to it, and added to it … until it falls apart.

Now, this can happen after rollout … or before. For the former, this is just a typical evolution over time. There are new features, or new content, or new audiences, and those need to be addressed somehow.  Unless the possibility of additions like these was considered beforehand – i.e., unless you sought to make your design scalable from the get-go – these things will tend to just get tacked on, oftentimes rather willy-nilly.

Something similar can also happen before the product even sees the light of day. In this case, though, it’s usually a failure of the design team to prioritize, or to push back, or to keep the big picture in mind.

Now, what really gets me about this sort of thing is the fact that only rarely do people ever even bat an eye when it comes to this stuff. In particular, no one ever seems to wonder if, by adding whatever-it-happens-to-be, there would be any particular effect on what’s already out there, or on the user’s overall experience.

And it’s not just the gestalt of the thing. As an example, edge cases often come up in reviews (from legal, risk, compliance, IT, QA …). Now, that’s a topic in itself, but the typical solution is often to add some help, or put in an extra radio button, or throw in a link, or add another option to the menu.

And what that can lead to is a screen that was formerly nice and simple and easy to process becoming one that is just a big jumble. I realize the team’s heart is in the right place, but they just don’t realize that all that “help” might come at a cost. A perfect example of unintended consequences.

To be a little concrete here … I once worked on a project team that took a simple date field and turned it into 2 date fields, some instructions, and a help link. And all of that stuff was simply to address what might happen – i.e., edge cases. Sigh …

As usual, Nielsen Norman does a much better job of explaining this than I ever could. Check out their analysis of those new-fangled soda machines right here.


Shlomo's not really a usability guru, but does seem to have developed some appreciation for the field in his book The Smarter Screen

Wednesday, October 14, 2015

Your site is not an island. You need to fit in with the rest of the web. (Keith Instone)

I am a firm believer in standards. My arguments in favor of them are typically two:
  1. Somebody else might have thought about this before
  2. Users might actually have gotten used to doing things a certain way

As a result, one of the things I love to do on any project is a quick competitive eval. I just go out and see what all our competitors are doing. This gives me a good sense of:
  • Whether there are any standards
  • How strong they are
  • Whether it would make any sense to break them

I’m always amazed, though, at how many people simply can’t wait to break standards. They usually cite innovation, and creative disruption, and whatever buzzword happens to be current.

Actually, I’ve come to the realization, over the years, that this may have more to do with personality than anything. If you’re familiar with Myers-Briggs, I’m a (weak) S. S’s tend to be more practical, down-to-earth, and data-driven. The people who I butt heads with tend to be (strong) N’s. They tend to be more abstract, intuitive, and full of ideas. 

I typically handle my N colleagues by getting them to:
  • Focus on higher-level issues, and less on details. For example, radio buttons are a pretty darn good way to tell a user that they need to make just a single choice. There’s really no need to reinvent this particular wheel. Developing a wizard to help a user pick the product that’s right for them? Now, there’s something that might actually add some real value.
  • Solve real problems rather than simply coming up with random new ideas. A colleague of mine likes to make the distinction between innovation (the former) and mere invention (the latter). 

By the by, before you can solve real problems, you have to identify them. And I’ve always been a big fan of ethnography when it comes to that.

Keith may be most famous for his work in getting going
the IA Summit, UXnet, and World Usability Day
(oh, and those glasses)

Thursday, October 1, 2015

One swallow does not a summer make. (Aesop)

This one is actually posted in the observation room of my usability lab. It’s a subtle, tasteful reminder that you might want to consider coming to more than just that one usability test, Mr. or Ms. Team Member. 

Yes, I know you’re busy. Yes, I realize that usability tests are a little bit like baseball games – lots of boredom punctuated by rare, brief flurries of excitement that are easy to miss.

You do realize, though, that that one user you saw may not be totally representative of all 10 I will be bringing in this week, right? In other words, if you happen to be at the very first session, there’s really no need to completely redesign the system (or get all defensive or jump off the roof) before the 10:30.

And when it’s time for the report out, do please be a little circumspect when you’re tempted to talk for 10 minutes about the one user who had that one problem that – hate to break this to you – nobody else actually had. The rest of us did see that person, saw a whole bunch of other people as well, and determined that that original user might just be an outlier.

Yup, that’s right. The rest of the team actually attended most of the tests. And, as a matter of fact, I personally happened to attend all of them. In addition, I was paying attention the whole time as well. Finally, I spent probably twice as much time reviewing my notes, looking at the tapes, trying to figure out what it all meant, and putting it all in a form that you could easily digest.

Now, don’t get me wrong. I’m really happy you could make that particular session. And, no, I am not taking attendance. It’s just that the more users you see, the more you’ll understand and the more refined your subsequent judgments will be. No, you don’t have to attend every darn one. Heck, 3 or 4 might be enough to give you a good idea whether what you’re seeing is representative or not. 
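That “3 or 4 might be enough” intuition actually has some math behind it. Nielsen and Landauer’s well-known problem-discovery model says that the share of usability problems you’ll have seen after n sessions is 1 − (1 − L)^n, where L is the chance that any single user runs into a given problem (their published average was about 0.31). Here’s a quick sketch – keeping in mind that 0.31 is their average, not a constant of nature:

```python
# Nielsen & Landauer's problem-discovery model: the proportion of
# usability problems observed after n test sessions, assuming each
# problem shows up for any one user with probability L.
def problems_found(n, L=0.31):  # L = 0.31 is their published average
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems seen")
```

With L = 0.31, one user surfaces about 31% of the problems, five users around 84%, and ten users nearly all of them – which is exactly why the first session of the morning is no basis for a redesign.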

(By the way, I actually have not found this to be a real problem for the actual members of the project team. Interaction designers, information architects, writers and even graphic designers are usually there for the duration. It’s often the managerial or business types who are guilty here. And that’s okay. In general, if these types take my report seriously and act on the findings, it’s not totally essential that they see it with their own eyes.)




And, no, Aesop was not blind. I understand 
the sculptor just had trouble “doing eyes.”

Friday, September 11, 2015

It doesn’t matter how many times I have to click, as long as each click is a mindless, unambiguous choice. (Steve Krug)

If there’s one thing that marketeers understand about the web, it’s that the more clicks it takes a user to get to their stuff, the worse shape the world is in. To put that as a formula, I guess it would go something like, “clicks = evil.” 

Okay, I’m joking. Seriously, the adage is usually something along the lines of, “If users have to click more than 3 times to get what they’re after, they’ll abandon your site.”

Unfortunately, it’s really not that simple. Analytics show that users click more than 3 times all the time. And surveys and usability tests typically show no decrease in user satisfaction when they do so. In fact, the issue has been tackled by such leading lights as Jakob Nielsen and Jared Spool, neither of whom has found any real correlation between number of clicks and user satisfaction.

What they did find, though, was something a little more interesting. In particular, when the user’s choices are simple and straightforward, users are happy to drill down and click away. It’s when those choices aren’t so straightforward that problems arise.

Here’s how it works ... A straightforward path doesn’t involve much more than clicking. A less straightforward one, though, forces the user to think about each step. Add it all up, and the straightforward but “longer” path may take less time than the “shorter” but more confusing one. And even if it doesn’t, the user’s subjective impression will often be that the more straightforward path took less time and effort.
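One way to put rough numbers on that is Hick’s law, the classic HCI result that decision time grows with the log of the number of (equally likely) choices. The coefficient and the page layouts below are illustrative assumptions on my part, not measurements, but they show how six mindless clicks can beat three confusing ones:

```python
from math import log2

# Hick's law: time to choose among n equally likely options grows
# as log2(n + 1). b = 0.6 seconds per bit is an assumed coefficient,
# purely for illustration.
def decision_time(n_options, b=0.6):
    return b * log2(n_options + 1)

# "Long" path: six clicks, each a clear choice between 2 options.
long_but_easy = 6 * decision_time(2)
# "Short" path: three clicks, each a muddle of 16 look-alike options.
short_but_hard = 3 * decision_time(16)

print(f"long but easy : {long_but_easy:.1f}s deciding")   # ~5.7s
print(f"short but hard: {short_but_hard:.1f}s deciding")  # ~7.4s
```

Under these (made-up) numbers, the six-click path costs less total deciding than the three-click one – and that’s before you count the time lost to wrong turns on the ambiguous pages.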

Jakob Nielsen ties it to foraging theory and something called “information scent.” Like a fox after some rabbits, we users will stick with something as long as we’re pretty sure there’ll be a payoff in the end. If that particular woods or field or website doesn’t seem too promising, though, we’ll likely abandon it for happier hunting grounds. 

Clear labels give us good information scent, encouraging us to keep clicking. Poor labels – even if the “game” happens to be only a click away – unfortunately do not. 


Actually, Steve’s a lot more friendly
than this pic would make him seem

Tuesday, September 1, 2015

Help doesn't. (Rolf Molich)

Poor help. It just doesn’t get any respect. 

Whenever I run a test and the user seems ready to give up on a particular task, I always ask what they “would do now.” Most of the time, they say they would call. Some would prefer to chat, something that seems to be a lot more popular these days than in the past. And some, of course, simply want to abandon the task altogether, shut down their device, and make themselves a nice, stiff drink.

What people rarely mention is help. And that’s a shame, because they often have a decent chance of finding an answer there.

But people have also been trained to avoid help like the plague. There was a time – and there are still many systems that follow this model – when help was a completely separate system, identified only by a little help link in the upper right-hand corner.

When users clicked on that link, another window typically covered their screen. From there, they would search or browse for their particular question. So, in other words, total task interruption, major mode switching, and lots of time spent basically starting over from scratch. No wonder people shy away from help.

Given all that, how can we get people to use help again? First, don’t call it help. Second, get it out of the upper right-hand corner of the screen. Interestingly, those two ideas are intimately related. Let me explain …

So, if you take help out of the upper right-hand corner, where are you going to put it? Well, one idea is to make it contextual, to put it on the page itself.  Face it, users typically want to get help on something they’re doing right now. They want to know what to put in that field. They want to know what button to choose. So, why not tell them then and there?

And, if you are doing that, why not go ahead and be more specific about what that help is going to be? Instead of putting “help” next to the field, why not just say, “What can I enter here?” or “Is this secure?” or whatever you think the user’s question might actually be.

A variation on this treatment is FAQs. And these are basically just all the questions the user might have on that particular page, but listed all together. Over the last few years, I’ve found FAQs have tested particularly well.

Context-sensitive and specific is definitely the way to go. Not only are users more likely to actually click on that help, but it might actually help them as well.