Wednesday, December 9, 2015

Fall in love with the problem, not the solution. (???)

I’ve actually seen this one in so many places, and attributed to so many people, that I’m a little leery of ascribing it to any one person. 

But what does it mean? Well, one thing that I see a lot these days is teams that are trying desperately to be innovative. Oftentimes, what this turns into is something along the lines of being innovative just for innovation’s sake. They might, for example, focus solely on some piece of technology (the Apple Watch, say), or some new style (e.g., Flat Design), or some new function (multi-touch gestures?), or some new method (gamification, anyone?). Whether that actually does something for their users, whether that actually solves a real user problem, seems to kind of get lost in the shuffle.

What these designers don’t realize is that, one, users really don’t care. Users typically just want to get their task done. If that involves all sorts of wild and crazy stuff, fine. If it simply involves boring things like tabs and radio buttons, well, they’re fine with that too.

What these designers also don’t realize is that, if they would only focus on the user’s actual problem, they might end up being very innovative indeed. In fact, a typical follow-up to the above quote is “… and the rest will follow.” What designers really need to understand is that all that cool stuff that they often fall in love with is simply a means to an end. 

So, how to identify, and focus on, those user problems? Well, I’ve always been a big fan of ethnographic research (also known as field studies). This method looks at users in their own context (the office, a retail store, their car, the couch at home), doing their own tasks, with their own goals in mind. That way, you can identify what’s working, what’s not working, the pain points, the gaps (and that involves the user’s whole experience, not just their interaction with computers, by the way). Next, all you need to do is sit down and analyze all the data that results (good old-fashioned affinity diagramming is my favorite way to do this). You can then brainstorm – and innovate – ‘til the cows come home.


Though I really couldn’t find a source for this quote, a lot out there seems to point to these guys (I'm not surprised)

Thursday, December 3, 2015

The point of testing is not to prove or disprove something. It’s to inform your judgement. (Steve Krug)

Unfortunately, there are a lot of people out there who want that proof. And the way they typically look for it is through numbers.

So, first, you’ve got your academics. They tend to be “purists,” and will insist on statistical significance and p-values and stuff like that. Next, you’ve got your marketing types. They’re into stats too. Finally, you’ve got your business folks. Once again, numbers types.

So, the first thing I have to do is share that I’m actually not going to be giving anyone any real numbers (or at least not the kind of numbers they’ll be expecting). Then, I have to convince them that that’s not necessarily a bad thing. Finally, I have to break it to them that, yes, they will actually have to make some tough decisions (but much less tough than if they had nothing else to go on).

In accomplishing all that, the first thing I talk about is that usability testing necessarily means qualitative data. Now, these folks typically have some familiarity with that – e.g., through focus groups – so I always make sure to reference those. From there, I then go on to talk about trading numbers for richness. In particular, I like to point out that one great thing about a usability test is that you don’t have to guess, or impute, the reasons behind user behavior. Users will tell you exactly why they didn’t click on that button, why they preferred design A to design B, why they abandoned their shopping cart … And that can be pretty valuable stuff in coming up with buttons that they will click on, designs that they will want to use, and shopping carts that they won’t abandon …

Another thing that I point out is that usability testing focuses less on user preference, and more on whether something works or not. Note, though, that this does not mean QA. A system can be completely bug-less but still be impossible to use. Misnaming things, putting them in the wrong menus, burying them in footers, and so on can be just as effective in stopping a user in their tracks as a 404 page.

(And, yes, you really do need numbers for preference issues. Think of what goes into deciding whether a feature should be added to your software. How many people would want it? How many of your main user base? How badly? Usability testing really should come after that decision, and focus on whether users can actually use the feature.)

Finally, though, I simply state that I am not calling the shots here. All I am doing is providing information. Executives may have very good business reasons behind somewhat dicey design decisions. All I want to do is make sure they know all the implications of those decisions. And what I’ve often found is that executives may not even be aware that those design decisions may result in a somewhat dicey user experience, or how dicey that experience may be. But after doing testing, well, they really don't have any excuses now, do they?

Steve’s alter ego