So, first, you’ve got your academics. They’re the “purists”: they’ll insist on statistical significance, p-values, and the like. Next, you’ve got your marketing types. They’re into stats too. Finally, you’ve got your business folks. Once again, numbers people.
So, the first thing I have to do is share that I’m actually not going to be giving anyone any real numbers (or at least not the kind of numbers they’ll be expecting). Then, I have to convince them that that’s not necessarily a bad thing. Finally, I have to break it to them that, yes, they will actually have to make some tough decisions (but much less tough than if they had nothing else to go on).
In accomplishing all that, the first thing I explain is that usability testing necessarily means qualitative data. Now, these folks typically have some familiarity with that – e.g., through focus groups – so I always make sure to reference those. From there, I go on to talk about trading numbers for richness. In particular, I like to point out that one great thing about a usability test is that you don’t have to guess at, or impute, the reasons behind user behavior. Users will tell you exactly why they didn’t click on that button, why they preferred design A to design B, why they abandoned their shopping cart … And that can be pretty valuable stuff in coming up with buttons that they will click on, designs that they will want to use, and shopping carts that they won’t abandon …
Another thing that I point out is that usability testing focuses less on user preference, and more on whether something works or not. Note, though, that this does not mean QA. A system can be completely bug-free but still be impossible to use. Misnaming things, putting them in the wrong menus, burying them in footers, and so on can be just as effective at stopping a user in their tracks as a 404 page.
(And, yes, you really do need numbers for preference issues. Think of what goes into deciding whether a feature should be added to your software. How many people would want it? How many of your main user base? How badly? Usability testing really should come after that decision, and focus on whether users can actually use the feature.)
Finally, though, I simply state that I am not calling the shots here. All I am doing is providing information. Executives may have very good business reasons behind somewhat dicey design decisions. All I want to do is make sure they know all the implications of those decisions. And what I’ve often found is that executives aren’t even aware that those design decisions may result in a somewhat dicey user experience, or just how dicey that experience may be. But after doing testing, well, they really don’t have any excuses, do they?