And that’s because the typical marketeer is very data-driven. Web analytics, VOC (voice of the customer), CSAT (customer satisfaction surveys), NPS (net promoter score), KPIs (key performance indicators) … Throw a dozen brightly colored graphs and charts on a screen, along with lots and lots of numbers, call it a “dashboard,” and these people are in heaven.
Now, a usability report does not look much like a dashboard. So there are typically a couple of points I like to make with this group.
First, this is qualitative research. They’re usually familiar with focus groups, so if you can make this connection, you’re already halfway there.
Second, like all good qualitative research, usability testing gets at the “why” of the issue. Yes, it’s nice to know that A/B testing showed a definite preference for B, but wouldn’t it be nice to know why that is? Maybe we can use a similar strategy for our next design. Could work.
Third, this kind of research is not about preferences; it’s about bugs. Now, these are not the broken links, infinite loops, and so on that marketeers may associate with QA, though to the user they might as well be. These are the obvious things – the mislabeled buttons, the hidden links, the missing help – that trip lots of people up. In fact, at this point I usually bring up the old story of the bunched-up carpet in the hallway and ask the marketeers how many people they would need to see trip over it before they would fix it.
Finally, this kind of research happens before launch. Marketeers really can’t say that about their nice, shiny statistics. In fact, once you start counting, whatever it is you want to count will already be out there for all the world to see. Wouldn’t it be nice to get a little feedback on what to expect before everyone else sees it?
Jim Lewis has been doing great work at IBM since 1981! He has a PhD and is the co-author, with Jeff Sauro, of Quantifying the User Experience: Practical Statistics for User Research.