Tuesday, June 11, 2019

Writing reports doesn’t change anything. Acting on the findings does. (William Horton)

If there’s one thing that gets under a usability engineer’s skin, it’s having their findings ignored.

Now, I realize that UEs are realists as well. Yes, we are smart enough to know that legitimate business decisions can trump all. We are also aware that schedules and deadlines might mean some change won’t make it into this release (though we do assume it will be in the next one). Finally, we realize that some changes are harder than others (though there are, of course, those developers or vendors for whom every change seems difficult and costly).

And experienced UEs also appreciate that negotiation is an inevitable part of any process. They pick and choose their battles. No use falling on your sword for a missing comma or a particular shade of yellow.

Those kinds of UEs also realize that not everything is worth changing. In fact, I think there’s no surer way of showing your greenness than by expecting that all issues are equal, and that you will get your way with everything that came up in the test just because … it came up in the test. 

(That last bit is especially a problem for UEs who think a report is a simple dump of everything that happened. I’m always amazed at the number of UEs who seem to feel obliged to report one-offs or simple, straight-up observations. Yeah, that’s interesting – especially if you’re a researcher – but it may not be that actionable for your audience.)

So, all we ask is that you seriously consider what we heard back from YOUR USERS!  Don’t like our suggested fix? That’s fine. Do address the issue somehow, though. Got a good reason for going with something else? No problem. Do tell me a little more about that reasoning, though, please.

I once did an audit (at a former company) looking at which clients actually acted on issues and suggestions from test reports and which did not. Interestingly, the client who complained the loudest and dragged their heels the most was the one who made the most changes. Conversely, the one who seemed the most enthusiastic, spoke our language, and got along with us best rarely made any.

In other words, the latter seemed to think that simply running a test was what usability testing was all about. Maybe it was an exposure thing. Maybe it was interesting in itself, but not really worth getting all worked up about. Maybe it was just magical thinking. 

The former, though, realized that their job was just beginning after a test was over, and that they were actually going to have to roll up their sleeves and do some hard work. Funny … Looking back, I think I actually preferred working with those guys.