Monday, March 14, 2022

Even a bad usability test will help improve your software. (David Travis)

Hmm, not sure how I feel about this one. And I’ve seen plenty of bad usability tests over the years.

Now, I’m a firm believer that you can always get something out of a test. And maybe that’s all that Travis is getting at here. 

I know I’ve definitely screwed up at times. When it comes to users, for example, I’ve had too few, overweighted one area, ended up with a turkey or two … And there are also times when the prototype will be a little rough. Finally, parts of my script might turn out to be less than ideal.

But those kinds of things really don’t matter all that much. For users, for example, I find it’s pretty easy to just drop the ones that don’t work out. You can always reschedule new ones or, say, just go with 8 instead of 10.

As for prototypes, if it’s something major (e.g., it won’t even load), I’ll just scratch those tests as well, get that fixed & then try again. If it’s something minor (which is much more common) I’ll just keep the data, make sure the issue gets fixed, and plow forward. That said, if it’s something that might affect very specific results, I might just throw out the data for those initial users, but write up the issue anyway.

Something very similar can happen with script issues. To be honest, though, I’ve been doing this for 30+ years, so this is probably the thing that comes up the least.

Now, you’re probably thinking to yourself, "Isn't there something missing here?" And, yes indeed, there is – facilitation. Now, having stopped counting users at 5,000, I’ve pretty much seen everything. Though I did have some slip-ups & befuddlements in my earlier years, I feel pretty confident I can prep for & handle anything now.

If, however, you’re not a seasoned veteran, facilitation can be a real issue. Are you biasing the user? Are you giving away the answer? Are you interacting too much, turning your usability test into a get-together with an old friend? Now, these can be biggies.

Something similar can happen with your screener too. A poor screener will simply give you the wrong audience, and what’s a problem for the people you actually recruited may not be a problem for your true audience (and vice versa, of course). But how would you ever know?

As for scripts, the cardinal sin here is typically giving the game away in the scenarios. You know, pseudo-tasks like "go to page x" instead of giving the user a very good reason to go to page x (you want to buy y, you need to transfer money from a to b, you need to get from your house to place z …).

For prototypes, it’s mainly a question of whether the prototype supports your tasks. And let’s hope those tasks are the right ones as well – common tasks, high-impact tasks & tasks that get at the questions your project team wants answered. In other words, if you don’t have the right tasks, you’ll have incomplete (and possibly misleading) results.

You know, it could be just about anything in your test plan, to tell you the truth. What happens, for example, if your usability test should really be a focus group, or in-depth interviews, or an unmoderated test instead of a moderated one?

Heck, though, if you’ve got the methodology wrong, I’m not sure David’s quote still stands. It’s hard to get orange juice out of an apple.