Tuesday, October 11, 2022

There is a big difference between what people think they want to know, what they say they want to know, and what they really want to know. (Carl Zetie)

When I used to work on-site, a familiar hallway or elevator greeting for me was often, “How’s the test going?” How I answered could take many forms, depending upon who asked. The business usually just wanted to know about test logistics (“Yup, halfway through”), the design team typically wanted to hear how cute their “baby” was (“very cute” was the proper response), and fellow UEs wanted to hear the horror stories (the worse, the better).

Report-outs are very similar. In this case, though, everyone usually wants to hear just the good stuff. Oh yeah, there are always hard-nosed business types or super-experienced designers who really do want to know what needs to be fixed. In general, however, most people really want to sit back, rub their hands together & “call it a wrap.”

It’s in test plan meetings, though, that you get the really interesting responses. Often I get a lot of crickets, or blank faces. In that situation, I go straight into interview mode. Some of the things I ask about include: questions they want answered, anything that seemed particularly problematic during design, anything they argued over or are split on, anything that is crucial to the success of the product, their expected outcomes, what would constitute success to them …

Alternatively, I might get some super hair-splitting detail that often seems to come from someone’s particular bugaboo. When that happens, I typically have to explain that 1) such a detail might be rather hard to get feedback on, and 2) that’s really not what this test is for. (As for that last point, it’s not too surprising to find newbies who aren’t totally sure what a usability test is. A little education – especially that a usability test is not QA, a focus group, a survey, or an interview – is in order.)

I also have several generic topics that I can throw out anytime. These include things like navigation, content, look and feel, flow, graphics …

Finally, I also try to do my own quick review of the digital property in question beforehand and see what strikes me. Sharing my own thoughts and queries – “Do you think x, y and z are terms that this audience will know?” “Will the user know what to do on this page?” “Isn’t that link a little buried over there?” – often gets the ball rolling for the team as well.

Interestingly, I’ve even found that a test plan meeting where topics like these are addressed can sometimes help down the road as well. If the team knows upfront that there might be issues with a particular page or flow or bit of wording or what have you, they’re more likely to remember that point throughout testing and in reporting out the results.

An added benefit is that you’re often able to share some good news as well. Remember that modal we were worried about? Absolutely no issues. That content that legal and compliance insisted on throwing in there? Nobody batted an eye. That graphic you all were arguing over? The users loved it!


Carl has been doing this stuff for quite a while; has worked for IBM, Accenture, HP & Oracle; and is the author of Practical User Interface Design.