Tuesday, March 7, 2023

Anything seems possible if you don't know what you're talking about. (Anonymous)

“So, we need to slice and dice those users by age, household income, geography, gender, eye color, and Zodiac sign. And can we get 50 of each? And they’ll need to do all 63 tasks in that email I sent you 10 minutes ago. Oh, and can we get a full report by tomorrow?”

You know, sometimes it’s best if the user researcher leads the research effort. I don’t know. Just an idea. Might work.

I must admit, this only seems to come from folks who are brand new to usability. You know, the people who think usability testing is QA, or a focus group, or a marketing survey, or a combination of all three, or whatever.

So, education is definitely key in these situations. How to go about that though?

I’m always tempted to just say I’ve been doing this for 35 years, stopped counting at 4,000 users, and actually know what I’m doing. So, just shut up and let me do my job!

Now, that would be fun. There are, though, a couple of things that are a little more realistic that I’ve found particularly useful over the years. 

One trick, if there are some unfamiliar names and faces in the meeting, is simply to ask whether everyone’s familiar with usability research. If not, I can head a lot of this sort of thing off at the pass by giving a nice, simple, accurate definition. 

Another is to make sure I’m not alone. In other words, ensure that there are some people at the meeting who do know what a usability test is. Encourage them to pipe in. If they’re higher up the food chain than you, have them preface the meeting with a few words. In other words, feel free to gang up on your troublemakers. There is strength in numbers. 

And if it really comes down to it, my trump card is to ask, in the politest way possible of course, how they would feel if I were to create their wireframes for them, or write their content deck, or make their Gantt chart. We’re all professionals here, right? 



Thursday, February 2, 2023

Don’t fall in love with your work – fall in love with your users. (Me)

At least I think that quote’s mine. A quick Google search did turn up something similar from Dana Chisnell: “Want your users to fall in love with your designs? Fall in love with your users.” Great minds think alike, I guess.

Now, my question is, Why does this even need to be pointed out? The whole idea of being user-centric just seems so very basic.

As an official UX old fart, maybe I can give a little insight into this. Interestingly, it actually wasn’t always that way.

One change I’ve seen over the years is the rise of digital creatives. Back in the day, there simply weren’t that many of them around. And those that were there were often stuck doing things like writing manuals or designing the packaging that your fancy new software came in.

One thing I’ve noticed about today’s digital creatives is that they don’t always have their beginnings in UX. Now, that’s nothing new in this field. I myself have a background in English. Other researchers I know come from physics, education, philosophy, linguistics, journalism, even theatre …

A very common background for digital creatives these days seems to be from working at ad agencies. In that particular context, users are definitely part of the equation. Often, though, that focus is a little limited. Will someone open up this email? Will they pay attention to that commercial? Will anybody bother to pick up and read this brochure?

More important, though, is the client. As long as they’re happy, you’re golden … and pretty much ready to move on to your next client.  

Now, in house, marketing is definitely tracked. That, though, has always been kind of a once-it’s-out-there thing. It’s not really something that’s done beforehand, which is what usability testing is all about.

Up until launch, though, decisions seem to be very intuitive. Yes, what you’ve come up with will get batted around in a crit or some other meeting. It’s all still, though, very much at a gut level. 

Another issue is that agency experience focuses very much on selling. Now, there’s definitely a need for that on every website. There are, though, much more practical things to worry about – can I pay my bill, order tickets, make an appointment, find some particular piece of information …

And, here, the emphasis is less on how clever or appealing your design or words might be and more on whether it works – whether the user can get their task done. And a good way to do that is to focus less on yourself and your work and more on whoever your design is meant for. 

Happy Valentine’s Day!



Monday, January 9, 2023

Every time I hear an executive demand that someone "produce the next iPod," I want to respond, "Can you make yourself the next Steve Jobs?" (Carl Turner)

To be honest, all I really want is for the people at the “top of the house” (I hate the business jargon, but you know what I mean) to just leave me alone.   

And, over the years, I’ve been pretty lucky. There was only one time where I really had to deal with meddlesome execs.  Boy, though, was it a doozy.

What I was dealing with was a C-suite exec who wanted to incorporate some social media trope on a financial services site. I won’t tell you what that was, as it might give the game away. So, let’s just call them “awards.”

I think the idea was to gamify the boring task of handling one’s money. Now, there’s nothing wrong with gamification. It just needs to be done in the right context. Social media is a great example of that context, as are things like self-improvement, learning, and so on. Finances, however, can be a little dicey. 

Looking at an old report, I see that my users agreed with me: “You know, like I don't really expect to have to earn an award. You know, this isn't a game – this is my bank account.”

They also pointed out how task-oriented they typically are on a finance site: “I’m here for one real purpose.  This is just extra noise that I don’t need when I’m really just trying to do some online banking.”

They also mentioned that they wanted to get something out of it:

  • “It doesn’t look like I gain anything from them, which is another reason why they’re not super interesting to me.  If it was something like, once you’ve been a member for two years, you’ll get a higher interest rate, then I might be interested in the badges.”
  • “So, unless it's gonna be pertinent to things that are happening to me that are affecting my bank account, I probably wouldn't wanna see them over there.”

Finally, users felt a little talked down to: “It’s like they're trying to gamify saving, which I feel like is kind of silly, and like they're treating me kind of like a child.”

Did they pay attention to my results? Of course not. It launched with few if any changes.

What’s interesting is that, once it was launched, the feedback was about the same. I lost track of what happened from there, I’m afraid.

Here’s hoping, though, that the exec came to understand that it wasn’t such a brilliant idea after all, and that he should leave that sort of stuff to the professionals he’s hired and pays good money to. Heck, I could have let him know both of those things right off the bat, and saved the company a lot of time and money.

Carl’s LinkedIn page lists him as a “chaos wrangler,” though I know him better as a usability engineer.


Friday, December 2, 2022

Research that isn’t shared is research that hasn’t been done. (Lindsey Redinger)

It’s so tempting to just throw a research report over the transom. I’ve usually got another project in the works (if I’m not already juggling it), and it’s so much easier to not deal with politics and difficult personalities.  I just want to say, “Here’s the results,” and move on.  Prep, facilitation, and analysis are a lot more fun, and much more basic to what I do.

I figure the least I can do, though, is hold a meeting to go over the results.  It’s important to field questions, to make sure everyone understands what I’m trying to get across, and to make sure there are no outstanding issues.

But, you know, for that to go over well, it really does help to get the team involved before that point.  And that means observers, and debriefs, and topline reports, and sharing tapes, and updates in team meetings.  Heck, it all really starts with good intake and planning meetings.

But it’s not all about the stuff that happens up to the report either. What happens after is just as important.  I’ve found that if you really want to have an impact, and actually have someone address the issues that came up in your research, there is no shortage of follow up you have to do.  And that means design meetings, and prioritization meetings, and technical meetings, and emails, and Slack messages, and hallway discussions.  

Writing a good report is kind of like getting a fish on the line.  That’s nice, but you’ve also got to hook him, land him, filet him, and cook him.  Now, I’m not sure how a fishing metaphor got in here but, heck, I’ll take it.

Maybe a better way to look at it is through writing fiction (something my wife does, and where that expression "throwing it over the transom" comes from).  It’s not enough just to send it to a publisher.  You may have to snag an agent first.  And to snag an agent, you’re going to have to do some schmoozing.  These days, you may even have to print, sell, and market it yourself.  I’m not sure an Emily Dickinson would have much luck in 2022.

To get back to UX research though …  One thing I’ve found over the years is that research ≠ report.  Honestly, sometimes it seems that once you’ve put your final touches on that great report of yours, your job has only really just begun.

Lindsey is head of research at Etsy.


Friday, November 4, 2022

Love means never having to say you’re sorry. (Love Story)

Whuh? Huh? What the heck does that have to do with usability?

Oh, also, it’s a terrible quote. My wife and I have been married for almost 30 years. We say “sorry” quite a bit (and “thank you” as well).

Now, if this means you’re automatically forgiven, I guess that’s okay. They could have been a little more explicit though. Can you tell I'm an engineer?

I digress … How exactly does this apply to user research? It reminds me of my users. I’m sure we’ve all experienced – and probably been frustrated by – test participants who say things like, “I’m not good at computers” or “I shoulda got that” or “Oh, that’s my fault.”

Heck, the type even made it onto my typology of users. They’re called the “Charlie Brown” type. Note to millennials … Charlie Brown was the main character in the Peanuts comic strip. He was a sad sack famous for his bad luck, his morose disposition & for blaming himself when things went wrong.

I approach this user in several, graduated ways. If it’s just once or twice, I just let it go. If it comes up again, I usually give them a little encouragement – you know, “You’re doing just fine,” “This is good feedback,” “This is a test of the system, not of you” … 

If it’s still persistent (and these users can be persistent) and really starting to get in the way, I usually go into full pep talk mode. And that’s something along the lines of, “You can do no wrong here today. If there’s an issue, it’s an issue with the [site / app / software]. And I want to know about it.” I might also mention that they are the perfect user for this system, and if they can’t use it, other people won’t be able to either.

When it comes down to it, I really do love my users. All I want to do is make them feel comfortable sharing their thoughts – and never having to say they’re sorry for anything.




Tuesday, October 11, 2022

There is a big difference between what people think they want to know, what they say they want to know, and what they really want to know. (Carl Zetie)

When I used to work on-site, a familiar hallway or elevator greeting for me was often, “How’s the test going?”  How I answered could take many forms, depending upon who asked. The business usually simply wanted to know about test logistics (“Yup, halfway through”), the design team typically wanted to hear how cute their “baby” was (“very cute” was the proper response), and fellow UEs wanted to hear the horror stories (the worse, the better).

Report-outs are very similar. In this case, though, everyone usually wants to hear just the good stuff. Oh yeah, there are always hard-nosed business types or super-experienced designers who really do want to know what needs to be fixed. In general, however, most people really want to sit back, rub their hands together & “call it a wrap.”

It’s in test plan meetings, though, where you get the really interesting responses. Often, I get a lot of crickets, or blank faces. In that situation, I go straight into interview mode. Some of the things I ask about: questions they want answered, anything that seemed particularly problematic during design, anything they argued over or are split on, anything that’s crucial to the success of the product, their expected outcomes, what would constitute success for them …

Alternatively, I might get some super hair-splitting detail that often seems to come from someone’s particular bugaboo. In this case, I typically have to explain how 1) that might be rather hard to get feedback on, and 2) that’s really not what this test is for. (As for that last point, it’s not too surprising to find newbies who aren’t totally sure what a usability test is. A little education – especially that a usability test is not QA, a focus group, a survey, or an interview – is in order.)

I also have several generic topics that I can throw out anytime. These include things like navigation, content, look and feel, flow, graphics …

Finally, I also try to do my own quick review of the digital property in question beforehand and see what strikes me. Sharing my own thoughts and queries – “Do you think x, y and z are terms that this audience will know?” “Will the user know what to do on this page?” “Isn’t that link a little buried over there?” – often gets the ball rolling for the team as well.

Interestingly, I’ve even found that a test plan meeting where topics like these are addressed can sometimes help down the road as well. If the team knows upfront there might be some possible issues with a particular page or flow or bit of wording or what have you, they’re more likely to remember that point throughout testing and in reporting out the results. 

I also find an added benefit is that you’re often able to share some good news as well. Remember that modal we were worried about? Absolutely no issues. That content that legal and compliance insisted on throwing in there? Nobody batted an eye. That graphic you all were arguing over? The users loved it!


Carl has been doing this stuff for quite a while; has worked for IBM, Accenture, HP & Oracle; and is the author of Practical User Interface Design.

Tuesday, September 20, 2022

If I had asked people what they wanted, they would have said faster horses. (Henry Ford)

Ah, yes, innovation. Where does it come from? It never seems to result when you simply ask users what they want. They simply don’t know, or would never be able to articulate it if they did. 

Steve Jobs said something very similar: “A lot of times, people don't know what they want until you show it to them.” I’ve already covered this elsewhere, so I’d like to try and put this idea in a little different context this time, a context a little closer to a user researcher’s heart.

I often get clients coming to me asking for faster horses. And by that, I simply mean that they ask me for what they already know. Usually, this means a survey, an interview, or a focus group. Everybody’s heard of those, right? 

In this situation, it’s my job to ask questions, to get at what they’re really after, to ignore the how and focus on the what and why. As a result, I will typically be giving them something new and innovative, something that they may never have heard of before. And what I’ll be suggesting to them is a usability test. 

Now, a usability test may not be all that new and innovative – if you’re a user researcher, that is.  For them, though, it definitely can be.

And that brings another thought to mind. We already have so many tools in the toolbox. Why is it so important for researchers to always be coming up with some new method?

Yes, new tools do definitely come along. And it’s super-important to be aware of them, add them to your arsenal, and be able to apply and use them in the right context.

I really do think, though, that innovation in research tools is overemphasized. Why might that be? I guess it’s a combination of not-invented-here syndrome, looking good at your performance appraisal, impressing the higher-ups, giving your small consultancy a differentiator, wanting to get a publishing credit, etc.

The toolbox, though, is pretty well jam-packed with a number of tried-and-true, absolutely brilliant methods that have already passed the test of time and usefulness across the industry – usability testing itself, remote testing, unmoderated testing, ethnography, card sorts … I think the real skill is in understanding all the available tools, educating your clients on those, picking the right one & doing a bang-up job applying it.

BTW, I find this quote particularly rich coming from Henry Ford. I mean, wasn’t he the same guy who said, “You can have them in any color you want, boys, as long as they're black”?


His real innovations came in the factory.