Friday, December 2, 2022

Research that isn’t shared is research that hasn’t been done. (Lindsey Redinger)

It’s so tempting to just throw a research report over the transom. I’ve usually got another project in the works (if I’m not juggling several already), and it’s so much easier not to deal with politics and difficult personalities. I just want to say, “Here are the results,” and move on. Prep, facilitation, and analysis are a lot more fun, and much more central to what I do.

I figure the least I can do, though, is hold a meeting to go over the results. It’s at least important to field questions, to make sure everyone understands what I’m trying to get across, and to make sure there are no outstanding issues.

But, you know, for that to go over well, it really does help to get the team involved before that point.  And that means observers, and debriefs, and topline reports, and sharing tapes, and updates in team meetings.  Heck, it all really starts with good intake and planning meetings.

But it’s not all about the stuff that happens up to the report either. What happens after is just as important.  I’ve found that if you really want to have an impact, and actually have someone address the issues that came up in your research, there is no shortage of follow up you have to do.  And that means design meetings, and prioritization meetings, and technical meetings, and emails, and Slack messages, and hallway discussions.  

Writing a good report is kind of like getting a fish on the line.  That’s nice, but you’ve also got to hook him, land him, filet him, and cook him.  Now, I’m not sure how a fishing metaphor got in here but, heck, I’ll take it.

Maybe a better way to look at it is through writing fiction (something my wife does, and where that expression "throwing it over the transom" comes from).  It’s not enough just to send it to a publisher.  You may have to snag an agent first.  And to snag an agent, you’re going to have to do some schmoozing.  These days, you may even have to print, sell, and market it yourself.  I’m not sure an Emily Dickinson would have much luck in 2022.

To get back to UX research though …  One thing I’ve found over the years is that research ≠ report.  Honestly, sometimes it seems that once you’ve put your final touches on that great report of yours, your job has only really just begun.

Lindsey is head of research at Etsy


Friday, November 4, 2022

Love means never having to say you’re sorry. (Love Story)

Whuh? Huh? What the heck does that have to do with usability?

Oh, also, it’s a terrible quote. My wife and I have been married for almost 30 years. We say “sorry” quite a bit (and “thank you” as well).

Now, if this means you’re automatically forgiven, I guess that’s okay. They could have been a little more explicit though. Can you tell I'm an engineer?

I digress … How exactly does this apply to user research? For me, it kind of reminds me of my users. I’m sure we’ve all experienced – and probably been frustrated by – test participants who say things like, “I’m not good at computers” or “I shoulda got that” or “Oh, that’s my fault.”

Heck, the type even made it onto my typology of users. They’re called the “Charlie Brown” type. Note to millennials … Charlie Brown was the main character in the Peanuts cartoon. He was a sad sack character famous for his bad luck, morose disposition & for blaming himself when things went wrong.

I approach this user in several graduated ways. If it’s just once or twice, I just let it go. If it comes up again, I usually give them a little encouragement – you know, “You’re doing just fine,” “This is good feedback,” “This is a test of the system, not of you” … 

If it’s still persistent (and these users can be persistent) and really starting to get in the way, I usually go into full pep talk mode. And that’s something along the lines of, “You can do no wrong here today. If there’s an issue, it’s an issue with the [site / app / software]. And I want to know about it.” I might also mention that they are the perfect user for this system, and if they can’t use it, other people won’t be able to either.

When it comes down to it, I really do love my users. All I want to do is make them feel comfortable sharing their thoughts – and never having to say they’re sorry for anything.


Tuesday, October 11, 2022

There is a big difference between what people think they want to know, what they say they want to know, and what they really want to know. (Carl Zetie)

When I used to work on-site, a familiar hallway or elevator greeting for me was often, “How’s the test going?”  How I answered could take many forms, depending upon who asked. The business usually simply wanted to know about test logistics (“Yup, halfway through”), the design team typically wanted to hear how cute their “baby” was (“very cute” was the proper response), and fellow UEs wanted to hear the horror stories (the worse, the better).

Report-outs are very similar. In this case, though, everyone usually wants to hear just the good stuff. Oh yeah, there are always hard-nosed business types or super-experienced designers who really do want to know what needs to be fixed. In general, however, most people really want to sit back, rub their hands together & “call it a wrap.”

It’s in test plan meetings, though, where you get the really interesting responses. Often, I get a lot of crickets, or blank faces. In that situation, I go straight into interview mode. Some of the things I ask involve: questions they want to get answers to, anything that seemed particularly problematic during design, anything that they argued over or are split on, anything that is crucial to the success of the product, their expected outcomes, what would constitute success to them …

Alternatively, I might get some super hair-splitting detail that often seems to come from someone’s particular bugaboo. In this case, I typically have to explain how 1) that might be rather hard to get feedback on, and 2) that’s really not what this test is for. (As for that last point, it’s not too surprising to find newbies who aren’t totally sure what a usability test is. A little education – especially that a usability test is not QA, a focus group, a survey, or an interview – is in order.)

I also have several generic topics that I can throw out anytime. These include things like navigation, content, look and feel, flow, graphics …

Finally, I also try to do my own quick review of the digital property in question beforehand and see what strikes me. Sharing my own thoughts and queries – “Do you think x, y and z are terms that this audience will know?” “Will the user know what to do on this page?” “Isn’t that link a little buried over there?” – often gets the ball rolling for the team as well.

Interestingly, I’ve even found that a test plan meeting where topics like these are addressed can sometimes help down the road as well. If the team knows upfront there might be some possible issues with a particular page or flow or bit of wording or what have you, they’re more likely to remember that point throughout testing and in reporting out the results. 

I also find an added benefit is that you’re often able to share some good news as well. Remember that modal we were worried about? Absolutely no issues. That content that legal and compliance insisted on throwing in there? Nobody batted an eye. That graphic you all were arguing over? The users loved it!


Carl has been doing this stuff for quite awhile; has worked for IBM, Accenture, HP & Oracle; and is the author of Practical User Interface Design.

Tuesday, September 20, 2022

If I had asked people what they wanted, they would have said faster horses. (Henry Ford)

Ah, yes, innovation. Where does it come from? It never seems to result from simply asking users what they want. They don’t know, or would never be able to articulate it if they did. 

Steve Jobs said something very similar: “A lot of times, people don't know what they want until you show it to them.” I’ve already covered this elsewhere, so I’d like to try and put this idea in a little different context this time, a context a little closer to a user researcher’s heart.

I often get clients coming to me asking for faster horses. And by that, I simply mean that they ask me for what they already know. Usually, this means a survey, an interview, or a focus group. Everybody’s heard of those, right? 

In this situation, it’s my job to ask questions, to get at what they’re really after, to ignore the how and focus on the what and why. As a result, I will typically be giving them something new and innovative, something that they may never have heard of before. And what I’ll be suggesting to them is a usability test. 

Now, a usability test may not be all that new and innovative – if you’re a user researcher, that is.  For them, though, it definitely can be.

And that brings another thought to mind. We already have so many tools in the toolbox. Why is it so important for researchers to be always coming up with some new method?

Yes, new tools do definitely come along. And it’s super-important to be aware of them, add them to your arsenal, and be able to apply and use them in the right context.

I really do think, though, that innovation in research tools is overemphasized. Why might that be? I guess it’s a combination of not-invented-here syndrome, looking good at your performance appraisal, impressing the higher-ups, giving your small consultancy a differentiator, wanting to get a publishing credit, etc.

The toolbox, though, is pretty well jam-packed with a number of tried-and-true, absolutely brilliant methods that have already passed the test of time and usefulness across the industry – usability testing itself, remote testing, unmoderated testing, ethnography, card sorts … I think the real skill is in understanding all the available tools, educating your clients on those, picking the right one & doing a bang-up job applying it.

BTW, I find this quote particularly rich coming from Henry Ford. I mean, wasn’t he the same guy who said, “You can have them in any color you want, boys, as long as they're black”?


His real innovations came in the factory


Friday, September 2, 2022

Stop trying to help. You’re making it worse. (Jenny Lawson)

Remember pop-ups? Remember the bad old days before pop-up blockers? Man, talk about something that totally shot the user experience.

Well, believe it or not, they’re still around. This time, though, they’re for the site you’re already on. Sign up for our email (which you already get)! Watch this dumb video (about the article you’re already halfway through)! Don’t leave us (though you already know I’ll be back tomorrow)!

They remind me of commercials for the TV channel you’re already watching. What’s the matter – couldn’t you sell any advertising time? Is the content so weak that you think I’ll never be back? Don’t I already watch this channel all the time anyway?

I mean, this sort of thing could be helpful for the user/viewer. Maybe I’m new, and need to know all that you have to offer. Maybe I do want to follow you on social media. Maybe there is related content that I might be interested in.

And, then again, maybe not. My guess is that all these distractions are more helpful for the business than the user. Indeed, it’s a fine line between helping the business and annoying the user.

Unfortunately, it seems this particular issue has also bled from simple marketing into actual functionality. There certainly are a lot of things popping up within software these days that would fit the bill. It seems I can't move my cursor around without a million little things invading my screen – coachmarks, tool tips, little messages, menus …

Take, for example, the MS taskbar. If I move my cursor down that way, little pics show me what’s currently open in that particular application. Unfortunately, they stay on the screen until I click elsewhere. (And, whatever you do, don’t click the x in the corner of the pic – it’ll shut that app down.)

While bashing MS, let me also include the Design Ideas that pop up every time I start a PowerPoint presentation. I’ve never used these and doubt I ever will. Could they just not show them? At least let me say “don’t show it to me again” the first time I see it, instead of forcing me to wait until the second. But, then again, what do you expect from the folks who brought you Clippy?

Now, let’s take a look at some folks I really respect, and whose stuff I use all the time – UserTesting. Par exemple, when I go to the timeline at the bottom of a tape (to go to a certain spot, to make a clip, etc.), I have to move my cursor through something I think they call the “sentiment bar.” It includes what they think are positive or negative comments, shown by little green and red markers. Clever idea, but I don’t really use it, and every time I go down to the timeline, the actual good or bad comments pop up and get in my way.

Now, this all might seem a little niggly, but for me at least, it’s typically death by a thousand cuts, if not the ol’ Chinese water torture. 

Once again, it seems we’ve got something that’s possibly useful, but really probably just distracting. In a way, it’s really just marketing for the software. I’m marketed to enough already. Please limit your invasions to a minimum in what I once thought was a safe space – the software I use to do my job.

Jenny Lawson turned mom blogging into a career as an author


Friday, August 26, 2022

It’s a poor carpenter who blames his tools. (anonymous)

Once again, I beg to differ. 

Internal tools, for example, are notoriously hard to use. They’re typically sold on feature sets and price, with the poor employee who has to use the thing usually having no input whatsoever. And that unfortunate person can’t simply walk away from their tool the way a consumer can abandon a bad website or shop around to upgrade some personal software. Internal tools can have a surprisingly long shelf life. 

Now, all that is pretty much a given. Everyone knows that even the best companies are not going to give the same effort and attention to internal tools that they will to customer-facing ones. It’s just a fact of life.

What I really don’t like, though, is the rather blasé, blame-it-on-the-user attitude that this quote implies. If it’s not something that we would ever do with customers, why is it so okay with internal users? 

As an anonymous programmer said on a discussion group I found:

“Da Vinci with a mop and a bucket of mud may be a better painter than you, but he would never beat Da Vinci with quality tools.”

And how many Da Vincis are you surrounded by at work? Also, isn’t ease of use part of “quality” anyway? 

I guess I’m commenting on this now because I’ve been seeing a lot of this lately in the newer tools that come out. Every tool out there seems to have so many implicit functions, so many cryptic icons, and so little text explaining anything. And they all seem to assume we are all expert users who use their platform all the time (and likely no others). 

Building on top of that is the number of tools the average employee is expected to master. For me, it’s the Microsoft Office suite, plus a dozen usability and market research tools, some designer tools (InVision, Sketch, Axure …), plus “productivity” tools (Jira, Confluence, Trello …), just as many “communication” tools (Slack, Teams, email …), and dozens of internal nightmares (i.e., different vendors for travel, expenses, training, benefits, insurance …).

Maybe the issue is really just TMT – too many tools!  So, take 100 different tools (all with competing UIs – and most of very questionable quality), mix together, shake vigorously, and watch the chaos ensue!  



Thursday, July 21, 2022

Usability is not everything. If usability engineers designed a nightclub, it would be clean, quiet, brightly lit ... But nobody would be there. They would all be down the street at Coyote Ugly pouring beer on each other. (Joel Spolsky)

I beg to differ.  ;^)

Now, I do like to share quotes here that might be a little critical. I also, though, like to answer these, to give my own take on them, to maybe raise some counterclaims – especially if they’re pretty popular, like this one here.

I also like being an engineer. Yup, I’m a little old-fashioned that way. Sure, I normally call myself a “user researcher,” but something really needs to be said for being an engineer. For one, it’s important to know that what I’m telling you is really not just my opinion. There’s something behind all this. (Heck, my results might even be replicable!)

At the same time, I’m not a pure, academic researcher. My focus is on the practical. I’m also involved quite closely in design. “Engineer” perfectly describes this role for me.

I’m also pretty good about staying in my lane. I let designers and writers get as clever and innovative and “kewl” as they like – as long as it doesn’t interfere with usability. 

There’s something more important here, though, that really gets under my skin. And that’s context. There are many sites and apps and software applications, with many different users and many different purposes. 

Most of the time, however, users want to do something very practical – find info, buy tickets, transfer money, get directions to somewhere … I’m not sure any of these folks want beer poured on them.

Now, that said, there is certainly room for some fun. Once again, though, it all depends on the context. A medical system that’s used in the ER? Probably not. Another dating app? Sure, why not.

Actually, my guess here is Spolsky was probably having a little fun with this quote. I’m sure he fully understands the value – and limitations – of usability engineers. There sure aren’t any usability engineers (or interaction designers) I know of who are designing nightclubs.

Joel’s a software engineer – he should know all about this stuff!


Monday, June 27, 2022

If you define yourself by your opinions, questioning them is a threat to your integrity. (Adam Grant)

Boy, do I run into a lot of defensiveness. Usability feedback almost always seems to generate some personal sensitivity. Heck, if this was my quote, I'd probably substitute "self" for "integrity."

But that’s understood. The design I might have been testing is usually at least somebody’s baby. And nobody likes it when their baby gets called ugly.

Now, I do try to mitigate that by including positive results as well; by being diplomatic with feedback overall; and by making sure that any feedback is backed up by numbers, quotes, and clips (and triangulation with other findings, and 3rd party research, and whatever else I can muster …).

There are some team members, though, who seem to always take user feedback as a personal affront. Over the years, I have sensed an inverse correlation between a designer or content person’s skill/experience/maturity and their likelihood of being offended. In fact, I’ve often joked that the best of these folks can’t wait to get something in front of users, while the worst will do everything in their power to make sure testing doesn’t happen (or that the results get ignored).

I guess my advice for the latter would be two-fold. First, have some basis for your design or content. Don’t just dash off and come up with something “kewl.” Have some reason why you’re using a new layout, or why you’re foregrounding that particular bit of content or, really, any decision that you’ve had to make. Show that you’ve actually thought about this, that you’ve considered things from multiple sides, that you’ve done your homework. 

And as Grant points out, one of the best things you can do in this regard is to question your own opinions yourself. Don’t just wait for others to weigh in. 

Now, be sure to be open to others’ feedback as well. But, if you’ve already anticipated some of that, you’ll probably find yourself a lot less defensive, as well as a lot more confident in your opinions (and with good reason, this time!). And remember, it’s not about justification! It’s about being well-informed, and being able to engage in a good give-and-take.

Second, nothing is set in stone. Sure, you’ve probably come up with something pretty decent. Others, though, might be able to catch something you overlooked. And though peers, management, clients, and SMEs are important here (crits can be great), definitely don’t ignore the user.  

In fact, try to simply start thinking of them first. I’ve found that if you get that part right, it’s a lot easier to convince the other stakeholders as well. In general, all the other pieces just seem to fall into place.

Finally, just remember that your opinion is not you. Actually, another way to think about that is a little counter-intuitive … Your opinion should be a lot like you, an ideal you – open-minded, dynamic, open to change, ever evolving.

Adam is a prof at Wharton Biz School, where he specializes in organizational psychology

Thursday, June 2, 2022

Don’t make me read. (William Horton)

I think we’re all familiar with Steve Krug’s, “Don’t make me think.” I think a necessary corollary to that, though, is this gem. 

It’s kinda funny … I actually thought I had come up with this on my own. A quick search of the Interwebs, though, showed me that a lot of people liked to share this one (and maybe even thought they had come up with it themselves as well).

As far as I could ascertain, though, it seemed like Horton might have been the first. And he has indeed been around for quite a while. 

I’ve probably got half a dozen posts on here that touch on this very topic. I won’t repeat those here. To be honest, I’m mainly including this quote because it sums them all up so well.

It also, though, brings up an interesting thing about Horton (and me as well). Though he was one of the founding fathers when it came to usability and usability-centered design, he quickly moved to instructional design, especially e-learning. I was also formerly an instructional designer, read his books way back when, and still remember him fondly.

And what that really goes to show is that reading is a not-so-popular activity in many situations. In training, though, this is particularly the case, and instructional designers have come up with many ways to address this. This is, in fact, behind what are called “multiple modalities.” And that’s just a fancy way of saying some people like to read (like in a textbook), some like to listen (say, in a classroom), others to view it (e.g., in a video), and some to actually try it themselves.

And that is something that I’d like to see more of in online content in general, whether on a public site or an authenticated one. But that idea is something that instructional design anticipated as well.

It actually ties back to an idea called “Electronic Performance Support Systems.” Introduced back in the early 90s by Gloria Gery, it simply posits that anything that can support the user – text, video, exercises, sand boxes, knowledge management systems, Slack channels, whatever – should be right there, at the user’s fingertips. 

So, what this means is that making content usable is a lot more than just simply editing text for length, or scannability, or hiding less important info behind links, or using a pyramid structure … What it really means is thinking about the user, and fulfilling their needs wherever they are, and whoever they might be. One thing it’s definitely not about is any kind of silos.

Horton actually moved away from text altogether, switching from e-learning to making a living as a photographer

Friday, May 13, 2022

Visually simple appearance does not result in ease of use. (Don Norman)

Just ran into this the other day. Basically, some of my designers wanted to combine some features on a stock screener, a page where you can enter multiple search criteria to find a stock you might be interested in purchasing.

In particular, they wanted to combine the search fields with x’s, to remove any particular criterion from the search. Before, we had made that a little bit more explicit with pills above the search results table to indicate the user’s search criteria. Their argument was that the pills had cluttered up the screen. 

I saluted their efforts, but brought up the idea that, though they had saved some space, they might have also made things more difficult to figure out. Will users see the x? Will they understand what it does? Will they get what is basically now a modes issue? Norman goes on to say that “simple appearance can make control more difficult, more arbitrary, require memorization, and be subject to multiple forms of error.”

I actually see this quite a bit. Another example is hiding controls – e.g., only showing them when the user mouses over them. Very subtle, minimal design is another favorite of designers. 

For the latter, I lay a lot of the blame on the Metro design style. That style makes quite a bit of the UI rather implicit. Can I click on this? Is this a header or a link? What does that icon mean? Is this supposed to be a tab? What happens if I mouse over here? To designers, though, it all just looks “sleek,” “elegant,” and “gorgeous.”

Whatever they happen to be coming up with, the designers I work with never quite seem to get my argument. That’s kind of ironic, as they love to talk about the idea of “affordance” (I think they may have picked that up from me). Unfortunately, I’m not quite sure they make the connection between the “clunky” things they are so busy getting rid of and the affordance that those things actually provide. 

Don and his famous cap

Monday, March 14, 2022

Even a bad usability test will help improve your software. (David Travis)

Hmm, not sure how I feel about this one. And I’ve seen plenty of bad usability tests over the years.

Now, I’m a firm believer that you can always get something out of a test. And maybe that’s all that Travis is getting at here. 

I know I’ve definitely screwed up at times. When it comes to users, for example, I’ve had not enough, overweighting in one area, a turkey or two … And there are also times when the prototype will be a little rough as well. Finally, I might also find that parts of my script might be less than ideal.

But those kinds of things really don’t matter all that much. With users, for example, I find it’s pretty easy to just delete the ones that don’t work out. You can always schedule new ones or, say, just go with 8 instead of 10.

As for prototypes, if it’s something major (e.g., it won’t even load), I’ll just scratch those tests as well, get that fixed & then try again. If it’s something minor (which is much more common), I’ll just keep the data, make sure the issue gets fixed, and plow forward. That said, if it’s something that might affect very specific results, I might just throw out the data for those initial users, but write up the issue anyway.

Something very similar can also happen with script issues as well. To be honest, though, I’ve been doing this for 30+ years, so this is probably the thing that comes up the least.

Now, you’re probably thinking to yourself, Isn't there something missing here? And, yes indeed, there is – facilitation. Now, having stopped counting users at 5,000, I’ve pretty much seen everything. Though I did have some slip-ups & befuddlements in my earlier years, I feel pretty confident I can prep for & handle anything now.

If, however, you’re not a seasoned veteran, facilitation can be a real issue. Are you biasing the user? Are you giving away the answer? Are you interacting too much, turning your usability test into a friendly get-together with an old friend? Now, these can be biggies.

Something similar can happen with your screener too. A poor screener will simply give you the wrong audience, and what might be a problem for the people you ended up getting may not be one for your true audience (and vice versa, of course). But how would you ever know?

As for scripts, the cardinal sin here is typically giving the game away in the scenarios. You know, pseudo-tasks like "go to page x" – instead of going to page x for a very good reason (you want to buy y, you need to transfer money from a to b, you need to get from your house to place z …)

For prototypes, it’s mainly a question of whether it supports your tasks. And let’s hope those tasks are the right ones as well – common tasks, high-impact tasks & tasks that get at questions that your project team wants answers on. In other words, if you don’t have the right tasks, you’ll have incomplete (and possibly misleading) results.

You know, it could be just about anything in your test plan, to tell you the truth. What happens, for example, if your usability test should really be a focus group, or in-depth interviews, or an unmoderated test instead of a moderated one?

Heck, though, if you’ve got the methodology wrong, I’m not sure David’s quote would still stand. Hard to get orange juice out of an apple.



Tuesday, January 4, 2022

If you don’t know where you’re going, you’ll end up someplace else. (Yogi Berra)

I’ve been doing usability tests for 35 years now. And I’m still never tempted to just “wing it.”

For me, even the simplest test needs at least some kind of test plan & some kind of script. In fact, I’m a firm believer that:
  1. A good planning meeting means a good test plan
  2. A good test plan means good prep
  3. Good prep means easy facilitation
  4. Easy facilitation means good test sessions
  5. Good test sessions mean good data
  6. Good data means good analysis
  7. Good analysis means good reporting
  8. Good reporting means your team’s actually motivated to make changes that will affect the user experience in a positive way
Now, I’m not saying that every test needs to be a formal effort. If you’re at least cognizant of the above steps & maybe write some of them down on paper, you should be fine. The bigger the effort, though, the more formal you’ll probably want to make it.

Whatever the size of your project, probably the biggest benefit you’ll get out of good planning is something that you may not have considered: confidence. It might just be me, but I don’t work very well when I’m flustered. I forget things, I get things out of order, I say the wrong thing, I hem and haw, I don’t exactly inspire confidence in my user (or my observers) … 

Now, I do know some other folks who honestly do thrive under pressure. In fact, those kinds of people often need a little jolt of some kind to really get things rolling. You know the types – that guy in college who was always pulling all-nighters, the exec who can make off-the-cuff remarks in front of 200 people, that usability engineer who’s never run a pilot test in their life.

To each his own, but usability tests typically have way too many moving parts to ever really “wing it.”

Yogi and some cutting-edge tech (for the 1950s, that is)