Monday, December 16, 2019

Usability testing is the killing field of cherished notions. (David Orr)

Wow! That’s kind of harsh. Maybe if we said “proving ground” instead.

Nah, this is way more descriptive. There are definitely times when there’s blood on the floor. 

Well, not really. But you know what I mean. It’s usually a matter of red faces, flared nostrils, big sighs, very tight smiles, killer glares … But it definitely does happen.

Face it, people fall in love with their stuff. It’s just the way human beings operate. (I don’t know, something about confirmation bias, backfire effect … You know, that sort of thing.)

And people tend to stay in love unless there’s something that happens that dissuades them otherwise. And that’s usually not a polite counter-argument in a meeting or a suggestion in an email. Sometimes, what it takes is a slap in the face. 

Hopefully, though, this won’t be coming as a total surprise. As a usability engineer with 30+ years of experience, I will definitely be giving you warning. In particular, I might speak up beforehand (if I get invited to the meetings). And, yes, when we actually start testing, we will definitely be covering what we’re seeing in debriefs (if you bother to watch the sessions, or stay afterwards).  And I’ll also be sending out those end-of-day and end-of-week topline reports as well (if you read them, that is). If, however, your first inkling that your baby may not be perfect is in the report-out, well yeah, it’s going to get a little messy. 

Recently, a designer joked before one of my report-outs that it was time for “Cliff to tear my design apart.” Now, that got me turning red a little. I helpfully pointed out that no, it was time to “get some feedback from our users.” 

Yeah, I know … It was a joke. It did give me a little perspective, though, on what it might be like to be on the other side of the bad news I sometimes have to deliver. Yup, usability is my “baby.” When someone doesn’t take it seriously, or when someone misinterprets it, I have a very similar emotional response. 

But, you know, it’s really not the same. I mean, I can make all sorts of arguments for the value of usability, and the value of usability data. On the other hand, if a usability test says that your baby’s ugly, there’s really not a lot you can come back with. I mean, if I’ve done my job properly, you’ve got the correct users, doing the correct tasks, on the correct system, and showing and telling you, bit by bit and piece by piece, exactly what the issues are.

So, really, please ... just think of it as feedback.


David and I actually have a lot in common – English degrees; a mixed background in tech writing, instructional design & usability; about 30 years in the biz …

Friday, November 8, 2019

Garbage in, garbage out. (anonymous)

Now, I know this one applies to computing in general. It’d be a pretty easy stretch to apply it to user research, though, wouldn’t it?

I mean, if you don’t get the right users, you don’t get the right data, right? Same thing goes with a crappy test script or a poor prototype as well. 

And speaking of that last bit, I have noticed a huge difference over the years in the quality of the prototypes I put in front of my users. Now, is that because the skills of interaction designers are slowly eroding somehow? Actually, that’s not the case at all. In fact, I’d say those have been steadily improving.

In this case, what seems to be slowly eroding is the quality of the tools they have to work with. Hard to believe that we might be going backward in that regard, but there is no doubt in my mind that InVision, the tool du jour, is a far cry from the prototyping tools I used in the past, ones like Axure or iRise. Yeah, they weren’t that easy to use, but they sure did give me nice prototypes. InVision? To me at least, it seems like it’s maybe a notch above PowerPoint. Honestly, as it stands now, users can’t type in data entry fields! Try getting some realistic feedback with that!

To tell you the truth, it’s the same with some researcher tools as well. Like everyone else, my company is using UserTesting. For setting up a moderated test, it works like a charm. There are some serious issues with unmoderated tests though. For one, I can’t vary order. So, unless my test is a single task, I’m missing something that’s been basic to usability testing since the very beginning. There are plenty of other issues, but to me, not being able to control for order effect is a show stopper right there.
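For anyone who hasn’t run into order effects before, the classic fix is counterbalancing: rotate which task comes first for each participant so that fatigue and learning don’t all pile up on the same task. A minimal sketch in Python (the task names here are made up for illustration):

```python
# Counterbalance task order with a simple rotation-based Latin square:
# every task appears in every position exactly once across participants.
tasks = ["find a product", "add to cart", "check out", "track the order"]

def latin_square_orders(tasks):
    """Return one task order per participant, each a rotation of the list."""
    n = len(tasks)
    return [[tasks[(start + i) % n] for i in range(n)] for start in range(n)]

for p, order in enumerate(latin_square_orders(tasks), start=1):
    print(f"Participant {p}: {order}")
```

That’s all “varying order” really asks of a tool, which is what makes its absence in an unmoderated platform so hard to excuse.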

So, what’s the problem here? What is going on? I blame MVPs, minimum viable products. The going model these days seems to lie not in making a good product per se, but in getting something out there, capturing market share, and making yourself the only game in town.

All the other stuff that might make your product truly useful and superior? Well, I guess you can take care of that when you get around to it.


Though sometimes attributed to IBMers George Fuechsel and William D. Mellin and dating back to 1960 or so, Atlas Obscura thinks it goes even further back.

Friday, November 1, 2019

The computer can’t tell you the emotional story. It can give you the exact mathematical design, but what’s missing is the eyebrows. (Frank Zappa)

I’ll bet Frank never thought this quote would lead into a discussion of moderated vs. unmoderated usability testing. Sure enough, though, that’s what I thought about when I saw this one.

Now, I know some of these unmoderated tools do show you the user (and their eyebrows). The particular one I use, however, does not.

But even if I could see those eyebrows, there’s an even bigger part of an unmoderated test that’s missing. And that’s … me, the moderator.

Having run several thousand moderated usability tests, I know that what I do is a little bit more than just sit there. Now, part of what I do is fairly canned – prep, scenario descriptions, post-task check-ins (“So, how did it go?”), post-test check-ins (“So, overall, how did it go?”) …

I do, however, add some value outside all that. What if the user isn’t talking? What if the user is a bit vague or unclear? How do I probe or follow up? What if they don’t understand the task? What if they go off track? What if the user never gave us feedback on something we wanted them to? How do I reveal the correct answer when the user got it wrong? What if they don’t engage with the system fully? What if the prototype is a little sketchy? What if things aren’t totally linear and straightforward? What if something goes wrong on the technical side? What if, what if? 

Yeah, I know that unmoderated tests are fast, cheap, easy, and – at this point – ubiquitous as well. They’re not, however, for everyone and everything. For production systems – and, for prototypes, single screens or very linear flows –  they’re great. For anything more complex, though, they’re a bit of a gamble.  

I know the world is heading – at great speed – toward faster, quicker, and more automated. Now, that’s all fine and good. I do worry, though, that there still might be some times where we need those eyebrows.


Frank Zappa, taking a break during a heuristic review of some music software

Thursday, October 24, 2019

It’s amazing what you can accomplish, if you don’t care who gets the credit. (Harry Truman)

So, here’s my problem with collaboration … What? You're against collaboration? How can someone be against collaboration? (Don't worry - I'm not.) Now that I've got your attention, though, do please read on ...

Let me start off with a little story about when I used to teach. 

I used to teach tech writing at the local university. It was a night class, so I got a mix of traditional undergrads and working adults. The differences between the two tended to be pretty jarring. 

The traditional students were generally okay, but I found a lot of them tended to zone out. (I also got some who never came to class and then were shocked that I gave them an F on their mid-term grade!) The adults, though, were pretty much thoroughly engaged the whole time – asking questions, answering questions, sharing their own experiences, never missing class, coming on time …

What’s this got to do with anything? Well, I also used to give group projects, making sure I got a good mix on each team. Can you guess what happened? It wasn’t always the case, but I did find that the adults were likely to do all the heavy lifting, while the undergrads tended to sit back and let them do just that.

After a semester or two of frustration, I finally instituted a new scheme where members got to grade their peers, and individual grades on the project were a combo of the group grade plus the grade from your peers. It definitely improved the situation (though there were also some students who were in for a little life lesson as well).
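If you’re curious what that combo scheme looks like in practice, here’s a sketch. The 50/50 weighting and the sample ratings are illustrative, not the actual numbers I used:

```python
# Blend the group's project grade with the average rating from teammates.
# peer_weight controls how much the peer ratings count (0.5 = half).
def project_grade(group_grade, peer_ratings, peer_weight=0.5):
    peer_avg = sum(peer_ratings) / len(peer_ratings)
    return (1 - peer_weight) * group_grade + peer_weight * peer_avg

print(project_grade(90, [95, 92, 88]))  # engaged teammate: grade nudges up
print(project_grade(90, [60, 55, 70]))  # free rider: the peer ratings bite
```

The nice property is that everyone still shares the group grade, but coasting shows up in the individual numbers.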

Of course, in the real world, that kind of thing tends to weed itself out pretty quickly. Adults tend not to change jobs in the way that students might change classes, and that kind of behavior can catch up with you pretty quick.

In fact, I’ve tended to see just the opposite. Indeed, there are plenty of successful careers out there of people who were definitely part of the mix, but who also simply took undue credit along the way. And as one of those hard-working adult types, I always kind of resented that. 

These days, though, I’m much more apt to let it slide. Maybe it’s just being happy seeing something work for a change. Maybe it’s being more forgiving of human foible. Maybe it’s just the wisdom of age. Maybe it’s just not giving a flying … you know what.

Why is this worth a blog post though? Well, collaboration certainly is all the rage these days. Honestly, I'm not sure I've ever had someone interviewing for a position who, when asked what kind of culture they preferred, didn't say "collaborative." I think it just goes to show you, though, how something as mom and apple pie as "collaboration" may have more to it than appears on the surface.


President Truman (and aide) doing some early 
in-home usability testing with consumer hardware

Tuesday, October 1, 2019

Too much Design Thinking and you're jumping off cliffs. Too much "Research Thinking" and you'll never get out of bed. (Joe Grant)

The pendulum swings again. Right now, we seem to be pretty firmly in cliff-diving mode. Not too long ago, though, we were all in a definite can’t-get-out-of-bed state.

Yup, traditional user research did tend to be kinda slow. Now, that may simply reflect how much slower things were back in the day, but it also definitely reflects how much academia influenced research way back when. Indeed, there was a time when all researchers had PhDs, wore white lab coats, worked in on-site labs, and wrote 30-page papers for each month-long test they ran. But all that simply reflected how they had been trained academically. They just took what they knew and applied it to a different situation.

Usability engineering was, in fact, a reaction to some of the issues with that approach. The “engineering” part meant that researchers weren’t doing pure research anymore, and that practical applications – and means and methods – would give corporate clients a lot more bang for their buck.  So, quicker, faster, more focused, more actionable, more affordable …

These days, though, that’s probably not enough. Overall, there is a huge emphasis on speed – in Agile, in Design Thinking, in Lean UX … heck, in life in general. 

I guess the question here, to me at least, is whether things might be going a little too fast. I’m personally familiar with Design Thinking projects where research meant chatting a few people up at the local food court, and evaluation meant stopping people on the street to show them a couple of screens. 

Yup, that’s cliff diving alright. Hope you’re a really skilled diver. That water looks like it’s a long way away. And those rocks sure do look like they could hurt a body. You are a professional, right?

Hopefully, one day, the pendulum will be a little more in the middle. Who knows, though. By that time, something else will come down the pike, and the pendulum will be swinging in a completely different direction.


Joe's been doing UX for 30 years,
and is currently working at Enterprise

Tuesday, September 10, 2019

I just like to know. (Winnie the Pooh)

Researchers are like that. They really do just want to know.

And that makes them a little bit different from everyone else they work with. They have no axe to grind, no dog in the fight, no skin in the game … whatever cliché you happen to favor.

Honestly, they just want to know if something’s going to work or not. Everyone else seems to have an agenda. The designer is probably just pulling for what they came up with. Their manager, on the other hand, may simply not want anything to come up that might make them look bad. The business probably has some pet idea that they want to make sure gets baked in somehow. Developers might want to make use of some cool widget they just saw somewhere. And the executive vice president … well, who knows what they want or are thinking? (Hopefully, they’ll just go away.)

Now, that’s not to say that a usability engineer might not have some predictions. But, like a true scientist, they will put those aside and, instead, root for real knowledge. I am actually right a surprising amount of the time (hey, 30 years, 4000 users), but the times I’m not are the ones I remember and enjoy the most. 

And that’s because I am adding to the corpus of knowledge. Now, that can mean something at a pretty low level (that page really does need some online help, and my team really needs to know that) but at a pretty high one as well (help really adds a lot to a system, but it needs to be contextual and speak the user’s language - and pretty much everyone in UX needs to know that).

The whole idea, though, is to keep it humble. In fact, I am much more psyched about a test where I was wrong than one where I was right. How often does that happen among the rest of the team? I’ve actually found some experienced designers who are right there with me all of the way. For the rest of them, though, I think they could take some advice from lowly ol' Pooh Bear.

By the way, here’s the full passage:

Pooh was sitting in his house one day, counting his pots of honey, when there came a knock on the door.
“Fourteen," said Pooh. "Come in. Fourteen.  Or was it fifteen? Bother. That's muddled me."
"Hallo, Pooh," said Rabbit.
"Hallo, Rabbit. Fourteen, wasn't it?"
"What was?"
"My pots of honey what I was counting."
"Fourteen, that's right."
"Are you sure?"
"No," said Rabbit. "Does it matter?"
"I just like to know," said Pooh humbly, "So as I can say to myself: 'I've got fourteen pots of honey left.' Or fifteen, as the case may be. It's sort of comforting."

Hmm, had no idea Pooh was a quant.
Winnie the Pooh meets technology - 
Technology wins

Monday, September 9, 2019

People’s minds are changed through observation, and not through argument. (Will Rogers)

Yup, that Will Rogers. You know, the cowboy humorist? Western actor? Newspaper columnist? Radio personality?  Vaudeville performer?

Kind of like Mark Twain, though, Will Rogers had so much native sense that his downhome sayings can be applied to almost anything – even something as esoteric as usability and UX. To tell you the truth, I’m a little surprised that this quote was actually so direct. Surely, this must have been translated from something with an “ain’t” sprinkled here and a “fixin-to” sprinkled there. Honestly, it sounds more like something Jakob Nielsen might have said.

Be that as it may, it is, quite honestly, the whole secret of our profession. You know, it seems like everybody’s got an opinion about design – from the designer, to the writer, to the IA, to the developer, to marketing, to the VP … But you know whose opinion really matters?  The user’s!

And how do we best get their opinion?  Well, people have come up with quite a number of different ways to do so.  I’ve touched on those in a number of different posts.

What’s really best, though, is good, old-fashioned usability testing.  I don’t think there’s a better way to get rich, unbiased, and convincing data to take things out of the realm of conjecture and guide discussion down real, practical avenues that can lead to solutions that will really mean something for the customer. 

And, guess what?  As a usability engineer, you get to do just that!




Will Rogers also said: