Tuesday, February 4, 2020

There are two kinds of people in the world, those who believe there are two kinds of people in the world and those who don't. (Robert Benchley)

I’ve been interested in personality theory since I was in high school. My first love was Myers-Briggs. I still think it has its uses, but I also know it has absolutely no cred in the rarefied world of research that I currently operate in. So, it’s been pretty much the Big Five ever since.

That said, I’ll pick up and read anything I can get my hands on if it deals however broadly with personality theory. Now, that does involve a fair amount of stuff that is rather pop psych, but I still find bits and pieces of it pretty useful. Heck, I got the quote for this post from my latest book (The Four Tendencies, by Gretchen Rubin, if you’re interested).

What does this have to do with UX research? Well, a lot, actually. It’s basically what’s behind personas. 

The basic idea here is that there are only so many types of users in the world. Now, this doesn’t mean that you divide your user base into INTJs and ESFPs, or see how many are Conscientious or Agreeable.  

Personas typically aren’t about who users are in essence. They’re much more about who users are in certain situations – buying a house, investing, planning a vacation, being a medical patient …

The whole idea is that it’s a lot easier to handle a small universe of people by coming up with some kind of typology. And that, in turn, helps you not design something by default – i.e., for everybody, or for no one in particular, or for yourself …

Done properly, personas are perfect for truly reflecting the actual audience and allowing a design or product team to come up with something that actually is of use to someone and can, thus, be genuinely successful in the marketplace. And I think that’s a very reasonable way to go about it.

So, I guess that makes me the second type of person. Note, though, that that really means I’m someplace in the middle. I’m not throwing out personality typing altogether, nor am I being overly reductionist. Hmm, I wonder what kind of personality type that would make me?  I guess "it depends."


Robert Benchley represents another of my early interests – classic humor writing
(and actually wrote about the foibles of technology once – in his piece “The Railroad Problem”)

Friday, January 10, 2020

You’ll never have all the information you need to make a decision. If you did, it would be a foregone conclusion, not a decision. (David Mahoney)

As a usability engineer, I enjoy being popular. I like being in demand. I can’t get enough of it when teams want to engage me. It makes me feel validated. It makes me feel all warm and fuzzy.

At the same time, though, there is a certain role that I’m sometimes put in that I really don’t enjoy quite as much. And that’s when the team I support doesn’t want to do the dirty work of making decisions, but instead punts the ball to me.

Now, typically, I’m actually just fine with that role. The team can’t decide on which version of some widget to use? No problem. I’ll run a comparative test and get you some good feedback.

The team does need to be aware, though, that what I often find in these tests is that there is no clear favorite. In fact, the most common result is that each design has its pluses and minuses. So, it may not be exactly what you were looking for going in, but it’s still pretty darn valuable stuff.

Sometimes, though, I run across a situation where the team really is doing some major waffling. You can typically tell when they, for example, want to test 10 different designs, with the differences between them amounting to things like underlining, or capitalization, or which shade of green to use.

I exaggerate, of course. But I do believe that, when a team does something like that, they may be confusing usability testing with A/B testing or with a marketing survey. So, what I try to get them to do is to sit down, make some decisions, and narrow things down for me just a little bit.

Now, it’s not just a matter of making my life easier. I also try to get across the idea that they’re really not gaining anything from all that either. So, I try to get them down to a handful of designs that emphasize what’s genuinely different, allowing us to focus on those, get some great feedback, and make the testing worthwhile for both of us.

Does that always make me popular? Well, no. But it typically is a pretty easy discussion to have, and the team’s almost always totally happy with the results.

Not a usability type, Mahoney was a very successful business exec 
and the author of Confessions of a Street-Smart Manager

Thursday, December 19, 2019

The fact that we’re not as logical as a computer is not a bug but a feature. (Chris Anderson)

I had an interesting interaction with an IA the other day. Responding to some tree-test feedback that I shared, she wondered if we should “do the logical thing” or “just to defer to users even when they're being illogical.”

I answered with something along these lines … First, I asked her whose logic she was referring to. Based on the feedback, her logic (or the logic of the system) seemed to differ from the logic of the users. 

In particular, we had one menu that listed a bunch of things an investor might want to research and invest in – stocks, bonds, mutual funds, ETFs … We also had another menu, called “Tools.” And, under that, was our ETF screener, a tool that allowed users to search for particular kinds of ETFs.  

The IA thought that, since the screener was a tool, it belonged in the Tools menu. My argument was that users seemed to think that this particular tool was a means to a larger end – i.e., they would use the tool to do their primary task, find an ETF to invest in. That the location under Tools generated a 0% success rate definitely helped my argument along as well.  ;^)

We also jousted about another menu item, “Fixed Income.” Now, that term encompasses things like bonds, and annuities, and CDs, but it’s also something of an insider term – something the average investor might not be that familiar or comfortable with. 

Now, technically, she was right. The page actually included several types of fixed income investments. Bonds, though, made up 98% of what was offered. And bonds were what users were thinking of and looking for. So, my suggestion was to call it something that would resonate more with users – something along the lines of “Bonds & Fixed Income.” 

Needless to say, she pushed back with length issues, which I certainly respected. A little further research, though, showed that the page really only offered bonds and CDs. Our compromise, then, was exactly that – “Bonds & CDs.”

Now, you’d have thought that pushback like that might have come from someone more technical. I know I’ve had to fight that battle before with developers and with heavy-duty SMEs. Heck, though, everybody involved in UX probably needs to be reminded every once in a while that users aren’t computers, and that their own quirky, very human version of logic is probably going to trump any others.

Here’s to computers! Here’s to humans! Vive la difference!




Chris was editor at WIRED, wrote The Long Tail & is currently the CEO at 3D Robotics

Monday, December 16, 2019

Usability testing is the killing field of cherished notions. (David Orr)

Wow! That’s kind of harsh. Maybe if we said “proving ground” instead.

Nah, this is way more descriptive. There are definitely times when there’s blood on the floor. 

Well, not really. But you know what I mean. It’s usually a matter of red faces, flared nostrils, big sighs, very tight smiles, killer glares … But it definitely does happen.

Face it, people fall in love with their stuff. It’s just the way human beings operate. (I don’t know, something about confirmation bias, backfire effect … You know, that sort of thing.)

And people tend to stay in love unless something happens to dissuade them. And that’s usually not a polite counter-argument in a meeting or a suggestion in an email. Sometimes, what it takes is a slap in the face. 

Hopefully, though, this won’t be coming as a total surprise. As a usability engineer with 30+ years of experience, I will definitely be giving you warning. In particular, I might speak up beforehand (if I get invited to the meetings). And, yes, when we actually start testing, we will definitely be covering what we’re seeing in debriefs (if you bother to watch the sessions, or stay afterwards).  And I’ll also be sending out those end-of-day and end-of-week topline reports as well (if you read them, that is). If, however, your first inkling that your baby may not be perfect is in the report-out, well yeah, it’s going to get a little messy. 

Recently, a designer joked before one of my report-outs that it was time for “Cliff to tear my design apart.” Now, that got me turning red a little. I helpfully pointed out that no, it was time to “get some feedback from our users.” 

Yeah, I know … It was a joke. It did give me a little perspective, though, on what it might be like to be on the other side of the bad news I sometimes have to deliver. Yup, usability is my “baby.” When someone doesn’t take it seriously, or when someone misinterprets it, I have a very similar emotional response. 

But, you know, it’s really not the same. I mean, I can make all sorts of arguments for the value of usability, and the value of usability data. On the other hand, if a usability test says that your baby’s ugly, there’s really not a lot you can come back with. I mean, if I’ve done my job properly, you’ve got the correct users, doing the correct tasks, on the correct system, and showing and telling you, bit by bit and piece by piece, exactly what the issues are.

So, really, please ... just think of it as feedback.


David and I actually have a lot in common – English degrees; a mixed background in tech writing, instructional design & usability; about 30 years in the biz …

Friday, November 8, 2019

Garbage in, garbage out. (anonymous)

Now, I know this one applies to computing in general. It’d be a pretty easy stretch to apply it to user research, though, wouldn’t it?

I mean, if you don’t get the right users, you don’t get the right data, right? Same thing goes with a crappy test script or a poor prototype as well. 

And speaking of that last bit, I have noticed a huge difference over the years in the quality of the prototypes I put in front of my users. Now, is that because the skills of interaction designers are slowly eroding somehow? Actually, that’s not the case at all. In fact, I’d say those have been steadily improving.

In this case, what seems to be slowly eroding is the quality of the tools they have to work with. Hard to believe that we might be going backward in that regard, but there is no doubt in my mind that InVision, the tool du jour, is a far cry from the prototyping tools I used in the past, ones like Axure or iRise. Yeah, they weren’t that easy to use, but they sure did give me nice prototypes. InVision? To me at least, it seems like it’s maybe a notch above PowerPoint. Honestly, as it stands now, users can’t type in data entry fields! Try getting some realistic feedback with that!

To tell you the truth, it’s the same with some researcher tools as well. Like everyone else, my company is using UserTesting. For setting up a moderated test, it works like a charm. There are some serious issues with unmoderated tests, though. For one, I can’t vary task order. So, unless my test is a single task, I’m missing something that’s been basic to usability testing since the very beginning. There are plenty of other issues, but to me, not being able to control for order effects is a showstopper right there.
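
To make “varying order” concrete, here’s a minimal sketch – in Python, with made-up task names, and purely illustrative rather than anything UserTesting actually offers – of the kind of counterbalancing I’d want a tool to handle for me: rotating task order across participants so that no single sequence biases the results.

import itertools
import random

# Hypothetical tasks – stand-ins for whatever the study actually covers
tasks = ["find an ETF", "research a bond", "use the screener"]

def assign_orders(participants):
    """Rotate through the task permutations so no single order dominates."""
    orders = list(itertools.permutations(tasks))
    random.shuffle(orders)
    return {p: orders[i % len(orders)] for i, p in enumerate(participants)}

print(assign_orders(["P1", "P2", "P3", "P4", "P5", "P6"]))

With three tasks and six participants, each one ends up with a different order – exactly the sort of thing I can do by hand in a moderated study, and exactly what I can’t do here.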

So, what’s the problem here? What is going on? I blame MVPs, minimum viable products. The going model these days seems to be not making a good product per se, but getting something out there, capturing market share, and making yourself the only game in town. 

All the other stuff that might make your product truly useful and superior? Well, I guess you can take care of that when you get around to it.


Though it’s sometimes attributed to IBMers George Fuechsel and William D. Mellin and dated back to 1960 or so, Atlas Obscura thinks the phrase goes back even further

Friday, November 1, 2019

The computer can’t tell you the emotional story. It can give you the exact mathematical design, but what’s missing is the eyebrows. (Frank Zappa)

I’ll bet Frank never thought this quote would lead into a discussion of moderated vs. unmoderated usability testing. Sure enough, though, that’s what I thought about when I saw this one.

Now, I know some of these unmoderated tools do show you the user (and their eyebrows). The particular one I use, however, does not.

But even if I could see those eyebrows, there’s an even bigger part of an unmoderated test that’s missing. And that’s … me, the moderator.

Having run several thousand moderated usability tests, I know that what I do is a little bit more than just sit there. Now, part of what I do is fairly canned – prep, scenario descriptions, post-task check-ins (“So, how did it go?”), post-test check-ins (“So, overall, how did it go?”) …

I do, however, add some value outside all that. What if the user isn’t talking? What if the user is a bit vague or unclear? How do I probe or follow up? What if they don’t understand the task? What if they go off track? What if the user never gave us feedback on something we wanted them to? How do I reveal the correct answer when the user got it wrong? What if they don’t engage with the system fully? What if the prototype is a little sketchy? What if things aren’t totally linear and straightforward? What if something goes wrong on the technical side? What if, what if? 

Yeah, I know that unmoderated tests are fast, cheap, easy, and – at this point – ubiquitous as well. They’re not, however, for everyone and everything. For production systems – and, in the case of prototypes, for single screens or very linear flows – they’re great. For anything more complex, though, they’re a bit of a gamble.

I know the world is heading – at great speed – toward the faster and the more automated. Now, that’s all fine and good. I do worry, though, that there still might be some times where we need those eyebrows.


Frank Zappa, taking a break during a heuristic review of some music software

Thursday, October 24, 2019

It’s amazing what you can accomplish, if you don’t care who gets the credit. (Harry Truman)

So, here’s my problem with collaboration … What? You're against collaboration? How can someone be against collaboration? (Don't worry - I'm not.) Now that I've got your attention, though, do please read on ...

Let me start off with a little story about when I used to teach. 

I used to teach tech writing at the local university. It was a night class, so I got a mix of traditional undergrads and working adults. The differences between the two tended to be pretty jarring. 

The traditional students were generally okay, but I found a lot of them tended to zone out. (I also got some who never came to class and then were shocked that I gave them an F on their mid-term grade!) The adults, though, were pretty much thoroughly engaged the whole time – asking questions, answering questions, sharing their own experiences, never missing class, coming on time …

What’s this got to do with anything? Well, I also used to give group projects, making sure I got a good mix on each team. Can you guess what happened? It wasn’t always the case, but I did find that the adults were likely to do all the heavy lifting, while the undergrads tended to sit back and let them do just that.

After a semester or two of frustration, I finally instituted a new scheme where members got to grade their peers, and individual grades on the project were a combo of the group grade and the grade from their peers. It definitely improved the situation (though some students were in for a little life lesson as well).

Of course, in the real world, that kind of thing tends to weed itself out pretty quickly. Adults tend not to change jobs in the way that students might change classes, and that kind of behavior can catch up with you pretty quick.

In truth, though, I’ve tended to see just the opposite. There are plenty of people out there with successful careers who were definitely part of the mix, but who also simply took undue credit along the way. And as one of those hard-working adult types, I always kind of resented that. 

These days, though, I’m much more apt to let it slide. Maybe it’s just being happy seeing something work for a change. Maybe it’s being more forgiving of human foible. Maybe it’s just the wisdom of age. Maybe it’s just not giving a flying … you know what.

Why is this worth a blog post though? Well, collaboration certainly is all the rage these days. Honestly, I'm not sure I've ever had someone interviewing for a position who, when asked what kind of culture they preferred, didn't say "collaborative." I think it just goes to show you, though, how something as mom-and-apple-pie as "collaboration" may have more to it than appears on the surface.


President Truman (and aide) doing some early 
in-home usability testing with consumer hardware