Thursday, December 19, 2019

The fact that we’re not as logical as a computer is not a bug but a feature. (Chris Anderson)

I had an interesting interaction with an IA (information architect) the other day. Responding to some tree-test feedback that I shared, she wondered if we should “do the logical thing” or “just defer to users even when they're being illogical.”

I answered with something along these lines … First, I asked her whose logic she was referring to. Based on the feedback, her logic (or the logic of the system) seemed to differ from the logic of the users. 

In particular, we had one menu that listed a bunch of things an investor might want to research and invest in – stocks, bonds, mutual funds, ETFs … We also had another menu, called “Tools.” And, under that, was our ETF screener, a tool that allowed users to search for particular kinds of ETFs.  

The IA thought that, since the screener was a tool, it belonged in the Tools menu. My argument was that users seemed to think that this particular tool was a means to a larger end – i.e., they would use the tool to do their primary task, find an ETF to invest in. That the location under Tools generated a 0% success rate definitely helped my argument along as well.  ;^)

We also jousted about another menu item, “Fixed Income.” Now, that term encompasses things like bonds, and annuities, and CDs, but it’s also something of an insider term – something the average investor might not be that familiar or comfortable with. 

Now, technically, she was right. The page actually included several types of fixed income investments. Bonds, though, made up 98% of what was offered. And bonds were what users were thinking of and looking for. So, my suggestion was to call it something that would resonate more with users – something along the lines of “Bonds & Fixed Income.” 

Needless to say, she pushed back with length issues, which I certainly respected. A little further research, though, showed that the page really only offered bonds and CDs. Our compromise, then, was exactly that – “Bonds & CDs.”

Now, you’d have thought that pushback like that might have come from someone more technical. I know I’ve had to fight that battle before with developers and with heavy-duty SMEs. Heck, though, everybody involved in UX probably needs to be reminded every once in a while that users aren’t computers, and that their own quirky, very human version of logic is probably going to trump any others.

Here’s to computers! Here’s to humans! Vive la difference!




Chris was editor at WIRED, wrote The Long Tail & is currently the CEO at 3D Robotics

Monday, December 16, 2019

Usability testing is the killing field of cherished notions. (David Orr)

Wow! That’s kind of harsh. Maybe if we said “proving ground” instead.

Nah, this is way more descriptive. There are definitely times when there’s blood on the floor. 

Well, not really. But you know what I mean. It’s usually a matter of red faces, flared nostrils, big sighs, very tight smiles, killer glares … But it definitely does happen.

Face it, people fall in love with their stuff. It’s just the way human beings operate. (I don’t know, something about confirmation bias, backfire effect … You know, that sort of thing.)

And people tend to stay in love unless there’s something that happens that dissuades them otherwise. And that’s usually not a polite counter-argument in a meeting or a suggestion in an email. Sometimes, what it takes is a slap in the face. 

Hopefully, though, this won’t be coming as a total surprise. As a usability engineer with 30+ years of experience, I will definitely be giving you warning. In particular, I might speak up beforehand (if I get invited to the meetings). And, yes, when we actually start testing, we will definitely be covering what we’re seeing in debriefs (if you bother to watch the sessions, or stay afterwards).  And I’ll also be sending out those end-of-day and end-of-week topline reports as well (if you read them, that is). If, however, your first inkling that your baby may not be perfect is in the report-out, well yeah, it’s going to get a little messy. 

Recently, a designer joked before one of my report-outs that it was time for “Cliff to tear my design apart.” Now, that got me turning red a little. I helpfully pointed out that no, it was time to “get some feedback from our users.” 

Yeah, I know … It was a joke. It did give me a little perspective, though, on what it might be like to be on the other side of the bad news I sometimes have to deliver. Yup, usability is my “baby.” When someone doesn’t take it seriously, or when someone misinterprets it, I have a very similar emotional response. 

But, you know, it’s really not the same. I mean, I can make all sorts of arguments for the value of usability, and the value of usability data. On the other hand, if a usability test says that your baby’s ugly, there’s really not a lot you can come back with. I mean, if I’ve done my job properly, you’ve got the correct users, doing the correct tasks, on the correct system, and showing and telling you, bit by bit and piece by piece, exactly what the issues are.  

So, really, please ... just think of it as feedback.


David and I actually have a lot in common – English degrees; a mixed background in tech writing, instructional design & usability; about 30 years in the biz …

Friday, November 8, 2019

Garbage in, garbage out. (anonymous)

Now, I know this one applies to computing in general. It’d be a pretty easy stretch to apply it to user research, though, wouldn’t it?

I mean, if you don’t get the right users, you don’t get the right data, right? Same thing goes with a crappy test script or a poor prototype as well. 

And speaking of that last bit, I have noticed a huge difference over the years in the quality of the prototypes I put in front of my users. Now, is that because the skills of interaction designers are slowly eroding somehow? Actually, that’s not the case at all. In fact, I’d say those have been steadily improving.

In this case, what seems to be slowly eroding is the quality of the tools they have to work with. Hard to believe that we might be going backward in that regard, but there is no doubt in my mind that InVision, the tool du jour, is a far cry from the prototyping tools I used in the past, ones like Axure or iRise. Yeah, they weren’t that easy to use, but they sure did give me nice prototypes. InVision? To me at least, it seems like it’s maybe a notch above PowerPoint. Honestly, as it stands now, users can’t type in data entry fields! Try getting some realistic feedback with that!

To tell you the truth, it’s the same with some researcher tools as well. Like everyone else, my company is using UserTesting. For setting up a moderated test, it works like a charm. There are some serious issues with unmoderated tests though. For one, I can’t vary task order. So, unless my test is a single task, I’m missing something that’s been basic to usability testing since the very beginning. There are plenty of other issues, but to me, not being able to control for order effects is a showstopper right there.
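Counterbalancing task order isn’t complicated, which is part of why its absence stings. A minimal sketch of the classic approach – rotated orders (a simple Latin square), with participants assigned round-robin – might look like this (the task names are made up for illustration):

```python
def latin_square_orders(tasks):
    """Generate one rotated ordering per task, so each task
    appears in each serial position exactly once."""
    n = len(tasks)
    return [tasks[i:] + tasks[:i] for i in range(n)]

def order_for(participant_index, orders):
    """Assign participants to orders round-robin."""
    return orders[participant_index % len(orders)]

# Hypothetical task list for an unmoderated study
tasks = ["find_etf", "buy_bond", "open_account"]
orders = latin_square_orders(tasks)
```

With three tasks, participant 0 gets the original order, participant 1 the first rotation, and so on – every task gets its turn at going first, so no single task always benefits (or suffers) from being the opener.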

So, what’s the problem here? What is going on? I blame MVPs, minimum viable products. The going model these days seems to be not making a good product per se, but in getting something out there, capturing market share, and making yourself the only game in town. 

All the other stuff that might make your product truly useful and superior? Well, I guess you can take care of that when you get around to it.


Though sometimes attributed to IBMers George Fuechsel and William D. Mellin and dating back to 1960 or so, Atlas Obscura thinks it goes even further back

Friday, November 1, 2019

The computer can’t tell you the emotional story. It can give you the exact mathematical design, but what’s missing is the eyebrows. (Frank Zappa)

I’ll bet Frank never thought this quote would lead into a discussion of moderated vs. unmoderated usability testing. Sure enough, though, that’s what I thought about when I saw this one.

Now, I know some of these unmoderated tools do show you the user (and their eyebrows). The particular one I use, however, does not.

But even if I could see those eyebrows, there’s an even bigger part of an unmoderated test that’s missing. And that’s … me, the moderator.

Having run several thousand moderated usability tests, I know that what I do is a little bit more than just sit there. Now, part of what I do is fairly canned – prep, scenario descriptions, post-task check-ins (“So, how did it go?”), post-test check-ins (“So, overall, how did it go?”) …

I do, however, add some value outside all that. What if the user isn’t talking? What if the user is a bit vague or unclear? How do I probe or follow up? What if they don’t understand the task? What if they go off track? What if the user never gave us feedback on something we wanted them to? How do I reveal the correct answer when the user got it wrong? What if they don’t engage with the system fully? What if the prototype is a little sketchy? What if things aren’t totally linear and straightforward? What if something goes wrong on the technical side? What if, what if? 

Yeah, I know that unmoderated tests are fast, cheap, easy, and – at this point – ubiquitous as well. They’re not, however, for everyone and everything. For production systems – and, with prototypes, for single screens or very linear flows – they’re great. For anything more complex, though, they’re a bit of a gamble.  

I know the world is heading – at great speed – toward faster, quicker, and more automated. Now, that’s all fine and good. I do worry, though, that there still might be some times where we need those eyebrows.


Frank Zappa, taking a break during a heuristic review of some music software

Thursday, October 24, 2019

It’s amazing what you can accomplish, if you don’t care who gets the credit. (Harry Truman)

So, here’s my problem with collaboration … What? You're against collaboration? How can someone be against collaboration? (Don't worry - I'm not.) Now that I've got your attention, though, do please read on ...

Let me start off with a little story about when I used to teach. 

I used to teach tech writing at the local university. It was a night class, so I got a mix of traditional undergrads and working adults. The differences between the two tended to be pretty jarring. 

The traditional students were generally okay, but I found a lot of them tended to zone out. (I also got some who never came to class and then were shocked that I gave them an F on their mid-term grade!) The adults, though, were pretty much thoroughly engaged the whole time – asking questions, answering questions, sharing their own experiences, never missing class, coming on time …

What’s this got to do with anything? Well, I also used to give group projects, making sure I got a good mix on each team. Can you guess what happened? It wasn’t always the case, but I did find that the adults were likely to do all the heavy lifting, while the undergrads tended to sit back and let them do just that.

After a semester or two of frustration, I finally instituted a new scheme where members got to grade their peers, and individual grades on the project were a combo of the group grade plus the grade from your peers. It definitely improved the situation (though there were also some students who were in for a little life lesson as well).
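For illustration only (my actual weights are long gone), a scheme like that boils down to a simple weighted average:

```python
def project_grade(group_grade, peer_grades, group_weight=0.6):
    """Combine the team's group grade with the average of a
    student's peer ratings. The 60/40 split is hypothetical."""
    peer_avg = sum(peer_grades) / len(peer_grades)
    return group_weight * group_grade + (1 - group_weight) * peer_avg

# A free-rider on a strong team still feels it:
# project_grade(90, [60, 55, 65]) -> 0.6*90 + 0.4*60 = 78.0
```

The point isn’t the exact weights; it’s that the peer component makes sitting back visible in the final number.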

Of course, in the real world, that kind of thing tends to weed itself out pretty quickly. Adults tend not to change jobs in the way that students might change classes, and that kind of behavior can catch up with you pretty quick.

In fact, I’ve tended to see just the opposite. Indeed, there are plenty of successful careers out there of people who were definitely part of the mix, but who also simply took undue credit along the way. And as one of those hard-working adult types, I always kind of resented that. 

These days, though, I’m much more apt to let it slide. Maybe it’s just being happy seeing something work for a change. Maybe it’s being more forgiving of human foible. Maybe it’s just the wisdom of age. Maybe it’s just not giving a flying … you know what.

Why is this worth a blog post though? Well, collaboration certainly is all the rage these days. Honestly, I'm not sure I've ever had someone interviewing for a position who, when asked what kind of culture they preferred, didn't say "collaborative." I think it just goes to show you, though, how something as mom and apple pie as "collaboration" may have more to it than appears on the surface.


President Truman (and aide) doing some early 
in-home usability testing with consumer hardware

Tuesday, October 1, 2019

Too much Design Thinking and you're jumping off cliffs. Too much "Research Thinking" and you'll never get out of bed. (Joe Grant)

The pendulum swings again. Right now, we seem to be pretty firmly in cliff-diving mode. Not too long ago, though, we were all in a definite can’t-get-out-of-bed state.

Yup, traditional user research did tend to be kinda slow. Now, that may simply reflect how much slower things were back in the day, but it also definitely reflects how much academia influenced research way back when. Indeed, there was a time when all researchers had PhDs, wore white lab coats, worked in on-site labs, and wrote 30-page papers for each month-long test they ran. But all that simply reflected how they had been trained academically. They just took what they knew and applied it to a different situation.

Usability engineering was, in fact, a reaction to some of the issues with that approach. The “engineering” part meant that researchers weren’t doing pure research anymore, and that practical applications – and means and methods – would give corporate clients a lot more bang for their buck.  So, quicker, faster, more focused, more actionable, more affordable …

These days, though, that’s probably not enough. Overall, there is a huge emphasis on speed – in Agile, in Design Thinking, in Lean UX … heck, in life in general. 

I guess the question here, to me at least, is whether things might be going a little too fast. I’m personally familiar with Design Thinking projects where research meant chatting a few people up at the local food court, and evaluation meant stopping people on the street to show them a couple of screens. 

Yup, that’s cliff diving alright. Hope you’re a really skilled diver. That water looks like it’s a long way away. And those rocks sure do look like they could hurt a body. You are a professional, right?

Hopefully, one day, the pendulum will be a little more in the middle. Who knows, though. By that time, something else will come along the pike, and the pendulum will be swinging in a completely different direction. 


Joe's been doing UX for 30 years,
and is currently working at Enterprise

Tuesday, September 10, 2019

I just like to know. (Winnie the Pooh)

Researchers are like that. They really do just want to know.

And that makes them a little bit different from everyone else they work with. They have no axe to grind, no dog in the fight, no skin in the game … whatever cliché you happen to favor.

Honestly, they just want to know if something’s going to work or not. Everyone else seems to have an agenda. The designer is probably just pulling for what they came up with. Their manager, on the other hand, may simply not want anything to come up that might make them look bad. The business probably has some pet idea that they want to make sure gets baked in somehow. Developers might want to make use of some cool widget they just saw somewhere. And the executive vice president … well, who knows what they want or are thinking? (Hopefully, they’ll just go away.)

Now, that’s not to say that a usability engineer might not have some predictions. But, like a true scientist, they will put those aside and, instead, root for real knowledge. I am actually right a surprising amount of the time (hey, 30 years, 4000 users), but the times I’m not are the ones I remember and enjoy the most. 

And that’s because I am adding to the corpus of knowledge. Now, that can mean something at a pretty low level (that page really does need some online help, and my team really needs to know that) but at a pretty high one as well (help really adds a lot to a system, but it needs to be contextual and speak the user’s language - and pretty much everyone in UX needs to know that).

The whole idea, though, is to keep it humble. In fact, I am much more psyched about a test where I was wrong than one where I was right. How often does that happen among the rest of the team? I’ve actually found some experienced designers who are right there with me all of the way. For the rest of them, though, I think they could take some advice from lowly ol' Pooh Bear.

By the way, here’s the full passage:

Pooh was sitting in his house one day, counting his pots of honey, when there came a knock on the door.
“Fourteen," said Pooh. "Come in. Fourteen. Or was it fifteen? Bother. That's muddled me."
"Hallo, Pooh," said Rabbit.
"Hallo, Rabbit. Fourteen, wasn't it?"
"What was?"
"My pots of honey what I was counting."
"Fourteen, that's right."
"Are you sure?"
"No," said Rabbit. "Does it matter?"
"I just like to know," said Pooh humbly, "So as I can say to myself: 'I've got fourteen pots of honey left.' Or fifteen, as the case may be. It's sort of comforting."

Hmm, had no idea Pooh was a quant.
Winnie the Pooh meets technology - 
Technology wins

Monday, September 9, 2019

People’s minds are changed through observation, and not through argument. (Will Rogers)

Yup, that Will Rogers. You know, the cowboy humorist? Western actor? Newspaper columnist? Radio personality?  Vaudeville performer?

Kind of like Mark Twain, though, Will Rogers had so much native sense that his downhome sayings can be applied to almost anything – even something as esoteric as usability and UX. To tell you the truth, I’m a little surprised that this quote was actually so direct. Surely, this must have been translated from something with an “ain’t” sprinkled here and a “fixin-to” sprinkled there. Honestly, it sounds more like something Jakob Nielsen might have said.

Be that as it may, it is, quite honestly, the whole secret of our profession. You know, it seems like everybody’s got an opinion about design – from the designer, to the writer, to the IA, to the developer, to marketing, to the VP … But you know whose opinion really matters?  The user’s!

And how do we best get their opinion?  Well, people have come up with quite a number of different ways to do so, and I’ve touched on those in a number of different posts.

What’s really best, though, is good, old-fashioned usability testing.  I don’t think there’s a better way to get rich, unbiased, and convincing data to take things out of the realm of conjecture and guide discussion down real, practical avenues that can lead to solutions that will really mean something for the customer. 

And, guess what?  As a usability engineer, you get to do just that!





Thursday, July 25, 2019

Our product is a slot machine that plays you. (Ramsey Brown)

Oh dear! That’s not good.

And just in case you think you may have misheard that, I’ll have you know that Mr. Brown is the CEO of the aptly – and rather bluntly – named Dopamine Labs. Yup, this outfit aims to “hack user engagement and retention using models from neuroscience,” “change … human behavior with unprecedented ease,” and “rewire user behavior and drive your KPIs.” Charming.

Of course, there’s all sorts of palaver about using it only for good … It does sound, though, like their honesty may have provided us with an honest-to-goodness smoking gun when it comes to the motivation behind tech addiction.

I mean, we’re all familiar with the classic quote, “If you are not paying for it, you're not the customer; you're the product.” Now, that's not exactly something that can be traced back to Jack Dorsey or Mark Zuckerberg, right?

But here, we have a quote that we can trace directly back to a VC-funded Silicon Valley outfit. They’re certainly no Facebook or Twitter or Snapchat ... at least, at this point.

What is he actually talking about? Why, intermittent reward, of course. What’s that? Well, it’s a psychological principle that dates all the way back to B.F. Skinner. It basically states that if you never get a reward, you’ll give up; if you always get the same reward, you’ll eventually get bored; but if you get a reward seemingly randomly, you’ll get hooked. It’s the idea behind slot machines … and email, and Facebook, and all sorts of social media and tech in general.
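The contrast between the schedules is easy to simulate. A rough sketch (the probabilities are arbitrary, chosen only so both schedules pay out at the same average rate):

```python
import random

def rewards(schedule, trials, seed=0):
    """Return the trial numbers on which a reward arrived.
    'fixed' pays on every 5th trial; 'variable' pays with
    probability 0.2 per trial -- same average rate, but the
    variable schedule is unpredictable, which is what hooks you."""
    rng = random.Random(seed)
    hits = []
    for t in range(1, trials + 1):
        if schedule == "fixed":
            if t % 5 == 0:
                hits.append(t)
        elif schedule == "variable":
            if rng.random() < 0.2:
                hits.append(t)
    return hits

# rewards("fixed", 20) -> [5, 10, 15, 20]
```

On the fixed schedule the gaps between rewards are always five trials; on the variable one the gaps jump around, so the next pull (or the next refresh of your inbox) always *might* be the one.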

The dopamine connection? It’s the neurotransmitter that drives all this. Though many people mistakenly identify dopamine with pleasure, it actually drives seeking behavior. 

And this basic human chemical – one that we are largely unaware of, have no control over, but that drives large parts of our behavior – has been hacked by Silicon Valley to make rich people richer, with no real thought to any of the consequences or ethics involved. Charming indeed.


This is either B.F. Skinner or some kind of space alien

Tuesday, July 23, 2019

Just because it isn’t done doesn’t mean it can’t be done. Just because it can be done doesn’t mean it should be done. (Barry Glasford)

I was listening to something on NPR today about the new version of The Lion King, which just came out. In case you’re not au courant with all things Disney, the new one is all CGI, with absolutely no animation, unlike the first one. One of the panelists hated the new version, and used almost this same quote to justify his stance. Even though I haven’t seen either version, he made some excellent points, and I heartily agreed with him. 

Personally, I’m familiar with the quote from my own field, but I can definitely see where it could apply almost anywhere. In fact, a quick Google search led me to links related to the Bible, feminism, travel, self-help, and – OMG! – Disney’s new Lion King.

Interestingly, though, most of those results focused on the first part of the saying. Now, to me, there’s no real insight in that. That’s basically a, “Well, duh, so what?” 

The real wisdom is in the second part. In other words, this is really a matter of balance. So, in addition to being innovative and creative and ground-breaking and all, we also have to be aware of the possibility of conflicting goals (and unintended consequences as well).

I think that’s especially important in the field of UX. So, while designers, developers, and marketeers may have fallen in love with some new “kewl” way of doing things, the team really does need to ask itself whether that’s genuinely helpful, or right for this audience, or for this context – or whether it simply gets in the way (or is even impossible to understand or use). Otherwise, all you’re really doing is showing off.

A perfect example of this just happened recently at work - one of my designers came up with a scrolling marquee. Now, in our field, brokerage, this does make some sense. You’re probably familiar with old ticker-tape-style marquees outside and inside actual brokerage offices. So, this is really just putting something like that online. And there is at least one competitor who does that as well.

At the same time, though, there are also some good arguments against it – it’s distracting, there are accessibility issues, it can remind users of those cheesy marquees on amateur sites that date back to the ’90s …

Well, I wasn’t able to convince anyone to ditch it. But it did prompt this post. And we’ll definitely see who comes out on top after a little usability testing.




Tuesday, June 11, 2019

Writing reports doesn’t change anything. Acting on the findings does. (William Horton)

If there’s one thing that gets under a usability engineer’s skin it’s ignoring their findings. 

Now, I realize that UEs are realists as well. Yes, we are smart enough to know that legitimate business decisions can trump all. And we are also aware that schedules and deadlines might mean some change will not make it into this release (though we do assume it will be in the next one). Finally, we also realize that some changes are harder than others (though there are, of course, those developers or vendors for whom every change seems difficult and costly). 

And experienced UEs also appreciate that negotiation is an inevitable part of any process. They pick and choose their battles. No use falling on your sword for a missing comma or a particular shade of yellow.

Those kinds of UEs also realize that not everything is worth changing. In fact, I think there’s no surer way of showing your greenness than by expecting that all issues are equal, and that you will get your way with everything that came up in the test just because … it came up in the test. 

(That last bit is especially a problem for UEs who think a report is a simple dump of everything that happened. I’m always amazed at the number of UEs who seem to feel obliged to report one-offs or simple, straight-up observations. Yeah, that’s interesting – especially if you’re a researcher – but may not be that actionable to your audience.)

So, all we ask is that you seriously consider what we heard back from YOUR USERS!  Don’t like our suggested fix? That’s fine. Do address it somehow though. Got a good reason for going with something else? No problem. Do tell me a little more about that though please.

I once did an audit (at a former company) looking at which clients actually acted on issues and suggestions from test reports and which did not. Interestingly, the one client who complained the loudest and dragged their heels the most was the one who made the most changes. Conversely, the one who seemed the most enthusiastic, spoke our language, and got along with us the best rarely made any.

In other words, the latter seemed to think that simply running a test was what usability testing was all about. Maybe it was an exposure thing. Maybe it was interesting in itself, but not really worth getting all worked up about. Maybe it was just magical thinking. 

The former, though, realized that their job was just beginning after a test was over, and that they were actually going to have to roll up their sleeves and do some hard work. Funny … Looking back, I think I actually preferred working with those guys.



Thursday, May 30, 2019

The most important consistency is consistency with user expectations. (Bill Buxton)

So, internal consistency is definitely important. And consistency to standards is huge as well.

But there is one final consistency to keep in mind. And that is consistency with the user’s mental model and with their own actual experiences. And that consistency just so happens to trump all others.

Let me give you an example. I work a fair amount with mobile teams. And, for some reason or other, those teams include a lot of Apple fan boys and girls. Whatever the issue, everyone always seems to defer to Apple standards – even when those standards are woefully inadequate or just downright wrong. It didn’t matter if 10 out of 10 users had the same problem with something that met standards, the team just wouldn’t budge. 

Another example comes from accessibility. I’ve worked at a couple of companies that took accessibility very seriously. And that means that they didn’t just stop at following all the WCAG guidelines. I found very early on in my career that those guidelines are necessary, but not sufficient. Testing with disabled users typically uncovered issues that, though the team had followed all the guidelines, could still stop them in their tracks. 

In general, I am a huge fan of standards. Users do not want to think (thanks, Steve Krug), and predictability is often their best friend online. 

In some cases, though, there is something else, something important that needs to be addressed but can often be overlooked. And that is the actual users’ experience. Trying to shoehorn these users into something that just doesn’t fit their own experience and way of thinking is going to cause no shortage of blisters, cramps, and sore feet. 


Canadian Bill Buxton is one of the early pioneers of HCI
(and is currently at Microsoft)

Wednesday, April 17, 2019

Easy is hard and hard is easy. (Unknown)

In particular, what’s easy for the developers, or the design team, can often be hard for the user. If you didn’t put some time into knowing who your users are, put plenty of thought and work into your design, and then test it to make sure it actually works, you’ve left the effort mostly on the user’s shoulders. They’re the ones who will have to figure out what you’ve slapped together. 

Conversely, what is easy for the user typically had a ton of work put into it. Systems that truly understand their users’ needs and wants and address them directly and elegantly don’t just happen on their own. You first have to make a concerted effort to find out who your users are, what they want, and how they operate. You then have to translate all that into an experience that will, if not delight them, then at the very least not draw negative attention to itself.

And that’s the interesting thing here. You can put a ton of work into designing something, and then count it a roaring success if the user never even notices it. Users come to your website or use your software to get something done. If they’re able to accomplish their goal without too much fuss, you've won!

(As a usability engineer, I’m always struck by how brief users are when something goes smoothly – “That was nice,” “Pretty easy,” “I like that.” On the other hand, users are never short of words when something doesn’t go right.)

If, however, your UI frustrates them in their goals, you have effectively made them work for it. And people don’t like that. Humans are inherently lazy creatures. Sure, we can accomplish quite a lot if we put our minds to it. Figuring out a poorly-designed system just to buy or sign up for something is not typically where we would care to do that.

Training for a marathon? Sure. Learning a second language? You bet. Spending 30 minutes trying to figure out how to expense that last business trip or register your car? Definitely not.


For some reason, this quote seems to be popular with gamers

Monday, March 25, 2019

We rushed our redesign, solving one problem but creating many others. (Evan Spiegel)

Ah, the wonderful world of unintended consequences. One of my favorite topics.

Now, if you’re not familiar with Evan Spiegel, I’ll just let you know that he’s the CEO and co-founder of Snap Inc., the company that brought us Snapchat. He was also the youngest billionaire in history, at age 25, back in 2015.

What is he talking about? Well, back in the last quarter of 2017, Snap released a new design of their popular app. To make a long story short, it didn’t work. I’m talking losing 3 million users, a dip in brand impression from 30 to 8, and ad views and revenue going down 36%. Ouch!

I won’t go into the gory details of what they actually did, but suffice it to say that it was a perfect example of unintended consequences. I mean, the company certainly didn’t mean to lose all those users and revenue now, did they?

How did it happen? Once again, I won’t go into the details. Heck, I don’t even know them (though I certainly could guess). I looked around, but I’m not sure that’s something the company felt comfortable sharing.

Now, my first question, as a usability engineer, is, of course … DIDN’T ANYBODY DO ANY TESTING?  How did they know this redesign was going to work? Where did their feedback come from? Did they get any feedback?

Now, Spiegel points to rushing things, but I think there’s a lot more to unintended consequences than just that. Hubris, for example. Or denial. Or short-term thinking. Or what have you …

Now, of course, you can put a lot of thought into something before you attempt it. Honestly, though, I don’t think you can ever predict everything that might happen. And that’s why I think it’s so important to get something usability-tested. A/B testing might work, of course, but that means it’s live. Usability testing lets you get some feedback in a “safe space.”  It might even give you some idea of why something isn’t working.

Whatever you do, though, do something! Get some kind of feedback!


For some reason, most pix of Evan also feature his wife

Thursday, March 7, 2019

It’s not what you don’t know that hurts you. It’s what you know that ain’t so. (Will Rogers)

Sometimes I prefer my clients to be ignorant. In my experience as a usability engineer, user researcher, and even instructional designer and tech writer, I’ve just found it so much easier when my clients are blank slates.

Of course, clients who really know what they’re doing are the best of all. Those, however, are pretty few and far between. It’s much more common to get somebody in the middle. And, to throw in another quote, “A little learning is a dangerous thing” can definitely apply to that middle zone.

I’m always amazed at how many things people “in the middle” just don’t get, or don’t get right. There are probably a number of reasons why, however.

The first may simply have to do with exposure. Clients may have simply heard of an idea only in passing – a mention at a conference from last year, an article they once read, a conversation in the hallway. In those situations, subtleties and true understanding are really hard. Overly broad strokes and misconceptions, on the other hand, are really easy.

Usability engineers always know, though, that “it depends,” and things are never as simple as they appear. A good example here might be the number of clicks rule. A couple of years back, marketeers fell in love with the idea that everything should be a couple of clicks away from the homepage. On the surface of it, this actually made some sense.

Some unintended consequences of that, however, included some particularly dense homepages and menus. Further, results from testing showed that users didn’t really resent (or weren’t even aware of) the number of clicks, and were happy to go on their merry way as long as they felt confident about where they were going. Ironically, this sense of “information scent” could be stronger with a deeper IA than with a shallower one.

It may also have a lot to do with where you got started. A perfect example here is personas. If you have some background in marketing, when you hear the word “persona,” you automatically translate that into “segments.” It’s not the same! And it can be really hard to shift gears and understand the difference.

That actually reminds me a lot of the linguistic concept of false friends. Not to go too far off topic, but that’s when a word you know in English, say, doesn’t mean the same as a word in another language that sounds just like it. For example, don’t say you’re embarazada the next time you slip up around your Spanish-speaking friends. It means “pregnant”!


Will Rogers doing a little man-machine research 
on some early radio communications hardware

Thursday, February 28, 2019

It is not how much information there is, but rather how effectively it is arranged. (Edward Tufte)

I agree with this statement up to a certain point. Of course, in a typical situation this is spot on. If I had a dollar for every time I’ve flagged “grey blocks of text,” I’d be retired now. Honestly, it’s funny how frequently and consistently that comes up.

I never really tried to analyze why, but off the top of my head, I would guess it’s how all writers are trained. And that goes all the way back to grade school. Think about it. What did Miss Thistlebottom at Woodrow Wilson Elementary School want? Big grey blocks of text – each preferably with topic and summary sentences (and none of those beginning with a conjunction or ending in a preposition)!  And something similar continued through middle school, and high school, and college. The whole point was to show that you had understood a particular topic by writing about it at length. You were also trying to impress the teacher with just how darn smart you were. Wordiness was something to be encouraged. Long paragraphs and sentences were a good thing.

I’ve actually seen something similar even with college-level students who were in writing programs. And, here, I don’t mean creative writing, but journalism, and marcomm, and even professional writing. A newspaper article, a marketing brochure, and a press release are not, however, all that different from what those students were writing back in 6th grade – at least in terms of structure and look, if not in quality.

Everything, though, changes when you go online. We all know that people don’t want to be made to think (thanks, Steve Krug), but it just so happens that they also don’t want to be made to read. Instead, they are much more likely to want to scan and skim.

Now, I’m not necessarily talking about an online article here (in those situations, readers are apt to “scan and swoop”). What I’m talking about is someone trying to complete a task – sign up for something, shop for something, pay for something, find some bit of information, make a decision, do something other than just read for pleasure.

In those cases, readers will scan and skim. And the smart writer will be sure to support that strategy. And that includes using lots of chunking, plenty of lists, more titles than seem necessary, and some way to emphasize keywords. There’s no better description of that strategy (and why it’s so necessary) than some research that Nielsen Norman started doing way back in 1997.

Returning to Tufte’s maxim … 90% of the time, he’s got it nailed. In some cases, though, there really is just too much stuff. I know. I’ve seen that too.


I’ll bet Tufte never thought in a million years that 
he would be roped into a discussion about writing

Tuesday, January 22, 2019

A wonderful interface solving the wrong problem will fail. (Jakob Nielsen)

In other words, there is a big difference between usability and utility. 

Say you test a new system, and it performs well. It’s then released … and sinks like a stone. What just happened?

Sounds like something that worked just fine, but didn’t actually solve any user needs. In other words, it’s usable, but useless at the same time.

So, should usability engineers be testing for both conditions? Isn’t our job merely a matter of the former, and not the latter?

Well, if you really want to add some value to what you do, though you can concentrate on usability, you need to keep your antennae out for utility as well. An obvious way to go about that is to ask upfront. 

The SUS questionnaire, with its first statement being “I think that I would like to use this system frequently,” is perfect for that. Just be sure to follow up. Another thing you can do is to just ask straight out in the debrief. A question like, “Is this something you would use?” should work just fine. 
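Since the SUS came up, it may be worth spelling out how its scoring actually works, as this is a spot where people “in the middle” often stumble. The standard procedure: each of the ten statements is answered on a 1–5 Likert scale; odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is multiplied by 2.5 to land on a 0–100 scale. Here’s a minimal sketch (the function name `sus_score` is mine, not part of any standard library):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten Likert
    responses, each an integer from 1 (strongly disagree) to 5
    (strongly agree), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:       # odd-numbered items (1, 3, 5, 7, 9): positive
            total += r - 1   # contribution is score minus 1
        else:                # even-numbered items (2, 4, 6, 8, 10): negative
            total += 5 - r   # contribution is 5 minus score
    return total * 2.5       # scale the 0-40 sum up to 0-100

# A respondent answering 4 on every odd item and 2 on every even item
# contributes (4-1)*5 + (5-2)*5 = 30 points, so the score is 75.0.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

The alternating wording is deliberate: it keeps respondents from just ticking the same box ten times, which is also why you can’t simply average the raw numbers.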

One more thing to consider is to include self-directed tasks. In other words, instead of a super-specific task that asks the user to, say, find what date check 1338 cleared, you could start out by asking them if they use checks. If they say no, you now have some information about the utility of that particular feature. You can also, of course, still ask them to complete the task (“for the sake of this exercise,” I usually say). An additional benefit of this approach is that it keeps users task-oriented, but also gets them to start thinking about the value of what you’re asking them to do. 

Now, is there a way you can stop such problems from happening in the first place? Well, it’s really not part of a usability test, but if you can get involved upfront – doing ethnography, in-depth interviews, focus groups – these are obvious places to address the utility question and to get at users’ true unmet needs.

Heck, why go to all the time, expense, and trouble of solving a problem that doesn’t even exist? Surely, there are more than enough problems out there that do.


Friday, January 11, 2019

There is no such thing as a “user error.” (Anonymous)

Boy, do some users love to blame themselves. You know the type … “Oh, I’m not very good with computers.” “You know, my husband would be a lot better at this.” “Man, that was dumb of me.” In my informal taxonomy of user types, I call this type the “Charlie Brown.” 

I usually just tell them something along the lines of, “Oh, no, not at all.” If they persist, or if it really seems to be an issue, I do the traditional spiel about “this is not a test of you.”

If that doesn’t seem to be getting through, though, I usually follow that up with:

“This is a test of the system. You can do no wrong here today. If there’s a problem, it’s a problem with the website [or app or whatever], and I want to hear about it.  That way, we can fix it up, and it won’t be a problem for others.”

I might also talk about how they were recruited just for this test and are the perfect person for it, that the system was designed for someone just like them, and that I’ve got 10 other people coming in that week who are exactly like them. I don’t like to do it, but if the user is really struggling with this issue, I might even go so far as to mention that other users had exactly the same problem.

Now, on the other hand, there are also some team members who love to blame the users as well. With them, my approach is a little bit different. ;^)  I might start out by cocking my head, frowning, and giving them the eye. If that doesn’t work, I usually point out that this person is a customer. Now, I might also sympathize a little with them by confessing that the user was difficult for me as a facilitator and that they were definitely on one side of the sophistication scale. That said, I also try to firmly get across the fact that this is a real user and needs to be addressed somehow in the design.

And if that doesn’t work, I have no hesitation about reading that observer the riot act. That usually involves reiterating that the “user is not you,” there are many different user types, empathy is a sure sign of a good designer, and – finally – these “idiots” also just so happen to be paying your salary.