Wednesday, December 14, 2016

My friends tell me I always point out problems but never offer a solution, but they never tell me what to do about it. (Dan Gilbert)

I see myself as a professional scab picker. If there is a problem with your design, I’m the one who’s going to pull that scab off and make it bleed. I can do that in an eval or – better yet – let your users do that for me in a test.

So, does that make me a popular person? Well, not exactly.

Are there some things I can do to make that hurt just a little bit less? Why, yes, there are.

One thing I always try to do is to provide good results as well as bad. I’ve already written a couple of posts that address that issue (Even developers have feelings …, Don’t fear mistakes …, A successful test is one that …). 

Another, though, is to offer some solutions. If I’ve spent hours upon hours preparing for this particular test, running it, watching the tapes, sifting through all the data, then summarizing it all in a way that makes sense to anyone, chances are some ideas are already going to occur to me. And seeing as I’ve been doing this testing thing since practically the dawn of time, I may well have run across this problem before and seen a decent solution to it. So, why the heck not share any possible solutions I may have come up with?

Now, at the same time, I am not a designer. I may also not be totally privy to what’s already been considered and thrown out, what might not work from a business standpoint, what our competitors happen to be doing, some elegant solution that someone on the team saw in a totally different context ... In other words, I really don’t expect my solution to be adopted without any further debate. 

That said, I have, over the years, been able to cut to the chase in a few rare situations and basically offer up something that the team can adopt pretty much ready-to-wear. Saves a lot of trouble. Definitely cuts down on wheel reinvention.

Overall, though, all I really want to do is just get the ball rolling. And that, in turn, is really just subsidiary to my real goal here – identifying what the actual problem is, why it’s a problem, and how serious a problem it actually might be.



I just can't tell if it's this guy ...



... or this guy

Tuesday, November 29, 2016

A user interface is like a joke. If you have to explain it, it’s not that good. (unknown)

I’ve been doing this UX thing for about 30 years now.  In the beginning, there was a lot of explaining.  Believe me, green-screen, mainframe systems needed it badly. Most of them came with a veritable library of manuals. In fact, that’s how I got my start in this business – writing paper manuals.

My next big thing, though, was online help. Now, that was a real step up from paper manuals. At the same time, though, I found there were two different styles of help – one that was really helpful and one that was basically just throwing a manual online. The helpful version made help contextual – for example, help for a particular field that was right next to that particular field, or help for a particular page that actually appeared on that page. The unhelpful version took the user to a totally separate system, where they had to browse through what was basically a table of contents, or to scan what was basically an index. 

As the industry matured and matured, moving from software to websites to apps, I noticed that things seemed to be getting simpler and simpler – and less and less in need of explanation. It appeared that we were finally and truly moving to that holy grail of “intuitive obviousness.” 

Recently, though, I’ve noticed things taking a step back, returning to the bad old days of explanation. What I’m talking about here specifically are things, which usually appear in apps, called “coachmarks.” They’re those little boxes that might appear when you first log in, with little arrows pointing to and explaining not-necessarily-totally-obvious features. 

Now, there are some good reasons for these. For one, small screens simply have a lot less space. And that means that there might not be enough room to spell everything out. We can hope that users will explore a little, but we can’t always count on it. So why not help out by exposing a few things, right?

There are, however, also some bad reasons. For example, some apps might be trying to do too many things, and simply need to be scaled back. Some might also have too heavy an emphasis on graphic design. Left to their own devices, I’ve noticed that graphic designers sometimes opt for the “cool” and “slick” over the explicit and obvious. “Affordance” probably needs to be more of a part of those designers’ vocabularies. 

This is especially a problem when design that works for the small screen is ported – pretty much without any translation – to larger and larger screens. For example, why use a “hamburger” menu when you’ve got plenty of room to actually spell it out? As Luke W (or was it Nielsen Norman?) pointed out, “It’s mobile first, but not mobile only.” But that’s a great topic for a whole other post.


It's like a mini manual!

Thursday, October 6, 2016

To err is human. To really foul things up, you need a computer. (Paul Ehrlich?)

This one dates all the way back to the 60s. I can picture it hanging on somebody’s cubicle – maybe somebody with a white short-sleeve shirt; tie; pocket protector; dark, chunky glasses; and a lame attempt at sideburns.

Sadly, it’s still pretty accurate. Yes, we’ve come a long way [baby], but there are a number of factors that probably mean that this one will be with us always.

First of all, humans are still a lot better at certain things than computers. Now, a computer might be quite good at searching a huge database, or crunching numbers, or analyzing chess moves. Humans, though, are equally good at nuance, and vagueness, and emotion, and a different kind of complexity. Yes, with AI and machine learning, this is rapidly changing, but when it comes to the Turing Test, my money is still on the human.

Second, we need to remind ourselves that humans are still largely in charge. And what that means is that they still do the coding, and the design, and the requirements, and the QA – and basically put the darn things together. So, there’s still plenty of room for these error-prone humans to leave things out, to add in bad things, and to generally screw up the interaction between human and computer. It’s how our field got started after all. And – once again – I don’t see this going away anytime soon.

Finally, it seems to me that computers may also have finally passed a certain threshold. To me at least, they appear to be too complex for us mere humans to predict what they will do, how we should interact with them, what can go wrong, and how to fix them. Theoretically, we can get to the bottom of it all, but there are typically so many things in play, that it may take a lot of effort – perhaps even an infinite amount of effort – to really figure it all out.

There’s actually a book out there that speaks very specifically to this.  It’s called Overcomplicated, and is by a Silicon-Valley-type named Samuel Arbesman. I haven’t gotten to it yet, but it’s #1 on my list. It promises to “offer a fresh, insightful field guide to living with complex technologies that defy human comprehension.” Can’t wait!


Love the sideburns!

Friday, September 2, 2016

Usability bugs don't crash the system; they crash the user. (Jurek Kirakowski)

I was lucky enough to get in on the ground floor in my profession. My grad school, Carnegie Mellon, can make a legitimate claim to having invented the think-aloud method. My first company out of school, Digital, was one of the first companies to really take usability seriously. It’s where Jared Spool, Steve Krug, Karen Holtzblatt, Dennis Wixon, and Chauncey Wilson all got their starts.

Unfortunately, Digital also famously bet against the PC. So, after five wonderful years there, I found myself in a wonderful city (Charlotte, NC), but one that was not exactly Boston, or Austin, or San Francisco when it came to furthering my career. Wanting to stay, I took jobs as a tech writer and instructional designer, but was always on the lookout for my next usability gig.

Sadly, that involved many interviews where I had to explain exactly what usability was. In particular, I had to explain how it wasn’t QA. That was what most people were familiar with, so I typically had to tie the two together somehow to explain what it was I actually did. 

I usually made the point that QA usually looks for things that “crash the system.” Usability, on the other hand, finds things that don’t necessarily crash the system, but might as well have. In other words, if the user can’t find your link, or doesn’t understand what to put in a field, or never goes to the proper menu item because it’s worded wrong, the effect is the same – the user can’t complete their task; ergo, the system failed.

Unfortunately, having to offer this explanation was also a signal to me that this place might not be able to best use my skills. And what was particularly frustrating was when the job description included the word “usability” in it – without HR, or the hiring manager, or whoever actually knowing what they were really talking about.

Yes, this story does have a happy ending. I finally found someone who did know what a usability engineer was – and also needed one desperately. Happily, they were also one of the biggest banks in the country, and I started their usability practice. Interestingly, though, they also no longer exist either. Hmm, you don't think it’s me, do you?


Jurek Kirakowski - professor, author, father of SUMI, and fashion icon

Tuesday, August 16, 2016

We'd rather people are bored than confused. (Joe Lanman)

And that’s probably what makes usability engineers different from marketeers, or venture capitalists, or hot-shot designers, or budding entrepreneurs …

There is an incredible attraction to the bright, shiny object; to sizzle; to wowing and delighting your users; to making a splash in the marketplace. And there is definitely a place for that.

It is, however, a pretty high-stakes game. If you do it right, you might indeed achieve all those goals. If not, though, you might well fall flat on your face.

Now, here’s the rub … Not every user wants to be delighted or wowed, especially when they are simply trying to complete some basic task – buying something, looking up information, making a reservation, getting the balance on their bank account. Usually, they just want to get that task done, and without too much effort.

In that regard, boring can be a pretty good bet. Perhaps your interface doesn’t really need all those gizmos and gadgets and cool design trends you saw on those apps you and your friends were sharing the other day. 

Here’s the question you need to ask yourself … Are my innovations helping the user complete their task, or are they simply getting in the way?

Two great ways to accomplish the former are 1) to give the user functionality they never had before, and 2) to make your UI as clear and simple as possible. Examples of the former abound – Uber, eBay, Amazon, Venmo, Tinder … Examples of the latter are not as obvious, but there are still plenty out there (Google is always my favorite). In fact, a lot of real winners manage to do both at the same time.

On the other hand, one great way to get in the user’s way is to design your site, app, whatever around those gizmos and gadgets and cool design trends just because you think they’re innovative in themselves. They’re not. True innovation comes from solving user problems and then just simply getting out of the way.



Joe is an interaction designer for the UK Government

Monday, August 8, 2016

The test must go on. (Laura Klein)

What an incredible production a test can be. You’ve got to recruit the users, make sure the prototype is working, set up the meeting invites, get your test plan approved, reserve the lab, set up the technology, run your pilot, put your material together, get some water, get those gift cards ... 

It’s a lot like putting on a play. And, like a play, when the people start showing up (observers and users, in our case), there’s absolutely no backing out.

(Even when I’ve actually had my project team pull out [because of major switches in strategy, prototypes that just won’t work], I’ve still been able to get something out of it. Typically, I’ll turn it into an interview, or maybe a test of what some competitors offer, or maybe a test of something totally different. But with all the users recruited and paid for, you really need to do something.)

So, with all this complexity, it’s inevitable that something will go wrong. I don’t think I’ve ever had a prototype that hasn’t had some glitch in it. Heck, I’ve even had production systems go down on me. As for users, there will always be no-shows, or poor recruits, or late arrivals, or the ones who just won’t talk. On the technology side, cameras sometimes don’t work, feeds go black, and recording software crashes. And all that’s not even taking into consideration user error – i.e., the poor facilitator who’s trying to do a million things at once. 

The important thing to realize, though, is that every test is going to have some issue. At the same time, however, you will still get tons of great data. Now, some of that data might have to be thrown out, or some of it might have to be taken with a grain of salt, but it is truly amazing how much even the most imperfect of tests will give you.

The real challenge often is getting your observers to understand all this. And, sometimes, that starts right off the bat. One thing that I like to tell them is that “I’ll test whatever you can get me” and that “I can guarantee we’ll get something out of it.” Overall, though, my goal is to get them to relax, let the test happen, and concentrate on the results. 


Laura is the president of the wonderfully named Users Know, as well as the author of UX for Lean Startups and Build Better Products

Wednesday, July 13, 2016

A word is worth a thousand icons. (Jeff Atwood)

My variation on this is that “a word is worth a thousand pictures.” In context, I think it’s more clever. The other, though, gets a little more directly to the point.

And what would that point be? Well, it actually reminds me – the former writer and linguist, that is – of acronyms. We all know what acronyms are. We may not, however, be able to state exactly where they come from, what purpose they serve, and how and when they can be abused.

An acronym is really just a way to speak more efficiently. If, for example, you are in the national security industry, you’ll probably be talking a lot about the Central Intelligence Agency, and the National Security Agency, and maybe even the Komitet Gosudarstvennoy Bezopasnosti. How much more efficient to just say the CIA, the NSA, and the KGB.

Now, the problem arises when someone doesn’t know what those acronyms stand for. You, for example, probably know what an IA is, or SUS, or the K-J method, or MVP (and I don’t mean “most valuable player” here). Does your spouse though? Well, then how about your mom? Now, how about BABIP, VORP, DIPS, and WAR? Unless you’re a baseball nut like I am, those probably didn’t mean a darn thing.

And that’s the thing about icons. They act a lot like acronyms in that they allow you to communicate a lot of information in a very small space … unless they don’t, and then they don't really communicate anything, and fail miserably.

Now, some icons are pretty obvious. A printer for a print icon, for example, is something that pretty much everyone’s going to get. And there are also icons that, though they are not intuitively obvious, people have definitely learned over time. The perfect example is the floppy disk for saving. I mean, honestly, when’s the last time you used one of those? On the other hand, have you ever had any issues clicking on that to save something?

The problem arises when the icon is neither obvious nor learned. And that’s why I tell my project teams to add in a label, when they can. Of course, there are times when there isn’t room enough, especially on smartphones. You’d be surprised, though, how rarely that is actually the case, and how often you can indeed fit those words in.

A special issue arises when icons – and acronyms – are used not for efficiency’s sake, but for something much more nefarious. To return to acronyms for a second, those are famously misused to signal membership in a special club and to exclude others. How many times have you been in a business meeting, or talking with techies, and wondered what the heck they’re talking about? A similar thing happens with icons as well.

In particular, I’m sometimes struck by how readily graphic designers will resort to them. I’m also often struck by how coming up with icons seems to function more as an exercise in aesthetics than as an effort to really communicate with users. The icons that result are invariably “cool,” and “slick,” and “on brand” – and admired by other graphic designers. Often, though, the user may have no clue what they’re for.


Jeff is a developer and one of the founders of Stack Overflow.
It’s good to see that even developers get it too.

Tuesday, July 5, 2016

The fact that surprising results happen is exactly why we conduct actual research. (Jakob Nielsen)

I’ve been doing this for 30 years now. I’ve got over 3,000 users under my belt. I also just so happen to have been in the same industry for 20-some years. And I’m still surprised by at least something on every test I run.

Now, maybe that’s because I’m particularly dense, and haven’t been able to learn much over all those years and users. (And it is true that there are plenty of things that I am not surprised by – things that I see over and over.)

But if you think about it, there are a ton of reasons why even Jakob Nielsen would share my same sense of surprise. First, every test is different. Even if you’re doing a second round on the same stuff, you’ll at least find something new on those changes you made from the first round, right?

Second, technology is constantly changing. I actually started out testing mainframe green screens. These days, I’m testing voice-activated applications on smart phones. Who woulda thunk it?

Third, people change as well. Though I am a firm believer in another of Nielsen’s quotes, the one that states that “technology changes; people don’t,” I still have seen many changes over the years in users as well. In fact, I think it would be pretty darn interesting to take those users I tested on mainframes and plop them down in front of an iPhone and see what happens. Yes, the basic human hardware of brains and fingers and eyes has not – and will not – change, but users’ experiences and cultural contexts certainly have.

Most importantly, though, usability testing takes what may be the two most complicated things around – humans and computers – and throws them together. With that formula, there’s absolutely no telling what’s going to come up.

But that’s what makes it so fun. Honestly, the fact that tests uncover surprising results is why I’m still around.  If I wasn’t getting surprised and learning something new on each test I run, I probably would have quit a long time ago.


“If you don’t get any unexpected findings from usability testing, you didn’t run the study right” is another of Nielsen’s great quotes





Friday, July 1, 2016

A checklist does not an expert make. (David Travis)

Whatever you call it – usability evaluation, expert evaluation, heuristic evaluation – what it is is a particularly cost- and time-effective way to make some improvements to your system. If you think about it, there’s usually enough low-hanging fruit on any system that having someone who knows a little about usability take a quick look at it is never a bad idea. Personally, I’m a huge fan.

But, how exactly to go about doing one? Well, there are two basic approaches. One is to use a checklist; the other is to use an expert. Often, the first one is what you use when you don’t have the second one around.

Over the years, though, I’ve found that there are some definite issues with that first approach. In particular, I’ve found that checklists are either too vague … or too specific.

The vague ones – like the famous 10 from Jakob Nielsen and Rolf Molich – tend to have way too few guidelines. Though these kind of guidelines may apply to many situations, they are typically pitched at such a high level that anyone who is not an expert may have trouble applying them to the very specific situations that are involved in an actual eval. Using a checklist like this, a non-expert is very likely to get lost in the forest.

The specific ones, on the other hand, tend to have way too many guidelines. Though they can be very specific, that same granularity means that it can be very tedious to go through them all, applying them to each screen and interaction in a real system. They also may miss things that involve multiple elements or that occur at a higher level than pixels and code. Using a checklist like this, a non-expert is very likely to get lost in the trees.

So, a true expert review is usually my go-to for these things. Note that that usually means expert in usability and in the particular subject matter domain as well. And all that comes about from running tests, analyzing data, doing interviews and focus groups, and all the many things that separate someone who knows what they’re doing from someone who's just giving it a shot.  


David Travis is a consultant based in the UK
(and has some other great ideas about evals in this blog post)

Monday, June 27, 2016

Nobody cares about you or your site. What visitors care about is getting their problems solved. (Vince Flanders)

This one’s a bit harsh, but I really believe harshness is what’s called for in this situation.

So, who should we start with?

How about graphic designers? For them, it seems like a lot depends on how long they’ve been out of school, or alternatively how artsy their program was. If it’s “not much” and/or “a lot” for those two questions, you may have a problem on your hands. What that actually equates to, often, is that the graphic designer will be creating something, not for the user, but for their portfolio instead. Scratch most graphic designers, and you’re likely to find someone who would really rather be drawing comic books or painting masterpieces, and is really only pushing those pixels around just to pay the bills.

Writers? Well, they might be even more problematic. Chances are, even if they came from a tech writing program, they got into the field because of their love of creative writing. Scratch most writers, and you’re likely to find someone who works on the Great American Novel in their spare time. Now, it’s not that they’re concerned about their portfolios so much this time, but that it can be a lot harder for them to get jazzed up about short, sweet instructions instead of short stories and sonnets.

A similar problem comes with writers with a communications or journalism background. Writing press releases about organizational changes or articles about city council meetings is not always the best preparation for trying to explain a website feature or sell a product. Yes, copywriting skills are very translatable online, but whatever you happen to be writing, you have to realize that, online, it’s a whole different ballgame. In the online world, the less there is, the better. That can be a pretty hard adjustment to make for anyone who ever had to work with word counts.

A final group you might have issues with are developers. They’re usually pretty good at just building what they’re supposed to, but sometimes they will fall in love with some widget or gizmo or some special way to code something.

In general, anyone on the project team – information architects, interaction designers, business analysts, clients, whoever – can become a little enamored of what they’ve come up with and lose sight of the fact that there is a purpose to the website, and that that purpose is not necessarily to create something “cool” or for you or the team to look good.

For all these groups, what’s needed is a realization that doing their particular thing is even more exciting when there are some constraints involved (with numbers one and two being what the user wants and what the business wants). Those constraints make the problem bigger, and more challenging, and – ultimately – more satisfying when you finally nail it.

A final thing to realize – and this may be the hardest thing to get comfortable with – is that you will ultimately be successful only when nobody notices you. Success is when your graphic design allows users to successfully pick the product they want from a comparison chart, your content lets users complete this form so they can sign up for that service they need, or really whatever you come up with simply lets users do what they came to your site to do.


The man behind the wonderful webpagesthatsuck.com

Tuesday, June 21, 2016

The only thing more expensive than hiring a professional is hiring an amateur. (Red Adair)

Not too many years ago, you needed some serious chops to pass yourself off as a usability engineer. In the beginning, you probably had a PhD, worked at someplace like IBM, and might even have had a white lab coat hanging in your closet somewhere. A little later, as usability really started to take off, you didn’t need so many formal trappings, but training at a real industry leader – like a Digital or a Fidelity or a Sun – was almost a prerequisite.

And then something funny happened … It was the days of the dot-com bubble, and all of a sudden, usability people were – well, not exactly rock stars – but well-known and somewhat respected people. Average IT and business people knew who Jakob Nielsen was, threw the word “usability” around right and left, and were willing to throw some money around as well to make their products “user-friendly.”

So, as it often does, the market responded by magically creating a supply to meet that new demand. People started sprinkling the word “usability” on their resumes, making sure they dropped it in interviews, and maybe even giving one of those usability test thingies a whirl all on their own. They didn’t necessarily have to study it in school, or train at an industry leader, or even read a book or go to a conference. Hanging out your shingle and making the claim that you “did usability” was often enough. Heck, the people hiring them had a hard time understanding the difference between usability testing and QA, so it wasn’t too hard to fool them.

Heck, it was just as easy to fool yourself. You have to actually know a little bit about a topic before you realize you don’t know squat. And that’s kind of what happened with this new crop of “usability engineers.” Now, some of them turned out to be great – they had the skills, and the motivation, and made an effort to educate themselves, and gained some real experience. But a lot of them were just awful.

I know. I saw them. I saw tests that were more like interviews and – in some especially awful situations – product demos. I saw facilitators who talked more than the users. I saw leading questions and body language that practically shouted. I saw tests that were run on the fly, and reports that were little more than transcripts. I know. I was there.

Now, usability is famous for abjuring purity in favor of results. In fact, this approach is really what’s behind Steve Krug’s Rocket Surgery Made Easy. Personally, I’ve often said that any test, no matter how quick-and-dirty or informal it is or how many things go wrong, is still going to give you some useful feedback.

The fact of the matter, though, is that there is a certain baseline to all this. No, you don’t need a famous brain surgeon to stitch up the slice on your hand you gave yourself working in the yard. But it’s not something you’d usually do yourself, is it? Chances are you’re going to head somewhere where someone with at least a little training and practical experience can set you to rights, right?


Yup, that’s what Red did.
Usability testing isn’t exactly in the same ballpark, but what the hey …

Tuesday, June 14, 2016

The fanatic preparation for the possible has the inevitable consequence of obscuring the probable. (Alan Cooper)

Ah yes, edge cases. AKA designing for the exception. I think we’ve all been there, right?

So, your project team has come up with a simple, elegant design to – oh, I don’t know – make a payment of some sort. And now you’re presenting it to your client; or the higher-ups; or your peers in IT, QA, legal, or what-have-you.

Holy cow! The things they come up with. What if it’s a bank holiday? What if the dollar amount goes into 12 figures? What if the user has more than 20 accounts to pay from?  What if it’s a full moon in a month with an “r” in it and Mercury’s in retrograde?

But that’s their job, isn’t it? Business analysts, QA types, developers, and others are typically focused on the far parameters of a system – how long should the field be, what if the user enters a date in the past, how many of this should we allow users to set up, how much of that do we need to say to be within the law?

Your job is to allow them to do what they do, but to be sure that the team also keeps the normal, the average, the expected in mind. Yes, what they’re concerned about will come up. They need to realize, though, that it will come up a lot less than they think it will.

They also need to realize that their edge cases do not need to drive the design. In particular, they need to be aware that, if they do insist on doing that, there will be consequences. Adding things in for once-in-a-blue-moon occurrences does not come free of charge.

That little change of theirs may, in fact, trip up all the other users it actually doesn’t apply to. Those users may wonder why that part’s there, or if it applies to them, or why you’re so bent on throwing all this extra stuff at them. In fact, those little changes may add up, and have a much larger impact, on a much larger group of people, than leaving all those edge cases out entirely.

So, how to solve for this? Well, just having these folks’ consciousness raised is sometimes all that’s needed. Pretty much everyone’s familiar with the old 90/10 rule, so I’ve found it’s always helpful to invoke that in these situations.

Another thing that I swear by is progressive disclosure. And all that means is that the user doesn’t have to see everything at once. You could, for example, hide the details behind links, or not show something until the user gets to a particular point.
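The basic idea is simple enough to sketch in a few lines of plain JavaScript. (This is just an illustration of the concept – the function and field names here are my own invention, not from any particular framework.) Every piece of the form is tagged as either essential or advanced, and the advanced pieces stay hidden until the user explicitly asks for them:

```javascript
// Minimal progressive-disclosure sketch: advanced (edge-case) fields
// stay hidden until the user expands them, so the default view shows
// only what the typical user needs.
function visibleFields(fields, showAdvanced) {
  return fields.filter(f => !f.advanced || showAdvanced);
}

const paymentForm = [
  { name: "amount" },
  { name: "fromAccount" },
  { name: "deliveryDate", advanced: true },     // edge case: future-dated payment
  { name: "internationalCode", advanced: true } // edge case: overseas payee
];

// Default view: just the basics
console.log(visibleFields(paymentForm, false).map(f => f.name));
// → ["amount", "fromAccount"]

// After the user clicks something like "More options"
console.log(visibleFields(paymentForm, true).map(f => f.name));
// → ["amount", "fromAccount", "deliveryDate", "internationalCode"]
```

The edge cases are still fully supported – they just don’t get in the way of the nine-out-of-ten users who will never need them.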

Once again, though, just getting your team to realize that adding all those exceptions in does come at a price may be all that it takes to make them stop and think a little.





Tuesday, June 7, 2016

People often test to decide which color drapes are best, only to learn that they forgot to put windows in the room. (Steve Krug)

Over the years, I’ve actually found that this is more of a problem with more experienced clients and observers. Newer ones are often happy to just be there. They often have no idea what to expect from a test. More experienced folks, however, often go into testing with preconceived notions of what the problem is and what testing is likely to find.

Here’s how it typically goes … One of these more experienced people will come to me and ask for a usability test to get at some minor detail in some design their team has been working on. To them, everything else is perfect, but there just so happens to be this niggling little detail that the design team just couldn’t get closure on. That's all it is ...

So, the first thing I normally do is try to get the client to step back a little bit. I usually just ask what the design is for, what it lets the user do, how it’s supposed to work.

I then get the client to agree to making the test more task-based. And that’s what usability tests do best – getting real users to sit down in front of a system and try to do real tasks. Now, that will get at the particular thing the client is interested in, but it will also get them feedback on the rest of the design, on the user’s whole experience, on things that might never have even occurred to the design team.

The final thing I have to do is to get the client to have an open mind.  ;^)

It really is a little ironic that I’m the one who always expects to be surprised. I mean, I’ve been doing this for almost 30 years, and spend at least 100 hours every year one-on-one with users. I should be the last person to be surprised.

That, though, may actually be my favorite part of the job. I’m always learning something new. In fact, if that wasn’t the case, and everything was predictable, I probably would have left the profession long ago. Maybe it’s because I’m basically a scientist at heart, but I always approach any test with an open mind. I really can’t wait to see what turns up. Surprise me, users!


Wednesday, May 25, 2016

Many innovations fail because consumers irrationally overvalue the old while companies irrationally overvalue the new. (John Gourville, Harvard Business School)

Usability engineers often work as ambassadors. Typically, they act as go-betweens between a company and its users – in particular, between the users and the company’s design team (IAs, IDs, graphic designers, content writers, etc.).

And there’s the rub … Design teams are creative people. They tend to love the new. In fact, I might even go so far as to accuse these people of neophilia (and, yes, that is a real word).

And one of the things that these groups love in particular is technology. And the word for that is technophilia.

Now, none of this is to say that the average user is a neo- or technophobe. In fact, I’ve found that, of every 10 users who come into my lab, at least one of them is going to be something of a neo-/technophile themselves (my observers always tend to really like these users, by the way).

In fact, you may well be in a job where your subject matter and users probably flip that 90/10 finding around. Now that, of course, is its own challenge.

For many of us, though, we are typically dealing with an audience that is less technically sophisticated than we are. Think about it …  We are a self-selected group. If you are an IA, or an ID, or what have you, chances are you probably like technology, can’t wait to get the latest gadget or app, and proudly call yourself an early adopter.

Our users, however, typically do not self-select in that way. Yes, they do come to us because we have something they want or need. But that thing is almost never the UI. What they really want is to buy the book, or watch the video, or look up that piece of information. If you have a great new idea that helps them do that, then great. If, however, your brilliant new idea gets between them and the completion of their task, watch out. 

Now, as a usability engineer, you’ve probably already worked all this out for yourself and everything I've said here is totally obvious to you. What your main job may be is getting the rest of the team on board as well. Good luck!


You can find the actual article right here

Tuesday, March 29, 2016

One aspect is common to all innovations – they solve problems. (Nir Eyal)

There are a lot of companies out there trying to be innovative. How they go about doing that, though, is what really matters.

Now, you may simply work at a company where everyone is a genius and comes up with at least a dozen brilliant ideas every day. I guess that means places like Apple and Google and Facebook and Twitter … 

Unfortunately, though, it’s probably not the company you work at. I mean, we can’t all be astronauts when we grow up now, can we? Not that that’s going to stop a lot of us from thinking we work at Google, or that we’re the next Jony Ive, or that we’re going to come up with the next Snapchat.

So, if we’re not crazy, brilliant geniuses, how do we come up with those great innovations? One thing I do not recommend is sitting around staring out the window. Another thing I don’t recommend is the group version of this, the brainstorming session.

What I recommend instead is to try to identify what problems your customers or users are having and then try to solve for those. And one way you can do that is to simply ask your customers. Believe me, they’ll have a lot of good ideas.

At the same time, though, they may not be fully aware of all the problems they are having, or that something might be done in a better way, or that something might even be a problem. In addition, they might not be that good at articulating those problems, or prioritizing them, or identifying their root causes. So, customer feedback – though invaluable – will always be a partial solution.

One thing I’ve found particularly valuable for uncovering problems – and, ultimately, for coming up with innovations – is ethnography. Now, all that really means is “field studies” – watching people complete their own tasks in their own environment. These tend to generate a ton of data, data that covers the user’s whole experience.

Further, what these things are particularly good at is identifying the pain points, the gaps, the holes – even when users may not be totally aware of them themselves. And once you’ve identified those, it’s actually not that hard to come up with some pretty creative ideas – ideas that can address those gaps, ideas that not every other company might be pursuing, ideas that might actually be genuinely innovative.


Believe me, it’s a lot more than clever graphics involving light bulbs

Tuesday, March 22, 2016

Don’t fear mistakes. There are none. (Miles Davis)

I feel for my UX teams – the IAs, the IDs, the writers, the graphic designers, even the coders. They put their hearts and souls into their designs. It must really be hard to watch someone rip it apart or fail miserably trying to actually use it. And that must be especially difficult when you have to simply sit there and watch, with a thick piece of one-way glass between you and that user.

Something similar happens during any usability test report-out. Reports tend to simply report the findings – and not point fingers – but it still must be hard to hear. 

Now, personally, I do try to do some things in my reports that can help the team feel a little better. Probably the most important is to make sure there are positive results (ones to celebrate) as well as negative ones (ones to fix). 

I also do something similar during actual testing as well. Like most usability engineers (at least I would hope), I try to debrief after every user. These things – at least for the first few users – tend to be a little awkward. One way I’ve found effective to break the ice is to offer a few observations about things that worked well. That usually gets the ball rolling, and the team will naturally move on to the things that didn’t work so well.

A lot, however, depends upon the individual. Some people are just a lot more sensitive than others. I find that sensitivity is especially the case for newer team members and for those who are simply new to usability testing or UX in general. (Very experienced designers can’t wait to get into the lab. Their philosophy is generally, “Bring it on!”)

In fact, I’ve actually noticed that these newer team members often go through something not unlike Elisabeth Kübler-Ross’s Five Stages of Grief. The first stage, for example, is usually a combination of Kübler-Ross’s first three – denial, anger, and bargaining. Observers might ask, “Where do you get these users from?” Or they might focus on issues with the prototype, or question and carp about the methodology. The best way to handle observers at this stage is to keep them from over-reacting (even if this means allowing them to vent a little) and to make sure that they come to some more tests. 

A second stage – after a number of users or even after a number of tests – is often something not unlike depression. When that happens, I try to be supportive. I might, for example, point out what worked well or some easy fixes. Finally, though, the observer reaches acceptance. And at that point, they are probably pretty well sold.

One thing that I like to tell observers wherever they are on their journey is that testing their stuff and incorporating the feedback is a real feather in their cap. I also go on to say that not everyone gets a chance to get their stuff tested, and that their willingness to do so really separates them from the average IA, ID, writer, graphic designer, or even coder.


One of my all-time faves

Tuesday, February 23, 2016

Just because no one complains doesn't mean all parachutes are perfect. (Benny Hill)

Does your organization make use of voice of the customer (VOC) systems? These typically combine traditional ways of getting customer feedback – surveys, “give feedback” links, and so on – and display them all in one system. They usually try to automate everything as well, using things like textual analysis and machine learning. They also tend to feature lots of cool graphs, a snazzy dashboard, and all sorts of bells and whistles.

Personally, I think they’re wonderful. Hey, it’s all feedback, right? 

At the same time, though, I have run into a number of people over the years who seem to rely on this kind of feedback almost exclusively. Now, the various methods that make up a VOC system do have a number of strengths (sheer numbers are always at the top), but they have a number of drawbacks as well. 

So, what are some of those problems? I see three issues on the user’s end:

Knowing – First of all, the user needs to know that there’s a problem. In the lab, I often see users who think they have completed a task, but who actually have some steps remaining. I also sometimes see users complete another task by mistake, but be totally unaware that everything isn’t just peachy-keen. 

Another issue is work-arounds, a special problem for experienced users. They may be so used to doing things a certain way, they may not even be aware that their experience might have some issues, let alone complain about it.

A special issue for surveys is human memory. There is often a major time lapse between when a user has an experience and when they get surveyed. The chance of them remembering specific details can often be very low.

Articulating – Second, the user has to articulate the problem. Note this is not as trivial as it may seem. Believe me, I’ve been doing this for 30 years, and I still struggle figuring out exactly what went wrong in some instances during a test. Is this an example of poor cognitive modeling, Fitts’s Law, progressive disclosure, skeuomorphism? Now, imagine some grandma from Iowa trying to do something similar.

What you often get instead are very non-specific comments. Just as an example, it truly is amazing over the years how many times I’ve seen my particular favorite, “This sucks!” Not a lot of actionable information on that one, huh? (Just as an aside, one major strength of usability testing is that it allows follow-up to these vague responses.) 

Caring – Finally, the user has to care enough to share what they think with you. And that’s where those traditionally low response rates come from. In fact, would you believe that some companies are actually happy if they get a rate of 0.5%? Wow, how representative can that be?

So, who does fill out these things then? Well, there is typically a fair amount of self-selection going on. You might get haters, or fan-boys, or the terminally cranky and hard to satisfy, or the desirous to please. 

And that too is another benefit of testing. Though a test almost always involves small numbers, you do know what every one of those users thinks or would do – even if they would never respond to a survey or click on “give feedback.”

A final issue with caring is what I would call a threshold issue. In other words, with a VOC system, you’re probably going to get a lot of highs and lows. If, however, something was not a huge disaster – or, alternatively, a life-changing experience – it’s probably not worth reporting. 

In fact, you might well run into the death-by-a-1000-cuts syndrome. Just imagine several million users who have a couple of lower-level issues every time they log in to your system, but never enough to actually complain about. Now, imagine another similar system that comes along and doesn’t have any of those low-level issues. Imagine, further, that all those users leave you for that system overnight. What would you then have in hand that would give you any idea why that happened (or – even better – that something like that might be going to happen in the near future)?

On the opposite end of the spectrum, you can get something akin to Benny Hill's parachutes. In fact, one of my favorite clips of all time was when I was doing a test on a production system. At the end of a particularly trying task, a survey popped up. If I can remember correctly, the user said something along the lines of, "If they think I'm going to fill out their %#$@ survey after that @*#$% experience, they've got another &@^#$ thing coming."

In sum, VOC systems are wonderful, but they can involve their fair share of missed signals and false alarms. To make sure they are more than a glorified suggestion box, it can be helpful to triangulate their findings with other sources of data – web analytics, call center data, and ... even usability testing. 


Benny Hill, famous comedian, ladies’ man, and usability guru

Tuesday, February 2, 2016

Instructions must die. (Steve Krug)

I started out as a tech writer. I used to write manuals that would tell users how to use computer systems – hundreds and hundreds of pages of instructions.

In fact, that’s how I got into usability. The initial scenario would involve me going back to my developers and telling them that some particular feature took four or five pages to explain. 

“Could we make that a little simpler? What if we moved that bit to the next page, and got rid of this thing here? It might make more sense that way too, right?”

Over the years, developers started bringing me things to look at before I wrote them up. From there, it was a small step to asking for my input upfront, to letting me design a few things on my own – to even doing a little usability testing.

Now, that was a long, long time ago (we’re talking the ‘80s here, folks). It’s kind of strange how that instruction thing is still around though. 

Now, it’s been a long time since I saw – let alone worked on – a manual. What I’m talking about here, though, is something I often see in design or review meetings – basically, kind of a knee-jerk reaction to issues with fields or forms or pages to “just throw some instructions in there.” 

Now those instructions can appear at the top of the page, to the right of a field, under the field, in the field, in help, in FAQs, wherever … The real problem, however, is that nobody ever reads them.

And even if they do, they’re really just one more thing to add to the user’s cognitive load. Why can’t the action just be obvious on its own?  Why do we even need instructions? 

In Don’t Make Me Think, Steve Krug concentrates on boiling the instructions down. There are plenty of instances, though, where doing a little thinking can eliminate instructions altogether. 

My favorite example is probably the date or telephone number or Social Security number fields that won’t accept delimiters (you know, the / or -). Just strip ‘em out on the back end, and you can kiss those instructions goodbye.
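A minimal sketch of that kind of back-end clean-up might look like the following. (The function name and the exact list of separators are just my own illustration – the point is that the system, not the user, absorbs the formatting.)

```typescript
// Strip the delimiters users commonly type into date, phone, or SSN
// fields – spaces, slashes, parens, dots, and dashes – so the field
// can accept "555-12-3456" and "555123456" interchangeably.
function stripDelimiters(raw: string): string {
  return raw.replace(/[\s/().-]/g, "");
}
```

Do that, and the “please enter your number without dashes” instruction can simply go away.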



Thursday, January 14, 2016

Happy talk must die. (Steve Krug)

Sometimes, I joke with my writers that they must get paid by the word.

Actually, for some of them, that’s not that far from the truth. A lot of writers for the web wander over from more traditional, print-based media – newspapers, magazines, PR … And, in those fields, writers do have to deal with something called a “word count.” In other words, there is a set number of words they have to produce – even if they might not have all that much to say.

So, the first thing all these writers who are new to the web have to deal with is the fact that – I hate to break it to ya, fella, but – no one really wants to read your stuff. I’m sure that’s an incredible blow to the ego. It almost, though, seems like a rite of passage to have to come to terms with that fact.

There are two ways that last bit happens. Probably the most effective is to watch a few usability tests. That masterful opening paragraph that you spent hours on and are quite pleased with? Well, it looks like only 1 of 10 users actually read it. And, of the 9 who didn’t, did you hear their quotes? Did that one guy actually say, “Blah blah blah”? He did!

If that doesn’t work, I usually bring up all the research that shows that people really don’t like to read online, and why that is. My particular favorite is the seminal work that was done at the Nielsen Norman Group.

Nielsen Norman does, however, point to some real solutions as well. These are simple things such as:
  • Using lists
  • Bolding keywords
  • Breaking up paragraphs

What’s really great about these methods is that they support the way most people read on the web, something known as “scan and skim.”

The main point I try to get across, though, is about those big, grey blobs that you were rewarded for in previous lives. I hate to break it to ya, but they just ain’t going to work here. 


I’m pretty sure this is not what Steve was talking about
(Happy Talk is a song from South Pacific,
here covered by English Goth Punk band The Damned)

Monday, January 4, 2016

Things that behave differently should look different. (Alan Cooper)

There are two things going on here. First, there’s affordances. That’s just a fancy way of saying what a thing does should be obvious from the way it looks. An affordance is just the little thing that tells you what that thing can do.  

Think of a door. If it has a push plate, it tells you you have to push the door to open it. If, instead, it has something you can grab, that tells you you’re going to be pulling this one toward you. And if it has a fairly standard-looking handle, that means that you need to grab and turn it before you can do anything else.

In a more digital context, radio buttons say, “Click one of me.” Checkboxes say, “Click as many of me as you like.” Sliders say, “There’s more here.” Links say, “I’m going to take you to a new page.”

That last one is actually a good example of what Cooper is talking about here. A very traditional standard for links is blue and underlined. A lot of sites, however, get a little creative. They might, for example, ditch the underline, or use a different color, or both. In that situation, though, it’s much easier to be confused by the link’s affordance. A user might, for example, confuse a link with bolding, or a title, or nothing in particular, or what have you.

The other thing going on here is consistency. In fact, a typical corollary to this quote is, “Things that behave similarly should look similar.” 

Now, the whole point of consistency – and standards, which help deliver that consistency – is reducing cognitive load. In other words, don’t make me think! So, if users have already learned a particular affordance elsewhere – on the web, on your site, in life in general – they don’t need to learn something new. 

Just to make this concrete, I once tested a mobile app that had a lot of inconsistency. Interestingly, though, this was mainly an issue of location. For example, the Submit button was on the bottom right, the bottom left, the top right, and the top left. The app was also pretty inconsistent when it came to terminology. Submit, for example, might be “Submit,” or “Done,” or “Complete,” or just-see-your-thesaurus.

So, it’s really not just about affordances. There are actually all sorts of ways to be inconsistent.


And if you have a door like this,