Tuesday, November 29, 2016

A user interface is like a joke. If you have to explain it, it’s not that good. (unknown)

I’ve been doing this UX thing for about 30 years now. In the beginning, there was a lot of explaining. Believe me, green-screen mainframe systems needed it badly. Most of them came with a veritable library of manuals. In fact, that’s how I got my start in this business – writing paper manuals.

My next big thing, though, was online help. Now, that was a real step up from paper manuals. At the same time, though, I found there were two different styles of help – one that was really helpful and one that was basically just throwing a manual online. The helpful version made help contextual – for example, help for a particular field that was right next to that particular field, or help for a particular page that actually appeared on that page. The unhelpful version took the user to a totally separate system, where they had to browse through what was basically a table of contents, or to scan what was basically an index. 

As the industry matured and matured, moving from software to websites to apps, I noticed that things seemed to be getting simpler and simpler – and less and less in need of explanation. It appeared that we were finally and truly moving to that holy grail of “intuitive obviousness.” 

Recently, though, I’ve noticed things taking a step back, returning to the bad old days of explanation. What I’m talking about here specifically are things, which usually appear in apps, called “coachmarks.” They’re those little boxes that might appear when you first log in, with little arrows pointing to and explaining not-necessarily-totally-obvious features. 

Now, there are some good reasons for these. For one, small screens simply have a lot less space. And that means that there might not be enough room to spell everything out. We can hope that users will explore a little, but we can’t always count on it. So why not help out by exposing a few things, right?
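That “show it once, then get out of the way” behavior behind most coachmarks boils down to a tiny bit of gating logic. Here’s a minimal sketch in TypeScript; the function names, the storage interface, and the `coachmark:` key prefix are my own illustrative choices, not anything from a particular framework:

```typescript
// Minimal coachmark gating: a coachmark should appear only the first
// time a user encounters a feature. Names here are hypothetical.

type SeenStore = {
  get(key: string): string | null;
  set(key: string, value: string): void;
};

// Decide whether to show a coachmark, and record that it was shown.
function shouldShowCoachmark(feature: string, store: SeenStore): boolean {
  const key = `coachmark:${feature}`;
  if (store.get(key) !== null) return false; // already seen once
  store.set(key, "seen");
  return true;
}

// In a browser you would back this with window.localStorage (adapting its
// getItem/setItem names); an in-memory stand-in keeps the logic testable:
function memoryStore(): SeenStore {
  const m = new Map<string, string>();
  return {
    get: (k) => m.get(k) ?? null,
    set: (k, v) => { m.set(k, v); },
  };
}
```

The first call for a given feature returns `true`; every later call returns `false`, so the explanation appears exactly once and then stays out of the user’s way.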

There are, however, also some bad reasons. For example, some apps might be trying to do too many things, and simply need to be scaled back. Some might also have too heavy an emphasis on graphic design. Left to their own devices, I’ve noticed that graphic designers sometimes opt for the “cool” and “slick” over the explicit and obvious. “Affordance” probably needs to be more of a part of those designers’ vocabularies. 

This is especially a problem when design that works for the small screen is ported – pretty much without any translation – to larger and larger screens. For example, why use a “hamburger” menu when you’ve got plenty of room to actually spell it out? As Luke W (or was it Nielsen Norman?) pointed out, “It’s mobile first, but not mobile only.” But that’s a great topic for a whole other post.
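The “room to spell it out” test is really a one-line breakpoint decision. A hedged sketch, where the 768px cutoff is a common convention I’m assuming, not a figure from Luke W or anyone else:

```typescript
// Decide between a full, spelled-out menu and a hamburger icon based on
// available width. The 768px breakpoint is an illustrative convention only.
function navStyle(viewportWidthPx: number): "full-menu" | "hamburger" {
  const BREAKPOINT_PX = 768; // roughly tablet width; tune per design
  return viewportWidthPx >= BREAKPOINT_PX ? "full-menu" : "hamburger";
}
```

In the browser this decision usually lives in a CSS media query or a `window.matchMedia` check rather than a function like this; the point is simply that wide viewports should get the explicit menu instead of a ported-over hamburger.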


It's like a mini manual!

Thursday, October 6, 2016

To err is human. To really foul things up, you need a computer. (Paul Ehrlich?)

This one dates all the way back to the 60s. I can picture it hanging on somebody’s cubicle – maybe somebody with a white short-sleeve shirt; tie; pocket protector; dark, chunky glasses; and a lame attempt at sideburns.

Sadly, it’s still pretty accurate. Yes, we’ve come a long way [baby], but there are a number of factors that probably mean that this one will be with us always.

First of all, humans are still a lot better at certain things than computers. Now, a computer might be quite good at searching a huge database, or crunching numbers, or analyzing chess moves. Humans, though, are equally good at nuance, and vagueness, and emotion, and a different kind of complexity. Yes, with AI and machine learning, this is rapidly changing, but when it comes to the Turing Test, my money is still on the human.

Second, we need to remind ourselves that humans are still largely in charge. And what that means is that they still do the coding, and the design, and the requirements, and the QA – and basically put the darn things together. So, there’s still plenty of room for these error-prone humans to leave things out, to add in bad things, and to generally screw up the interaction between human and computer. It’s how our field got started after all. And – once again – I don’t see this going away anytime soon.

Finally, it seems to me that computers may also have passed a certain threshold. To me at least, they appear to be too complex for us mere humans to predict what they will do, how we should interact with them, what can go wrong, and how to fix them. Theoretically, we can get to the bottom of it all, but there are typically so many things in play that it may take a lot of effort – perhaps even an infinite amount of effort – to really figure it all out.

There’s actually a book out there that speaks very specifically to this.  It’s called Overcomplicated, and is by a Silicon-Valley-type named Samuel Arbesman. I haven’t gotten to it yet, but it’s #1 on my list. It promises to “offer a fresh, insightful field guide to living with complex technologies that defy human comprehension.” Can’t wait!


Love the sideburns!

Friday, September 2, 2016

Usability bugs don't crash the system; they crash the user. (Jurek Kirakowski)

I was lucky enough to get in on the ground floor in my profession. My grad school, Carnegie Mellon, can make a legitimate claim to having invented the think-aloud method. My first company out of school, Digital, was one of the first companies to really take usability seriously. It’s where Jared Spool, Steve Krug, Karen Holtzblatt, Dennis Wixon, and Chauncey Wilson all got their starts.

Unfortunately, Digital also famously bet against the PC. So, after five wonderful years there, I found myself in a wonderful city (Charlotte, NC), but one that was not exactly Boston, or Austin, or San Francisco when it came to furthering my career. Wanting to stay, I took jobs as a tech writer and instructional designer, but was always on the lookout for my next usability gig.

Sadly, that involved many interviews where I had to explain exactly what usability was. In particular, I had to explain how it wasn’t QA. That was what most people were familiar with, so I typically had to tie the two together somehow to explain what it was I actually did. 

I usually made the point that QA usually looks for things that “crash the system.” Usability, on the other hand, finds things that don’t necessarily crash the system, but might as well have. In other words, if the user can’t find your link, or doesn’t understand what to put in a field, or never goes to the proper menu item because it’s worded wrong, the effect is the same – the user can’t complete their task; ergo, the system failed.

Unfortunately, having to offer this explanation was also a signal to me that this place might not be able to best use my skills. And what was particularly frustrating was when the job description included the word “usability” in it – without HR, or the hiring manager, or whoever actually knowing what they were really talking about.

Yes, this story does have a happy ending. I finally found someone who did know what a usability engineer was – and who also needed one desperately. Happily, they were also one of the biggest banks in the country, and I started their usability practice. Interestingly, though, they no longer exist either. Hmm, you don't think it’s me, do you?


Jurek Kirakowski - professor, author, father of SUMI, and fashion icon

Tuesday, August 16, 2016

We'd rather people are bored than confused. (Joe Lanman)

And that’s probably what makes usability engineers different from marketeers, or venture capitalists, or hot-shot designers, or budding entrepreneurs …

There is an incredible attraction to the bright, shiny object; to sizzle; to wowing and delighting your users; to making a splash in the marketplace. And there is definitely a place for that.

It is, however, a pretty high-stakes game. If you do it right, you might indeed achieve all those goals. If not, though, you might well fall flat on your face.

Now, here’s the rub … Not every user wants to be delighted or wowed, especially when they are simply trying to complete some basic task – buying something, looking up information, making a reservation, getting the balance on their bank account. Usually, they just want to get that task done, and without too much effort.

In that regard, boring can be a pretty good bet. Perhaps your interface doesn’t really need all those gizmos and gadgets and cool design trends you saw on those apps you and your friends were sharing the other day. 

Here’s the question you need to ask yourself … Are my innovations helping the user complete their task, or are they simply getting in the way?

Two great ways to accomplish the former are 1) to give the user functionality they never had before, and 2) to make your UI as clear and simple as possible. Examples of the former abound – Uber, eBay, Amazon, Venmo, Tinder … Examples of the latter are not as obvious, but there are still plenty out there (Google is always my favorite). In fact, a lot of real winners manage to do both at the same time.

On the other hand, one great way to get in the user’s way is to design your site, app, whatever around those gizmos and gadgets and cool design trends just because you think they’re innovative in themselves. They’re not. True innovation comes from solving user problems and then just simply getting out of the way.



Joe is an interaction designer for the UK Government

Monday, August 8, 2016

The test must go on. (Laura Klein)

What an incredible production a test can be. You’ve got to recruit the users, make sure the prototype is working, set up the meeting invites, get your test plan approved, reserve the lab, set up the technology, run your pilot, put your material together, get some water, get those gift cards ... 

It’s a lot like putting on a play. And, like a play, when the people start showing up (observers and users, in our case), there’s absolutely no backing out.

(Even when I’ve actually had my project team pull out [because of major switches in strategy, prototypes that just won’t work], I’ve still been able to get something out of it. Typically, I’ll turn it into an interview, or maybe a test of what some competitors offer, or maybe a test of something totally different. But with all the users recruited and paid for, you really need to do something.)

So, with all this complexity, it’s inevitable that something will go wrong. I don’t think I’ve ever had a prototype that hasn’t had some glitch in it. Heck, I’ve even had production systems go down on me. As for users, there will always be no-shows, or poor recruits, or late arrivals, or the ones who just won’t talk. On the technology side, cameras sometimes don’t work, feeds go black, and recording software crashes. And all that’s not even taking into consideration user error – i.e., the poor facilitator who’s trying to do a million things at once. 

The important thing to realize, though, is that every test is going to have some issue. At the same time, however, you will still get tons of great data. Now, some of that data might have to be thrown out, or some of it might have to be taken with a grain of salt, but it is truly amazing how much even the most imperfect of tests will give you.

The real challenge often is getting your observers to understand all this. And, sometimes, that starts right off the bat. One thing that I like to tell them is that “I’ll test whatever you can get me” and that “I can guarantee we’ll get something out of it.” Overall, though, my goal is to get them to relax, let the test happen, and concentrate on the results. 


Laura is the president of the wonderfully named Users Know, as well as the author of UX for Lean Startups and Build Better Products

Wednesday, July 13, 2016

A word is worth a thousand icons. (Jeff Atwood)

My variation on this is that “a word is worth a thousand pictures.” In context, I think it’s more clever. The other, though, gets a little more directly to the point.

And what would that point be? Well, it actually reminds me – the former writer and linguist, that is – of acronyms. We all know what acronyms are. We may not, however, be able to state exactly where they come from, what purpose they serve, and how and when they can be abused.

An acronym is really just a way to speak more efficiently. If, for example, you are in the national security industry, you’ll probably be talking a lot about the Central Intelligence Agency, and the National Security Agency, and maybe even the Komitet Gosudarstvennoy Bezopasnosti. How much more efficient to just say the CIA, the NSA, and the KGB.

Now, the problem arises when someone doesn’t know what those acronyms stand for. You, for example, probably know what an IA is, or SUS, or the K-J method, or MVP (and I don’t mean “most valuable player” here). Does your spouse though? Well, then how about your mom? Now, how about BABIP, VORP, DIPS, and WAR? Unless you’re a baseball nut like I am, those probably didn’t mean a darn thing.

And that’s the thing about icons. They act a lot like acronyms in that they allow you to communicate a lot of information in a very small space … unless they don’t, and then they don't really communicate anything, and fail miserably.

Now, some icons are pretty obvious. A printer for a print icon, for example, is something that pretty much everyone’s going to get. And there are also icons that, though they are not intuitively obvious, people have definitely learned over time. The perfect example is the floppy disk for saving. I mean, honestly, when’s the last time you used one of those? On the other hand, have you ever had any issues clicking on that to save something?

The problem arises when the icon is neither obvious nor learned. And that’s why I tell my project teams to add in a label, when they can. Of course, there are times when there isn’t room enough, especially on smartphones. You’d be surprised, though, how rarely that is actually the case, and how often you can indeed fit those words in.
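The “add a label when you can” advice is really just a width check. A toy sketch, with pixel numbers that are purely hypothetical:

```typescript
// Prefer icon + text; drop the text only when the measured space truly
// cannot fit both. All widths here are illustrative, not real metrics.
function buttonCaption(
  availablePx: number,
  iconPx: number,
  labelPx: number
): "icon+label" | "icon-only" {
  return availablePx >= iconPx + labelPx ? "icon+label" : "icon-only";
}
```

Even on smartphones, actually measuring before falling back to icon-only tends to show that the label fits more often than designers assume.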

A special issue arises when icons – and acronyms – are used not for efficiency’s sake, but for something much more nefarious. To return to acronyms for a second, those are famously misused to signal membership in a special club and to exclude others. How many times have you been in a business meeting, or talking with techies, and wondered what the heck they’re talking about? A similar thing happens with icons as well.

In particular, I’m sometimes struck by how readily graphic designers will resort to them. I’m also often struck by how coming up with icons seems to function more as an exercise in aesthetics than as an effort to really communicate with users. The icons that result are invariably “cool,” and “slick,” and “on brand” – and admired by other graphic designers. Often, though, the user may have no clue what they’re for.


Jeff is a developer and one of the founders of Stack Overflow.
It’s good to see that developers get it too.

Tuesday, July 5, 2016

The fact that surprising results happen is exactly why we conduct actual research. (Jakob Nielsen)

I’ve been doing this for 30 years now. I’ve got over 3,000 users under my belt. I also just so happen to have been in the same industry for 20-some years. And I’m still surprised by at least something on every test I run.

Now, maybe that’s because I’m particularly dense, and haven’t been able to learn much over all those years and users. (And it is true that there are plenty of things that I am not surprised by – things that I see over and over.)

But if you think about it, there are a ton of reasons why even Jakob Nielsen would share my same sense of surprise. First, every test is different. Even if you’re doing a second round on the same stuff, you’ll at least find something new on those changes you made from the first round, right?

Second, technology is constantly changing. I actually started out testing mainframe green screens. These days, I’m testing voice-activated applications on smartphones. Who woulda thunk it?

Third, people change as well. Though I am a firm believer in another of Nielsen’s quotes, the one that states that “technology changes; people don’t,” I still have seen many changes over the years in users as well. In fact, I think it would be pretty darn interesting to take those users I tested on mainframes and plop them down in front of an iPhone and see what happens. Yes, the basic human hardware of brains and fingers and eyes has not – and will not – change, but users’ experiences and cultural contexts certainly have.

Most importantly, though, usability testing takes what may be the two most complicated things around – humans and computers – and throws them together. With that formula, there’s absolutely no telling what’s going to come up.

But that’s what makes it so fun. Honestly, the fact that tests uncover surprising results is why I’m still around.  If I wasn’t getting surprised and learning something new on each test I run, I probably would have quit a long time ago.


“If you don’t get any unexpected findings from usability testing, you didn’t run the study right” is another of Nielsen’s great quotes




More from Jakob: