Wednesday, July 13, 2016

A word is worth a thousand icons. (Jeff Atwood)

My variation on this is that “a word is worth a thousand pictures.” In context, I think mine is the cleverer of the two. Atwood’s, though, gets a little more directly to the point.

And what would that point be? Well, it actually reminds me – former writer and linguist that I am – of acronyms. We all know what acronyms are. We may not, however, be able to state exactly where they come from, what purpose they serve, and how and when they can be abused.

An acronym is really just a way to speak more efficiently. If, for example, you are in the national security industry, you’ll probably be talking a lot about the Central Intelligence Agency, and the National Security Agency, and maybe even the Komitet Gosudarstvennoy Bezopasnosti. How much more efficient to just say the CIA, the NSA, and the KGB.

Now, the problem arises when someone doesn’t know what those acronyms stand for. You, for example, probably know what an IA is, or SUS, or the K-J method, or MVP (and I don’t mean “most valuable player” here). Does your spouse though? Well, then how about your mom? Now, how about BABIP, VORP, DIPS, and WAR? Unless you’re a baseball nut like I am, those probably didn’t mean a darn thing.

And that’s the thing about icons. They act a lot like acronyms in that they let you communicate a lot of information in a very small space … unless they don’t, in which case they communicate nothing at all and fail miserably.

Now, some icons are pretty obvious. A printer for a print icon, for example, is something that pretty much everyone’s going to get. And there are also icons that, though they are not intuitively obvious, people have definitely learned over time. The perfect example is the floppy disk for saving. I mean, honestly, when’s the last time you used one of those? On the other hand, have you ever had any issues clicking on that to save something?

The problem arises when the icon is neither obvious nor learned. And that’s why I tell my project teams to add a label when they can. Of course, there are times when there isn’t room enough, especially on smartphones. You’d be surprised, though, how rarely that is actually the case, and how often you can indeed fit those words in.
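
In case it helps to see what “icon plus label” looks like in practice, here’s a minimal sketch in TypeScript against the browser DOM. The glyph, the function name, and the wording are purely illustrative and not tied to any particular design system or framework.

    // Build a button whose icon is paired with a visible text label,
    // so users never have to guess what the icon means.
    function makeLabeledIconButton(iconGlyph: string, labelText: string): HTMLButtonElement {
      const button = document.createElement("button");
      button.type = "button";

      const icon = document.createElement("span");
      icon.textContent = iconGlyph;               // e.g. "🖨" for print
      icon.setAttribute("aria-hidden", "true");   // the visible text label carries the meaning

      const label = document.createElement("span");
      label.textContent = labelText;              // e.g. "Print"

      button.append(icon, label);
      return button;
    }

    // Usage: a print button that says what it does.
    document.body.appendChild(makeLabeledIconButton("🖨", "Print"));

The point isn’t the code itself, of course; it’s that the few extra characters of a visible label cost almost nothing and spare users the guessing game.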

A special issue arises when icons – and acronyms – are used not for efficiency’s sake, but for something much more nefarious. To return to acronyms for a second, those are famously misused to signal membership in a special club and to exclude others. How many times have you been in a business meeting, or talking with techies, and wondered what the heck they’re talking about? A similar thing happens with icons as well.

In particular, I’m sometimes struck by how readily graphic designers will resort to them. I’m also often struck by how coming up with icons seems to function more as an exercise in aesthetics than as an effort to really communicate with users. The icons that result are invariably “cool,” and “slick,” and “on brand” – and admired by other graphic designers. Often, though, the user may have no clue what they’re for.


Jeff is a developer and one of the founders of Stack Overflow.
It’s good to see that even developers get it.

Tuesday, July 5, 2016

The fact that surprising results happen is exactly why we conduct actual research. (Jakob Nielsen)

I’ve been doing this for 30 years now. I’ve got over 3,000 users under my belt. I also just so happen to have been in the same industry for 20-some years. And I’m still surprised by at least something on every test I run.

Now, maybe that’s because I’m particularly dense, and haven’t been able to learn much over all those years and users. (And it is true that there are plenty of things that I am not surprised by – things that I see over and over.)

But if you think about it, there are a ton of reasons why even Jakob Nielsen would share that same sense of surprise. First, every test is different. Even if you’re doing a second round on the same stuff, you’ll at least find something new in the changes you made after the first round, right?

Second, technology is constantly changing. I actually started out testing mainframe green screens. These days, I’m testing voice-activated applications on smartphones. Who woulda thunk it?

Third, people change as well. Though I am a firm believer in another of Nielsen’s quotes, the one that states that “technology changes; people don’t,” I still have seen many changes over the years in users as well. In fact, I think it would be pretty darn interesting to take those users I tested on mainframes and plop them down in front of an iPhone and see what happens. Yes, the basic human hardware of brains and fingers and eyes has not – and will not – change, but users’ experiences and cultural contexts certainly have.

Most importantly, though, usability testing takes what may be the two most complicated things around – humans and computers – and throws them together. With that formula, there’s absolutely no telling what’s going to come up.

But that’s what makes it so fun. Honestly, the fact that tests uncover surprising results is why I’m still around. If I weren’t getting surprised and learning something new on each test I run, I probably would have quit a long time ago.


“If you don’t get any unexpected findings from usability testing, you didn’t run the study right” is another of Nielsen’s great quotes.





Friday, July 1, 2016

A checklist does not an expert make. (David Travis)

Whatever you call it – usability evaluation, expert evaluation, heuristic evaluation – it’s a particularly cost- and time-effective way to make some improvements to your system. If you think about it, there’s usually enough low-hanging fruit on any system that having someone who knows a little about usability take a quick look at it is never a bad idea. Personally, I’m a huge fan.

But, how exactly to go about doing one? Well, there are two basic approaches. One is to use a checklist; the other is to use an expert. Often, the first one is what you use when you don’t have the second one around.

Over the years, though, I’ve found that there are some definite issues with that first approach. In particular, I’ve found that checklists are either too vague … or too specific.

The vague ones – like the famous 10 from Jakob Nielsen and Rolf Molich – tend to have way too few guidelines. Though guidelines like these may apply to many situations, they are typically pitched at such a high level that anyone who is not an expert may have trouble applying them to the very specific issues that come up in an actual eval. Using a checklist like this, a non-expert is very likely to get lost in the forest.

The specific ones, on the other hand, tend to have way too many guidelines. They may be precise, but that same granularity means it can be very tedious to go through them all, applying them to each screen and interaction in a real system. They also may miss problems that span multiple elements or that occur at a higher level than pixels and code. Using a checklist like this, a non-expert is very likely to get lost in the trees.

So, a true expert review is usually my go-to for these things. Note that that usually means expert in usability and in the particular subject matter domain as well. And all that comes about from running tests, analyzing data, doing interviews and focus groups, and all the many things that separate someone who knows what they’re doing from someone who's just giving it a shot.  


David Travis is a consultant based in the UK
(and has some other great ideas about evals in this blog post).