Thursday, January 14, 2016

Happy talk must die. (Steve Krug)

Sometimes, I joke with my writers that they must get paid by the word.

Actually, for some of them, that’s not that far from the truth. A lot of writers for the web wander over from more traditional, print-based media – newspapers, magazines, PR … And, in those fields, writers do have to deal with something called a “word count.” In other words, there is a set number of words they have to produce – even if they might not have all that much to say.

So, the first thing all these writers who are new to the web have to deal with is the fact that – I hate to break it to ya, fella, but – no one really wants to read your stuff. I’m sure that’s an incredible blow to the ego. It almost, though, seems like a rite of passage to have to come to terms with that fact.

There are two ways that last bit happens. Probably the most effective is to watch a few usability tests. That masterful opening paragraph that you spent hours on and are quite pleased with? Well, it looks like only 1 of 10 users actually read it. And, of the 9 who didn’t, did you hear their quotes? Did that one guy actually say, “Blah blah blah”? He did!

If that doesn’t work, I usually bring up all the research that shows that people really don’t like to read online, and why that is. My particular favorite is the seminal work that was done at the Nielsen Norman Group.

Nielsen Norman does, however, also point to some real solutions. These are simple things such as:
  • Using lists
  • Bolding keywords
  • Breaking up paragraphs

What’s really great about these methods is that they support the way most people read on the web, something known as “scan and skim.”

The main point I try to get across, though, is that those big, grey blobs that you were rewarded for in previous lives? I hate to break it to ya, but they just ain’t going to work here.


I’m pretty sure this is not what Steve was talking about
(Happy Talk is a song from South Pacific,
here covered by English Goth Punk band The Damned)

Monday, January 4, 2016

Things that behave differently should look different. (Alan Cooper)

There are two things going on here. First, there are affordances. That’s just a fancy way of saying what a thing does should be obvious from the way it looks. An affordance is just the little cue that tells you what that thing can do.

Think of a door. If it has a push plate, it tells you you have to push the door to open it. If, instead, it has something you can grab, that tells you you’re going to be pulling this one toward you. And if it has a fairly standard-looking handle, that means that you need to grab and turn it before you can do anything else.

In a more digital context, radio buttons say, “Click one of me.” Checkboxes say, “Click as many of me as you like.” Sliders say, “There’s more here.” Links say, “I’m going to take you to a new page.”

That last one is actually a good example of what Cooper is talking about here. The very traditional standard for links is blue and underlined. A lot of sites, however, get a little creative. They might, for example, ditch the underline, or use a different color, or both. In that situation, though, the link’s affordance gets muddied. A user might, for example, mistake a link for bolding, or a title, or nothing in particular, or what have you.

The other thing going on here is consistency. In fact, a typical corollary to this quote is, “Things that behave similarly should look similar.” 

Now, the whole point of consistency – and standards, which help deliver that consistency – is reducing cognitive load. In other words, don’t make me think! So, if users have already learned a particular affordance elsewhere – on the web, on your site, in life in general – they don’t need to learn something new. 

Just to make this concrete, I once tested a mobile app that had a lot of inconsistency. Interestingly, though, this was mainly an issue of location. For example, the Submit button was on the bottom right, the bottom left, the top right, and the top left. The app was also pretty inconsistent when it came to terminology. Submit, for example, might be “Submit,” or “Done,” or “Complete,” or just-see-your-thesaurus.

So, it’s really not just about affordances. There are actually all sorts of ways to be inconsistent.


Wednesday, December 9, 2015

Fall in love with the problem, not the solution. (???)

I’ve actually seen this one in so many places, and attributed to so many people, that I’m a little leery of ascribing it to any one person. 

But what does it mean? Well, one thing that I see a lot these days is teams that are trying desperately to be innovative. Oftentimes, what this turns into is something along the lines of being innovative just for innovation’s sake. They might, for example, focus solely on some piece of technology (the Apple Watch, say), or some new style (e.g., Flat Design), or some new function (multi-touch gestures?), or some new method (gamification, anyone?). Whether that actually does something for their users, whether that actually solves a real user problem, seems to kind of get lost in the shuffle.

What these designers don’t realize is that, one, users really don’t care. Users typically just want to get their task done. If that involves all sorts of wild and crazy stuff, fine. If it simply involves boring things like tabs and radio buttons, well, they’re fine with that too.

What these designers also don’t realize is that, if they would only focus on the user’s actual problem, they might end up being very innovative indeed. In fact, a typical follow-up to the above quote is “… and the rest will follow.” What designers really need to understand is that all that cool stuff that they often fall in love with is simply a means to an end. 

So, how to identify, and focus on, those user problems? Well, I’ve always been a big fan of ethnographic research (also known as field studies). This method looks at users in their own context (the office, a retail store, their car, the couch at home), doing their own tasks, with their own goals in mind. That way, you can identify what’s working, what’s not working, the pain points, the gaps (and that involves the user’s whole experience, not just their interaction with computers, by the way). Next, all you need to do is sit down and analyze all the data that results (good old-fashioned affinity diagramming is my favorite way to do this). You can then brainstorm – and innovate – ‘til the cows come home.


Though I really couldn’t find a source for this quote,
a lot out there seems to point to these guys
(I'm not surprised)

Thursday, December 3, 2015

The point of testing is not to prove or disprove something. It’s to inform your judgement. (Steve Krug)

Unfortunately, there are a lot of people out there who want that proof. And the way they typically look for it is through numbers.

So, first, you’ve got your academics. They will be “purists,” and will insist on statistical significance and p-values and stuff like that. Next, you’ve got your marketing types. They’re into stats too. Finally, you’ve got your business folks. Once again, numbers types. 

So, the first thing I have to do is share that I’m actually not going to be giving anyone any real numbers (or at least not the kind of numbers they’ll be expecting). Then, I have to convince them that that’s not necessarily a bad thing. Finally, I have to break it to them that, yes, they will actually have to make some tough decisions (but much less tough than if they had nothing else to go on).

In accomplishing all that, the first thing I talk about is that usability testing necessarily means qualitative data. Now, these folks typically have some familiarity with that – e.g., through focus groups – so I always make sure to reference those. From there, I then go on to talk about trading numbers for richness. In particular, I like to point out that one great thing about a usability test is that you don’t have to guess, or impute, the reasons behind user behavior. Users will tell you exactly why they didn’t click on that button, why they preferred design A to design B, why they abandoned their shopping cart … And that can be pretty valuable stuff in coming up with buttons that they will click on, or designs that they will want to use, and shopping carts that they won’t abandon ...

Another thing that I point out is that usability testing focuses less on user preference, and more on whether something works or not. Note, though, that this does not mean QA. A system can be completely bug-less but still be impossible to use. Misnaming things, putting them in the wrong menus, burying them in footers, and so on can be just as effective in stopping a user in their tracks as a 404 page.

(And, yes, you really do need numbers for preference issues. Think of what goes into deciding whether a feature should be added to your software. How many people would want it? How many of your main user base? How badly? Usability testing really should come after that decision, and focus on whether users can actually use the feature.)

Finally, though, I simply state that I am not calling the shots here. All I am doing is providing information. Executives may have very good business reasons behind somewhat dicey design decisions. All I want to do is make sure they know all the implications of those decisions. And what I’ve often found is that executives may not even be aware that those design decisions may result in a somewhat dicey user experience, or how dicey that experience may be. But after doing testing, well, they really don't have any excuses now, do they?

Steve’s alter ego 

Friday, November 20, 2015

Every new feature comes with a cost. (Shlomo Benartzi)

One of my favorite bugaboos is something I like to call “additive design.” This is where your original design – whether for a website, an app, some software, or whatever – starts out nice and clean and simple.  Gradually, however, things get added to it, and added to it, and added to it … until it falls apart.

Now, this can happen after rollout … or before. For the former, this is just a typical evolution over time. There are new features, or new content, or new audiences, and those need to be addressed somehow. Unless the possibility of additions like these was considered beforehand – i.e., unless you sought to make your design scalable from the get-go – these things will tend to just get tacked on, oftentimes rather willy-nilly.

Something similar can also happen before the product even sees the light of day. In this case, though, it’s usually a failure of the design team to prioritize, or to push back, or to keep the big picture in mind.

Now, what really gets me about this sort of thing is the fact that only rarely do people ever even bat an eye when it comes to this stuff. In particular, no one ever seems to wonder if, by adding whatever-it-happens-to-be, there would be any particular effect on what’s already out there, or on the user’s overall experience.

And it’s not just the gestalt of the thing. As an example, edge cases often come up in reviews (from legal, risk, compliance, IT, QA …). Now, that’s a topic in itself, but the typical solution is often to add some help, or put in an extra radio button, or throw in a link, or add another option to the menu.

And what that can lead to is a screen that was formerly nice and simple and easy to process becoming one that is just a big jumble. I realize the team’s heart is in the right place, but they just don’t realize that all that “help” might come at a cost. A perfect example of unintended consequences.

To be a little concrete here … I once worked on a project team that took a simple date field and turned it into 2 date fields, some instructions, and a help link. And all of that stuff was simply to address what might happen – i.e., edge cases. Sigh …

As usual, Nielsen Norman does a much better job of explaining this than I ever could. Check out their analysis of those new-fangled soda machines right here.


Shlomo's not really a usability guru, but does seem to have developed some appreciation for the field in his book The Smarter Screen

Wednesday, October 14, 2015

Your site is not an island. You need to fit in with the rest of the web. (Keith Instone)

I am a firm believer in standards. My arguments in favor of them are typically two:
  1. Somebody else might have thought about this before
  2. Users might actually have gotten used to doing things a certain way

As a result, one of the things I love to do on any project is a quick competitive eval. I just go out and see what all our competitors are doing. This gives me a good sense of:
  • Whether there are any standards
  • How strong they are
  • Whether it would make any sense to break them

I’m always amazed, though, at how many people simply can’t wait to break standards. They usually cite innovation, and creative disruption, and whatever buzzword happens to be current.

Actually, I’ve come to the realization, over the years, that this may have more to do with personality than anything. If you’re familiar with Myers-Briggs, I’m a (weak) S. S’s tend to be more practical, down-to-earth, and data-driven. The people who I butt heads with tend to be (strong) N’s. They tend to be more abstract, intuitive, and full of ideas. 

I typically handle my N colleagues by getting them to:
  • Focus on higher-level issues, and less on details. For example, radio buttons are a pretty darn good way to tell a user that he needs to make just a single choice. There’s really no need to reinvent this particular wheel. Developing a wizard to help a user pick the product that’s right for them? Now, there’s something that might actually add some real value.
  • Solve real problems rather than simply coming up with random new ideas. A colleague of mine likes to make the distinction between innovation (the former) and mere invention (the latter). 

By the by, before you can solve real problems, you have to identify them. And I’ve always been a big fan of ethnography when it comes to that.

Keith may be most famous for his work in getting
the IA Summit, UXnet, and World Usability Day off the ground
(oh, and those glasses)

Thursday, October 1, 2015

One swallow does not a summer make. (Aesop)

This one is actually posted in the observation room of my usability lab. It’s a subtle, tasteful reminder that you might want to consider coming to more than just that one usability test, Mr. or Ms. Team Member. 

Yes, I know you’re busy. Yes, I realize that usability tests are a little bit like baseball games – lots of boredom punctuated by rare, brief flurries of excitement that are easy to miss.

You do realize, though, that that one user you saw may not be totally representative of all 10 I will be bringing in this week, right? In other words, if you happen to be at the very first session, there’s really no need to completely redesign the system (or get all defensive or jump off the roof) before the 10:30.

And when it’s time for the report out, do please be a little circumspect when you’re tempted to talk for 10 minutes about the one user who had that one problem that – hate to break this to you – nobody else actually had. The rest of us did see that person, saw a whole bunch of other people as well, and determined that that original user might just be an outlier.

Yup, that’s right. The rest of the team actually attended most of the tests. And, as a matter of fact, I personally happened to attend all of them – paying attention the whole time, no less. Finally, I spent probably twice as much time reviewing my notes, looking at the tapes, trying to figure out what it all meant, and putting it all in a form that you could easily digest.

Now, don’t get me wrong. I’m really happy you could make that particular session. And, no, I am not taking attendance. It’s just that the more users you see, the more you’ll understand and the more refined your subsequent judgments will be. No, you don’t have to attend every darn one. Heck, 3 or 4 might be enough to give you a good idea whether what you’re seeing is representative or not. 

(By the way, I actually have not found this to be a real problem for the actual members of the project team. Interaction designers, information architects, writers and even graphic designers are usually there for the duration. It’s often the managerial or business types who are guilty here. And that’s okay. In general, if these types take my report seriously and act on the findings, it’s not totally essential that they see it with their own eyes.)




And, no, Aesop was not blind. I understand 
the sculptor just had trouble “doing eyes.”