Friday, June 12, 2020

Users do not care about what is inside the box, as long as the box does what they need done. (Jef Raskin)

Think of your car. If you’re like me, you know that you put the key in the ignition, you start the thing, and you drive somewhere. Oh, sure, there are some other details – turning signals, reverse, headlights, filling the thing up, taking it in to get the oil changed ... But that’s pretty much it.

I can’t remember the last time I opened the hood. Actually, I do! But it was only because something was wrong. My battery had died, and I had to get a AAA mechanic to come give it a charge. I opened the hood for him, then peeked inside. Yeah, I could make out a few items – the battery, the air filter (which I never change), maybe the windshield washer fluid, perhaps the radiator cap …

Honestly, though, I’m not a mechanic. I just care that the thing takes me where I want to go. 

Now, here’s the thing. As a user researcher, you may very well work with a bunch of mechanics. This goes without saying when it comes to the developers. I’m always kind of amazed, though, at how interested designers, business owners, and even content types are in technology in general – and in the inner workings of whatever they happen to be working on in particular.

If you think about it, though, they kind of have to be. They really can’t help themselves. If they’re going to produce something, they have to care about the nuts and bolts; they have to get involved in the details.

User researchers, on the other hand, often stand a little apart from that. Our identification is primarily with the user. And the typical user is not the mechanic type, but really just someone who wants to buy a book, transfer some money, stream a video, reserve a plane ticket …

What all that means when working with your team of “mechanics” comes down to several things. First, there is the advocacy aspect. I find personas work really well here, but simply speaking up and getting teams back on track in general can be really useful. Another important thing to do is to occasionally bring up the idea of mental models – in particular, what they are, the fact that users have them, and that the team’s mental models will tend to be a lot different from the users’. A final idea relates to pushback. I’m always a little amazed when I hear developers say that they can’t do a modal here, or that a table would be too much work there. I often think it’s a knee-jerk reaction, and that the team really needs to orient less around what’s difficult for them and more around what’s good for the user.

For, in the end, your user base is going to be a ton more people who really just want to do the online equivalent of getting to work, or going shopping, or picking up the kids from soccer practice. Anything beyond that – or anything that gets in the way of that – may be of interest to a lot fewer users than the team might think.


Jef is mainly known as the guy who created the first Mac
(and, I guess, whatever weird thing that is on his head)

Thursday, June 11, 2020

If the standard is lousy, then develop another standard. (Edward Tufte)

Standards are good. Heck, standards are great!

They should not, however, be set in stone. The world changes, people change, technology changes. New data comes in, old theories are disproven, new ones arise. 

That something violated a standard should never be the end of the discussion. I’ve run into plenty of instances where a standard might be out of date, or whoever put the standard together didn’t have all the information, or the standard applies in 90% of situations but definitely not in this particular one, or even where theory got in the way of actual data.

Indeed, that last point is a particularly important one. For me at least, these issues typically come up not in some general philosophical discussion, but as the result of actual user research. So, if a test tells me that some standard got in the way of my users and their goals, I think it’s worth pointing out. Now, the reasons for the original standard may indeed still win out, but I definitely think it’s worth bringing up and talking about. And this is especially the case if it comes up not just on one test, but on multiple ones, or – heck! – one right after the other.

Case in point. My company likes to put FAQs on the right rail. In fact, we’ve got a standard for it! 

What I’ve been noticing lately, though, is that users tend to miss them when they’re over there. Those same users also point out that they typically expect to see them at the bottom of the page (and also that they really do value FAQs).

In other words, there already seems to be a standard out there. Perhaps we shouldn’t be reinventing the wheel here, folks. Perhaps we should take a look at that old standard and see if we might want to tweak it a little.

Honestly, what is the point of standards? Now, a lot of designers will point to efficiencies, saving time and effort, not endlessly hashing things over … as well as presenting a nice, clean look to the world.

For me, though, it’s all about the user. Standards help users take a complex online world and make it a little more predictable. Whether internal (within your site) or external (with all the other sites out there), standards are crucial in making your design adhere to that classic user adage from Steve Krug, “Don’t make me think!” 


Tuesday, June 9, 2020

Data don’t generate theory – only researchers do that. (Henry Mintzberg)

Data doesn’t speak for itself. In fact, data can sometimes be used like a ventriloquist’s dummy – parroting whatever the ventriloquist wants it to say. It takes someone with some real skill to get the data to really talk. And a lot of hard work as well.

Now, data alone will tell you that something did indeed happen. That, though, is kind of like learning that your car won’t go. Well, why won’t it go? What can you do to make it go again?

So, say the completion rate on your account opening process is below 10%. That’s good to know. It doesn’t sound too good, does it? So, what would actually be your next step here?

Well, one thing would be to dig even further into the data. You might, for example, find out that certain users have less trouble than others. You might also find out that most people drop out on step X. Who knows, there might even be certain times of day when people are more or less successful. Honestly, though, your options will be limited. Web analytics are great for straight-up, high-level numbers. What those numbers actually mean when you get down to it, however, can be another thing altogether.
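
To make that concrete, here’s a minimal sketch of the kind of digging I mean, assuming you can export an event log with a user ID and the funnel step each user reached (the column names, step labels, and numbers here are all hypothetical):

```python
import pandas as pd

# Hypothetical event log: one row per user per funnel step reached.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3, 4],
    "step":    [1, 2, 3, 1, 2, 1, 2, 3, 4, 1],
})

# How many unique users reached each step of the account-opening flow?
reached = events.groupby("step")["user_id"].nunique()

# Share of starters still present at each step, plus step-to-step drop-off.
pct_of_starters = reached / reached.iloc[0]
drop_off = 1 - reached / reached.shift(1)

print(pd.DataFrame({
    "users": reached,
    "pct_of_starters": pct_of_starters.round(2),
    "drop_off_from_prev": drop_off.round(2),
}))
```

Notice, though, that even a table like this only tells you where people leave, not why – which is exactly the limitation I’m getting at.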

Another possibility – and a very popular one, I might add – would be conjecture. Heck, that’s all A/B testing is when you really get down to it.
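
To see what I mean, look at what an A/B test actually computes. Here’s a sketch with made-up numbers, using a standard two-proportion z-test (nothing here comes from a real test):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: completions out of visitors for each variant.
conv_a, n_a = 80, 1000   # control
conv_b, n_b = 104, 1000  # challenger

p_a, p_b = conv_a / n_a, conv_b / n_b
pooled = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

All this tells you is whether B beat A. The reason you tried B in the first place was a guess, and the numbers won’t tell you why it won – or lost.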

A third idea, though, would be to get better data. And that’s why I’m always pressing for qualitative research. Usability tests, ethnography, in-depth interviews, even focus groups can get at some of those thorny why questions. 

In fact, good qualitative data, along with some serious analysis, will start to get at whys that don’t apply just to the particular project you’re working on, but to multiple projects over time, and in multiple different situations. Throw in a little more explication, maybe a metaphor, and – hey presto – you’ve got yourself a theory!


Dang! – my kind of management consultant

Friday, June 5, 2020

“Data” is not the plural of “anecdote.” (Brian Clegg)

As a qualitative researcher, I have to be really careful here. On the one hand, I do not have the data that the web analytics folks and the A/B testing guys and the data scientists have. At the same time, though, I’ve got a lot more than just anecdotes.

First, I do have some numbers. No, they’re not thousands and thousands, or even hundreds and hundreds, but they are more than your own personal data point, or what you heard from a friend or family member or your hair stylist. 

Second, what I have is indeed data. In particular, it is not just opinion. Now, that’s not to say I don’t have respect for your opinion. It might, indeed, be a very well-informed opinion. But it’s still an opinion.

What I can bring to the table is actual behavior, as well as verbalizations of thought processes. And these come from real, live users trying to complete genuine, honest-to-goodness tasks. So, very different from some management types taking a gander at a screen in a conference room and wondering if the “green pops enough” or thinking that “the content doesn’t really speak to me.”

Third, the data that I have tends to be very rich. And that is, indeed, why I feel I don’t necessarily need hundreds and hundreds of data points. 

Now, if what I was trying to get at was more marketing related (How many people would be interested in this product? How much are they willing to pay? Which of these ads are people more likely to click on?), I’d be all about the numbers. What I’m dealing with, though, is whether something’s going to fly or not. 

And you don’t need a lot of numbers for that. You simply don’t need a ton of people to tell you that they have no clue what to put in field X, or that that description makes no sense, or that the CTA is impossible to find – not when they actually show and tell you that themselves. If you didn’t have those rich verbalizations, though, you might never figure out why you’re getting such poor data, or why no one seems to get beyond screen X, or why people seem to be signing up for the wrong product.

Sure, you can speculate on why these things are happening and what the numbers might actually mean. Usability testing, though, will get you the real answers, giving you that very important why along with the how much.


Brian Clegg is an English writer specializing in explaining abstruse science to lay folks