Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

What really happens in the usability lab?

Jeff Sauro • October 27, 2010

There are many great books (some classics) on conducting usability tests.

These books provide the blueprint for conducting an ideal usability test. One common theme in these books is that when conducting a test you should act, as much as possible, as a neutral observer.

Don't lead users, don't put words in their mouths, and don't just test as a formality to confirm your preconceived ideas about the product.

But what do usability evaluators actually do in the lab?

A relatively recent analysis by Norgaard & Hornbæk [pdf] provides some insight into what goes on behind the one-way mirror.

This analysis is different from the famous Comparative Usability Evaluation studies. The CUE studies largely focused on the results of usability tests and found that evaluators tend to find different sets of problems (especially when the tasks and methods differ). While disconcerting, it turns out that many professions relying on expert judgment show similarly high variability.

14 Usability Test Sessions Analyzed

The analysis reviewed audio and video from 14 different usability testing sessions at seven different companies in Denmark. Around half the labs belonged to in-house usability teams (IT or product development); the others belonged to consulting firms hired to conduct usability tests.

Of the 14 usability sessions analyzed:

  • 8 contained examples of evaluators confirming suspected usability problems from preconceived opinions: "Now I am just looking for ammunition."
  • 13 of the facilitators asked leading questions: "Did you notice this column?" or "Can you do this task another way?"
  • 10 asked questions about product utility, but utility issues were almost always presented as less important than usability issues.
  • 8 encountered technical problems, including system crashes.
  • 0 carried out any structured problem review immediately after a test session.
  • 13 asked about expectations: "What would you expect to happen if you clicked on this link?"

Usability Testing as Experimental Research

It is called a usability laboratory, so it's no surprise that many usability professionals consider themselves unbiased observers carrying out controlled experiments. This analysis makes clear that the fast-paced, results-oriented realities of product development mean practicality can rule over experimental rigor.

I recently asked Joe Dumas, one of the living legends of usability and an editor of the Journal of Usability Studies, how he interprets these results. His response was insightful:

"UX professionals are supposed to be able to put their own expectations aside and remain neutral. This is what we criticize developers for not being able to do. Perhaps we are not above letting our own expectations influence how we interact and what we select to report."

So we're far from perfect, and as with many professions, you should listen to what we say, not what we do.

Some Practical Advice for Your Next Usability Test

Like the CUE studies, this analysis yields some good practical advice. So after you're done asking leading questions in your next usability test, consider the following:

  • System Failure: Expect your prototype to break or your application to crash, and have a back-up plan for what to test.
  • Create a Top 10 Observation List: Immediately after a testing session ends, generate a list of the top 5 or 10 observations or notes while they are fresh in mind. Plan on adding, say, 10 minutes after each session for this.
  • Don't Neglect Utility: You're conducting a usability test to find and fix problems, but this might be the only time developers or product managers get candid feedback on missing features or a mismatch with the user's workflow.




About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user experience.


Related Topics

Usability Testing, Usability, Usability Metrics

Posted Comments

There are 5 Comments

November 2, 2010 | Jeff Sauro wrote:

Justin,

I don't think there's necessarily anything wrong with asking users what they expect to see; I do it myself. At the very least, I wanted to point out that asking about expectations is very common (no judgment on whether this is right or wrong).

One potential problem I think the authors of the paper were tapping into is using user expectations as a source of usability problems while at the same time holding tests to a high laboratory standard. Some people would likely object to treating a problem inferred from what a user says they expect to see the same as a problem a user actually encountered, even though both are helpful.

I find what users say they'd expect to see valuable, most developers agree, and I don't discourage it. So, like confirming problems, asking about expectations is a common practice that isn't bad. It just means the usability test is probably much more about informing design than a lab experiment.


November 2, 2010 | Justin wrote:

Hi Jeff, I'd love if you could clarify why asking about expectations is bad. 


October 27, 2010 | Jeff Sauro wrote:

Great points. I've done all the things mentioned as well and like to think that I'm balancing at least partial objectivity with the need to get data. As usability engineers, we're often looked at as the voice of the user. People want to know what users think, so we ask them, even if some of the questions might generate misleading answers.

I also don't think there's anything wrong with confirming problems, so long as we're not convincing ourselves or others that we're working as passive observers. While the typical usability test is far from a controlled experiment, many of the methods, like think-aloud, come from a rich history of cognitive science and educational theory.

So usability engineering, like any engineering, should be informed by science, even if the practice is less than scientific.


October 27, 2010 | jrb wrote:

I totally agree. I ask many of the questions above during (or after) studies. Most studies I do are during the planning phase, and I usually need to find out what expectations are -- for workflow or technology. I want to validate that what the product team thinks matches up with the users' needs.

If a user completes a scenario but doesn't click on or mention a widget that we are interested in, I would ask if they'd noticed it to get more feedback: why they didn't use it, whether it needs different placement or labelling, whether it can just be eliminated, etc.

Also - is confirming a suspected usability problem a bad thing?

By the way - I definitely don't consider myself to be conducting a controlled experiment during a study - I'm a usability *engineer* and quite like it that way! :) I think we do a disservice if we try to convince ourselves, or those we work with, that we're doing hard science. How many usability professionals regularly hypothesis-test in day-to-day work?


October 27, 2010 | anonymous wrote:

What you do not mention is that those questions may be acceptable if they do not influence the user's performance. For instance, after a user has already missed the critical step in a task, thinking he/she has completed it... Essentially having the user point out why he missed it... Maybe the item was not missed or overlooked, but was misunderstood. Without asking follow-up questions, the researcher would have to guess. Of course, one should not ask the question if the follow-on task is related or would be influenced.
So maybe a follow-on blog should discuss "acceptable questions and when to ask them."
Thanks



