Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

Sample Size Topics


What the NCAA Tournament & Usability Testing Have in Common

Jeff Sauro • March 24, 2014

Every time researchers conduct a usability test to uncover problems, they're also working with probabilities--even if they tell you they hate math! To understand the role of probabilities in usability testing, it helps to see how they're used when picking winning teams in the NCAA tournament.[Read More]


Is Observing One User Worse Than Observing None?

Jeff Sauro • February 25, 2014

Product stakeholders can be misled by watching a single session during a usability study, which leads some teams to adopt a "two or more" rule for observing. However, watching even a single user still provides information about the impact of an interface problem. Over time, stakeholders who consistently watch a single random user in each usability study are more likely to see common problems than freak occurrences.[Read More]


5 Reasons You Should and Should Not Test With 5 Users

Jeff Sauro • December 17, 2013

There are a lot of misconceptions about when it is and when it is not appropriate to test with five users. Here are five examples of what you can and cannot learn from just a handful of users in a usability test.[Read More]


Best Practices for Using Statistics on Small Sample Sizes

Jeff Sauro • August 13, 2013

It's a common misconception that you can't use statistics with small sample sizes (less than 30 or so). Statistical analysis with small samples is like making astronomical observations with binoculars--you are limited to seeing big differences but you can still use the correct procedure to make the most of your data. This blog discusses the latest research on which procedures work for small sample sizes in user research.[Read More]


Five Critical Quantitative UX Concepts

Jeff Sauro • September 25, 2012

As UX continues to mature it's becoming harder to avoid using statistics to quantify design improvements. Here are five of the more critical but challenging concepts that take practice and patience but are worth the effort to understand.[Read More]


7 S's of User Research Sampling

Jeff Sauro • March 13, 2012

Rarely can we talk to all users in the population we're studying; instead, we sample. Here are 7 S's to help with your sampling: Simple Random, Starbucks, Stratified, Snowball, Spot, Sequential and Serial sampling.[Read More]


Nine misconceptions about statistics and usability

Jeff Sauro • March 7, 2012

Many of the reasons people don't use statistics with usability data are based on misconceptions about what you can and can't do with statistics and the advantages they provide in reducing uncertainty and clarifying recommendations. Here are nine of the more common misconceptions I've heard.[Read More]


20 Questions Answered about Unmoderated Usability Testing

Jeff Sauro • February 29, 2012

After the successful webinar on Best Practices for Remote Usability Testing, we received many questions about how I performed the analysis. Sample size, time on task, and other logistical issues are covered.[Read More]


How to find the right sample size for a Usability Test

Jeff Sauro • December 7, 2011

What sample size do I need? It's usually the first and most difficult question to answer when planning a usability evaluation. There are good ways to estimate the sample size that don't rely on intuition, dogma or convention.[Read More]


How many customers should you observe?

Jeff Sauro • February 8, 2011

Observing customer behavior is an excellent way to discover opportunities for product innovation. The number of customers you need to observe can be determined with the binomial probability formula and varies with how common the customer behaviors are and how certain you need to be.[Read More]
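
The binomial logic the post describes can be sketched in a few lines of Python (the 20% behavior frequency and 90% confidence level below are hypothetical values chosen only for illustration):

```python
import math

# How many customers must we observe so that a behavior occurring with
# probability p is seen at least once with confidence c?
# Solves 1 - (1 - p)^n >= c for n.
def observations_needed(p: float, c: float) -> int:
    return math.ceil(math.log(1 - c) / math.log(1 - p))

# A behavior shown by 20% of customers, seen with 90% confidence:
print(observations_needed(0.20, 0.90))  # 11
```

The rarer the behavior or the higher the required confidence, the more customers you need to watch.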


How many users do people actually test?

Jeff Sauro • November 2, 2010

The results of an email survey found that 80% of formative usability tests have fewer than 15 users. Summative usability test sample sizes are around three times larger for respondents who conducted both types of tests.[Read More]


How common are usability problems?

Jeff Sauro • September 29, 2010

Usability problem frequencies from 24 usability tests show that users are almost ten times more likely to encounter a usability problem in a business application than on a website. Users are about half as likely to encounter a problem in consumer software as in a business application.[Read More]


Memory versus Math in Usability Tests

Jeff Sauro • August 4, 2010

Confidence intervals, like statistics in general, are powerful because they are both consistent with our experience and provide a level of precision we can't articulate. You should use them with your usability test data.[Read More]


A Brief History of the Magic Number 5 in Usability Testing

Jeff Sauro • July 21, 2010

Wondering about the origins of the sample size controversy in the usability profession? Here is an annotated timeline of the major events and papers which continue to shape this topic from 1982-2010.[Read More]


What is a Representative Sample Size for a Survey?

Jeff Sauro • July 15, 2010

This common question mixes two concepts: representativeness and sample size. It is more important to ask a few of the right people what they think than a lot of the wrong people. Once you're talking to the right people, identify the highest margin of error you can tolerate to compute the right sample size.[Read More]
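
As a sketch of the margin-of-error arithmetic (the 5% margin below is a hypothetical example, not a recommendation from the post):

```python
import math

# Sample size for estimating a proportion at a given margin of error,
# using the conservative worst case p = 0.5 and a 95% confidence level.
def survey_sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    return math.ceil(z * z * p * (1 - p) / margin_of_error ** 2)

# Tolerating a +/-5% margin of error:
print(survey_sample_size(0.05))  # 385
```

Halving the margin of error roughly quadruples the required sample size, which is why the tolerable margin, not a fixed headcount, should drive the decision.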


What five users can tell you that 5000 cannot

Jeff Sauro • June 16, 2010

Web analytics has transformed the problem of understanding user behavior from a puzzle into a mystery. Where we once didn't have enough information, we now have too much to make sense of. Small-sample user testing helps answer the "why" mystery. There will be continued demand for user researchers who can quantify observational data and make the most of analytics data.[Read More]


Will five users really find 85% of all usability problems?

Jeff Sauro • May 6, 2010

The sample size formula for finding usability problems only works for a specific set of users and closed-ended tasks. With five users you will only find the more obvious problems.[Read More]


Why you only need to test with five users (explained)

Jeff Sauro • March 8, 2010

For finding usability problems with an interface, testing with five users works well for problems that affect 31% to 100% of all users. If a problem is more elusive (affecting fewer than 31% of users), you need to increase your sample size. This sample size does not apply to comparing designs or to generating precise estimates of completion rates or task times.[Read More]
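
The arithmetic behind the 31% figure can be sketched with the standard 1 - (1 - p)^n discovery formula:

```python
# Probability that at least one of n users encounters a problem that
# independently affects each user with probability p.
def discovery_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Five users and a problem affecting 31% of users: about 84%, the
# figure usually rounded up to the famous 85%.
print(round(discovery_probability(0.31, 5), 3))  # 0.844
```

For rarer problems the curve drops quickly: the same five users catch a problem affecting 10% of users only about 41% of the time.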


If 1 of 5 users has a problem in a usability test will it impact 1% or 20% of all users?

Jeff Sauro • February 1, 2010

Insurance companies do it, drug companies do it and so should usability testers. When you observe a problem from a small sample test, it is unlikely the problem only affects a tiny percentage of users.[Read More]
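
One way to see why is a binomial confidence interval around 1 of 5. The sketch below uses the adjusted-Wald method (an assumption on my part; the post itself doesn't show its computation):

```python
import math

# Adjusted-Wald 95% confidence interval for an observed proportion x/n:
# add z^2/2 successes and z^2 trials before applying the Wald formula.
def adjusted_wald(x: int, n: int, z: float = 1.96):
    p_adj = (x + z * z / 2) / (n + z * z)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z * z))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald(1, 5)
print(f"{low:.0%} to {high:.0%}")  # 2% to 64%
```

The plausible range runs from about 2% to 64%: a true rate of 1% falls below the interval, while 20% sits comfortably inside it.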


Margins of Error in Usability Tests

Jeff Sauro • August 6, 2009

How many users will complete the task and how long will it take them? If you need to benchmark an interface, then a summative usability test is one way to answer these questions. Summative tests are the gold-standard for usability measurement. But just how precise are the metrics?[Read More]


Sample Size Calculator for a Completion Rate

Jeff Sauro • January 4, 2008

Use this interactive calculator to see how changes in sample size affect the confidence interval around a completion rate.[Read More]


Calculating Sample Size for Task Times (Continuous Method)

Deriving a Problem Discovery Sample Size

Jeff Sauro • March 8, 2004

Shows the history and computation of deriving a sample size for discovering problems in an interface.[Read More]

About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user experience.
More about Jeff...


Jeff's Books

Quantifying the User Experience: Practical Statistics for User Research

The most comprehensive statistical resource for UX Professionals

Buy on Amazon

Excel & R Companion to Quantifying the User Experience

Detailed Steps to Solve over 100 Examples and Exercises in the Excel Calculator and R

Buy on Amazon | Download

A Practical Guide to the System Usability Scale

Background, Benchmarks & Best Practices for the most popular usability questionnaire

Buy on Amazon | Download

A Practical Guide to Measuring Usability

72 Answers to the Most Common Questions about Quantifying the Usability of Websites and Software

Buy on Amazon | Download
