Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

The 3 R's of Measuring Design Comprehension

Jeff Sauro • April 23, 2013

Will users get it?

Marketing and design teams often want to know if users will understand a key concept on a website or design.

For example, do users properly understand new terms and conditions, a privacy policy, different product models, prices, or service packages?

When you want to know if users will understand something in a design, you can quickly see how asking "Did you understand the difference in our service plans?" isn't a good idea.

It's unlikely that more than a few participants will acknowledge that they don't understand something. It would be like asking students in a math class if they understand the concept of logarithms.  It's much better to ask students to find the logarithm of 1000.
 
To really measure comprehension, you need to use questions that assess users' knowledge of the policy, the product or the design.

We use three complementary techniques: measuring recognition, recall and recounting.

Recognition

Recognition measures the ability of a user to correctly identify an item among a set of alternatives. This is measured using the classic multiple choice question format. When a user can correctly select the answer from a list of alternatives, it demonstrates at least a superficial level of comprehension.

For example, if you want to know whether users understand a new cancellation policy for a service, you can ask them to review a product page and then answer some questions that include something like the following:

Which of the following options best represents the service cancellation policy?
A.    All sales are final.
B.    You can cancel any time after 90 days.
C.    You can cancel at any time.
D.    You can cancel any time within the first 30 days.

If a user selects the correct choice "C", this reflects a certain level of comprehension. But it's unclear if the participant would have provided this answer without considering the alternatives and getting a quick mental cue from seeing the correct answer. Guessing also complicates matters. With one correct choice and three incorrect alternatives, there is a 25% chance of randomly selecting an item correctly. 
 
Adding additional multiple choice questions about the cancellation policy can certainly help offset this, and that's why standardized tests aren't merely single questions. The probability of correctly guessing three questions is 0.25³, or about 1.6%.
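The arithmetic above generalizes: with four options per question, the chance of guessing every question correctly shrinks geometrically with each question added. A minimal sketch (the function name is ours, not from the article):

```python
# Probability of guessing ALL questions correctly by chance,
# for multiple choice questions with a fixed number of options each.
def p_guess_all(n_questions, n_options=4):
    return (1.0 / n_options) ** n_questions

print(p_guess_all(1))  # 0.25    -> one question: a 25% chance
print(p_guess_all(3))  # 0.015625 -> three questions: about 1.6%
```

So even three modest multiple choice questions push the odds of a participant "passing" purely by guessing below 2%.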

For unmoderated usability testing, we often need to verify task completion rates and use a multiple choice verification question. It usually asks participants to provide the price or description of a product we asked them to search for. That means in many respects, the lower limit of task completion rates would be closer to 25% than 0% (for multiple choice questions with four options). 
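Because guessing inflates the floor, an observed completion rate can be rescaled so that chance-level performance maps to zero. This is the standard correction-for-guessing formula, offered here as an illustrative sketch rather than the article's own method:

```python
# Chance-corrected rate: rescales an observed rate so that pure
# guessing (e.g. 25% on four-option questions) maps to 0.
# Standard correction-for-guessing formula; an assumption, not
# something the article prescribes.
def chance_corrected(observed, chance=0.25):
    return max(0.0, (observed - chance) / (1.0 - chance))

print(chance_corrected(0.25))  # 0.0 -> guessing-level performance
print(chance_corrected(0.70))  # about 0.6
```

Under this view, a 70% verified completion rate on a four-option verification question reflects roughly 60% genuine completion once chance success is stripped out.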

So while adding many multiple choice questions offsets the problems of guessing and poorly worded questions and answers, we can't subject participants to long batteries of SAT-like questions in the world of applied user research. We use complementary approaches instead.

Recall

Recall is the user's ability to pull the correct answer from memory without any prompting or cues. This is usually measured by having participants answer open-ended questions.

Open-ended questions that require users to correctly recall the cancellation policy terms or the name of a feature provide evidence for a deeper level of comprehension than recognition. Asking a participant to recall the cancellation policy would entail a question such as:
What is the cancellation policy for the software service?

While we largely eliminate the problem of guessing, open-ended questions have their own issues. They take longer to analyze and introduce an additional layer of subjectivity and differing interpretation.  

Recounting

Sometimes we not only want to know whether users understand specific aspects of a product or service, but also which features or details are most important and memorable in the mind of the user.

Instead of asking a participant to summarize what they understand, we ask them how they would explain what they saw to a friend or colleague. For example, "How would you explain the service and cancellation policy to a friend who was considering this service?" Asking a participant to rephrase things for a friend who isn't present forces them not to rely on jargon or half-baked terms.

This approach helps not only to assess a deeper level of comprehension but also to assess what features stand out, and in the users' language. The verbatim responses provide a great opportunity to determine what branded terms are being used. 

Rarely can we assess whether users "get" a concept, feature or detail with a single question or by asking them directly. Using a mix of multiple choice questions (recognition), open-ended questions (recall) and recounting questions provides a balanced view of what users understand and what they don't comprehend. We find this approach works well for measuring abstract software concepts, terms and conditions, pricing structures, upgrades, product tiers, service plans, and branded features.


About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user experience.
More about Jeff...

