
10 Things To Know About The Single Ease Question (SEQ)

Jeff Sauro • October 30, 2012

The Single Ease Question (SEQ) is a 7-point rating scale to assess how difficult users find a task.

It's administered immediately after a user attempts a task in a usability test.
  1. After users attempt a task, ask them this simple question: Overall, how difficult or easy was the task to complete? Use the seven-point rating scale format below.

     Very Difficult   1   2   3   4   5   6   7   Very Easy

  2. Labels and values: We typically label the endpoints only and provide numbers from 1 to 7. There are many variations on this (labeling all points, not numbering, etc.), but we've found these slight changes are far outweighed by the very salient event of the just-attempted task. Users are generally very aware of the nuances involved in trying to find information or complete a function and have little problem expressing their frustration or delight.

  3. It works well: Despite its simplicity, we found the SEQ performed about as well as or better[pdf] than more complicated measures of task difficulty, like the interval-scaled Subjective Mental Effort Questionnaire (SMEQ) or the ratio-scaled Usability Magnitude Estimation. That's good considering you can administer the SEQ in any questionnaire software, on paper or aloud.

  4. Ratings of difficulty correlate with other metrics: We've found that, in general, the correlation[pdf] between user responses on the SEQ and task time and task completion is around r = .5. That is, users tend to rate tasks as more difficult when they take longer or when they fail to complete them. The correlation is not so strong that any single usability metric is a replacement for another, but it does tell us that the metrics measure overlapping things. (A quick sketch of this calculation appears after this list.)

  5. Users respond differently: One thing you'll notice when administering the SEQ in particular, and most questionnaires in general, is that some users will make everything a 6 or 7 while others will use the full range of the scale (going from 1s to 7s) within the same study. This sort of behavior can be troubling and leads some to dismiss rating scales altogether. However, it's very common for people to use rating scales differently, and these differences tend to average out across tasks and products. It's also why we look at the average response relative to a database instead of relying solely on top-box scores.

  6. Extremely easy but task failure: We do observe users who have a horrible time with a task, then watch in awe as they rate the task as extremely easy! When this happens we all remember it, tell our friends, and again some unfortunately dismiss rating scales altogether. Yet in examining data from thousands of responses, we find this only happens around 14% of the time (a sketch of that calculation also appears after this list). This reminds us that measuring human behavior and attitudes is notoriously difficult but not intractable. We are still able to measure sentiments about usability; just don't expect the instruments to be like thermometers, where every rise in the mercury is associated with a rise in temperature.

  7. The average SEQ score is around a 5: Across over 200 tasks and 5,000 users, we find the average score hovers between about 4.8 and 5.1. This is above the nominal midpoint of 4 but is typical for 7-point scales. (A sketch comparing a task's mean score to this benchmark, and to a top-box score, appears after this list.)

  8. Technology agnostic: We use the SEQ on mobile devices, websites, consumer and business software, and even tasks on paper prototypes. That's the beauty of task-difficulty ratings: users tend to respond relative to what they expect given the device, the fidelity of the interface and the nature of the task. It's also why the SEQ makes a great longitudinal measure from iteration to iteration.

  9. Ask why: When users rate a task as difficult, it's good to know why. When a user provides a rating of less than 5, we ask them to briefly describe what made the task difficult. This provides immediate diagnostic information right when the user is cognizant of what is driving the poor rating.

  10. Helpful when used alone and in a competitive setting: We find that some tasks are inherently more difficult than others. For example, determining whether you have to pay to fix your neighbor's fence if a tree falls on it in a storm is a more complicated task than locating a 32" flat-screen TV on a retail website. It's difficult for users to dissociate the inherent complexity of the task from the problems they had trying to complete it. When possible, we like to see how users do when attempting the same task on a comparable website to gauge how difficult the task is relative to its inherent complexity (the last sketch after this list shows a simple way to compare the two sets of ratings).
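
To make point 4 concrete, here is a minimal Python sketch of correlating SEQ responses with task time and task completion for a single task. The ratings, times and completion flags are made up for illustration, and Python is simply a stand-in for whatever analysis tool you prefer:

```python
import numpy as np

# Illustrative (made-up) data for one task: one row per user.
seq = np.array([7, 6, 5, 7, 3, 2, 6, 4, 7, 5])                      # SEQ rating, 1-7
task_time = np.array([45, 52, 80, 40, 160, 210, 55, 120, 38, 90])   # seconds
completed = np.array([1, 1, 1, 1, 0, 0, 1, 0, 1, 1])                # 1 = success

# Pearson correlations. Longer times go with lower ratings, so the time
# correlation comes out negative; completion correlates positively. With real
# data the article reports magnitudes around r = .5; this toy data just shows
# the mechanics.
r_time = np.corrcoef(seq, task_time)[0, 1]
r_completion = np.corrcoef(seq, completed)[0, 1]   # point-biserial correlation

print(f"SEQ vs. task time:  r = {r_time:.2f}")
print(f"SEQ vs. completion: r = {r_completion:.2f}")
```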
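For points 5 and 7, the idea is to judge a task by its mean SEQ score against a benchmark rather than by a top-box score alone. A hedged sketch, again with fabricated ratings and an assumed benchmark of 5.0 taken from the 4.8 to 5.1 range above:

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) SEQ ratings for one task.
ratings = np.array([7, 6, 5, 7, 3, 2, 6, 4, 7, 5, 6, 6])
n = len(ratings)

m = ratings.mean()
se = ratings.std(ddof=1) / np.sqrt(n)

# 95% confidence interval around the mean, using the t distribution.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci_low, ci_high = m - t_crit * se, m + t_crit * se

# Compare against an assumed benchmark of 5.0 (middle of the 4.8-5.1 range above).
benchmark = 5.0
print(f"Mean SEQ: {m:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}); benchmark = {benchmark}")

# Top-box score: percentage of 6s and 7s. Simple, but it discards information
# about where the remaining responses fall on the scale.
top_box = (ratings >= 6).mean()
print(f"Top-box (6 or 7): {top_box:.0%}")
```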
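The "failed the task but rated it extremely easy" discordance in point 6 is just a conditional proportion. A sketch with made-up attempt records:

```python
# Illustrative (made-up) task attempts: (completed?, SEQ rating).
attempts = [
    (True, 7), (True, 6), (False, 2), (True, 5), (False, 7),
    (False, 3), (True, 6), (False, 1), (True, 7), (False, 6),
]

failures = [seq for completed, seq in attempts if not completed]

# Proportion of failed attempts that were still rated "easy" (6 or 7).
# The article's database puts this discordance at around 14%; the toy data
# here will give a different number.
easy_failures = sum(1 for seq in failures if seq >= 6) / len(failures)
print(f"Failures rated 6 or 7: {easy_failures:.0%}")
```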
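And for the competitive comparison in point 10, one simple way to compare the same task across two sites is a two-sample t-test on the SEQ ratings. The ratings below are fabricated, and Welch's test is one reasonable choice rather than the only one:

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) SEQ ratings for the same task on two sites.
ours = np.array([5, 4, 6, 3, 5, 4, 6, 5, 4, 5])
competitor = np.array([6, 7, 5, 6, 7, 6, 5, 7, 6, 6])

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(ours, competitor, equal_var=False)

print(f"Our mean: {ours.mean():.2f}, competitor mean: {competitor.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A difference in means here says more about the interfaces than about the task itself, since the task's inherent complexity is held constant across the two sites.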



About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user experience.




