Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

Measuring & Analyzing Task Times

Jeff Sauro • September 17, 2004

Continuous Data

Efficiency is one of the cardinal aspects of a product's usability. The amount of time it takes a user to complete a task is one of the best measures of efficiency because it:
  1. Reveals slight variability (down to the second) that is harder to detect in other measures, such as errors
  2. Is a continuous measurement, meaning it can be subdivided infinitely (unlike task completion, which is discrete/binary), so many more statistical tests can be used to detect meaning in your data.
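To illustrate the second point, continuous task times can be summarized and compared directly, in a way binary completion data cannot. A minimal sketch using only the Python standard library (the task times are invented for illustration):

```python
import statistics

# Continuous task times in seconds (illustrative numbers, not real data).
task_times = [45.2, 61.0, 38.5, 72.3, 55.1]

# Continuous data supports summary statistics like the mean and standard
# deviation, which feed into tests such as the t-test.
mean_time = statistics.mean(task_times)    # 54.42
sd_time = statistics.stdev(task_times)

print(f"Mean: {mean_time:.2f}s, SD: {sd_time:.2f}s")
```

With binary pass/fail data you would instead be limited to proportions and the smaller family of tests that apply to them.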

How to Measure Task Time

Measuring task times involves more complexity than you might expect. At first it may seem straightforward: get a stopwatch, start timing when the user starts the task, stop when they're done, and record the time.

When to Start and When to Stop

The first problem you will encounter is when to start the timing: Is it when the user is handed the scenario and task description? Or is it when the user first clicks on something? And when do you stop? When the user says they're done, or when they have technically completed the task but before they spend time double-checking their work?

There are different definitions of what constitutes a task time. The most important thing is to be consistent and note your timing standard. I've settled on the following:

Start Time: The user's first orientation toward the application. That is, as soon as the user finishes digesting the scenario (suggesting they understand the task) and makes some orientation toward the monitor, usually with the mouse.

End Time: When the user signals they're done (either verbally or through some non-verbal means, like writing "Done" on the task page). Other methods include having the user turn the monitor on and off between tasks. I prefer to start timing after I think the user understands the task (as opposed to when they begin reading it), similar to how they would behave on the job: they know what they need to do; they just need to "do it" in the software. This minimizes the confounding effects of reading and comprehension speeds.
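The timing standard above can be sketched as a simple event log: mark the start when the user orients toward the application, mark the end when they signal they're done, and compute the difference. `TaskTimer` is a hypothetical helper, not something from the article:

```python
from dataclasses import dataclass, field

@dataclass
class TaskTimer:
    """Hypothetical task-time logger.

    Start: the user's first orientation toward the application.
    End: when the user signals they're done.
    """
    events: list = field(default_factory=list)

    def mark(self, label: str, seconds: float) -> None:
        # Record a labeled timestamp, e.g. ("start", 12.0) or ("done", 95.5).
        self.events.append((label, seconds))

    def task_time(self) -> float:
        # Elapsed time between the "start" and "done" marks.
        times = dict(self.events)
        return times["done"] - times["start"]

timer = TaskTimer()
timer.mark("start", 12.0)   # user finishes reading, orients toward monitor
timer.mark("done", 95.5)    # user says "I'm done"
print(timer.task_time())    # 83.5
```

Logging labeled events rather than a single duration makes it easy to add more marks later (for example, the moment the task was technically complete).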

Technically Done and "I'm Done"

I also find it helpful to note both when the user technically completes the task and when they state they have completed it. You want to be able to decipher the timing data and identify the following causes of differences between technically complete and subjectively complete:

  1. Hawthorne Effect: In some cases users are simply being thorough because they know they're being tested (regardless of what you tell them otherwise), so they will double-check unnaturally.
  2. Software isn't reassuring enough: Some tasks don't provide enough information for users to know whether they successfully completed the task. Sometimes a screen lacks a confirmation message.
  3. System Lag Time: Some CPU-intensive tasks take more than a couple of seconds to complete. Note both when the user completes the task and when the system completes it. It's also interesting to note what the user does during the perceived eternity of watching the hourglass.
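If you log both timestamps per participant, the gap between them becomes a metric in its own right: large gaps flag tasks worth reviewing for the causes above. A short sketch, with invented participant data and an arbitrary 10-second threshold:

```python
# Per-participant records: (participant, technically_done_s, stated_done_s).
# Numbers are illustrative, not real study data.
records = [
    ("P1", 62.0, 75.0),
    ("P2", 48.0, 49.0),
    ("P3", 90.0, 118.0),
]

# Verification gap: time between technical completion and "I'm done".
gaps = {p: stated - technical for p, technical, stated in records}
print(gaps)  # {'P1': 13.0, 'P2': 1.0, 'P3': 28.0}

# Flag participants whose gap exceeds a chosen threshold; review those
# sessions for double-checking, missing confirmation, or system lag.
flagged = [p for p, g in gaps.items() if g > 10.0]
print(flagged)  # ['P1', 'P3']
```

The 10-second cutoff is only an example; a real analysis would set the threshold from the distribution of gaps across tasks.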

About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user-experience.

