Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

Survey Respondents Prefer the Left Side of a Rating Scale

Jeff Sauro • September 14, 2010

Subtle changes to response items in surveys and questionnaires can affect responses.

Many of the techniques for item and scale construction in user research come from marketing and psychology. Some topics can be controversial, sensitive, or confusing, so having the right question with the right response options is important.

Attitudes about usability aren't typically controversial, so you're likely to get more honest answers. Consequently, slight changes to item wording and the number of scale steps are less likely to lead to major differences in scores. Nevertheless, it's important to understand some of those effects when creating and analyzing scales in questionnaires and surveys.

While there are many caveats and exceptions when creating response items, one effect is that respondents tend to favor the left side of a response scale. Take the following two response options:

My College has an excellent reputation

  • Strongly-Disagree
  • Disagree
  • Undecided
  • Agree
  • Strongly-Agree

versus

My College has an excellent reputation

  • Strongly-Agree
  • Agree
  • Undecided
  • Disagree
  • Strongly-Disagree

More students agreed with the second response option than with the first [pdf]. The only difference is the order in which the response options are presented (agree or disagree first). If you code the values from 1 to 5 on both scales (Strongly-Disagree = 1 through Strongly-Agree = 5), the second response option will produce a higher average score.
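To make the coding concrete, here is a small sketch with made-up response counts (not data from the cited study) showing how the same 1-to-5 coding yields a higher mean when agreement appears first:

```python
# Map each label to its numeric code (same coding for both scale orders).
scores = {"Strongly-Disagree": 1, "Disagree": 2, "Undecided": 3,
          "Agree": 4, "Strongly-Agree": 5}

# 100 hypothetical respondents per version; the agree-first version drifts
# slightly toward agreement, as the left-side bias predicts.
disagree_first = (["Strongly-Agree"] * 10 + ["Agree"] * 40 +
                  ["Undecided"] * 30 + ["Disagree"] * 20)
agree_first = (["Strongly-Agree"] * 15 + ["Agree"] * 45 +
               ["Undecided"] * 25 + ["Disagree"] * 15)

def mean_score(responses):
    """Average the 1-5 codes over all responses."""
    return sum(scores[r] for r in responses) / len(responses)

print(mean_score(disagree_first))  # 3.4
print(mean_score(agree_first))     # 3.6 -- about 0.2 points higher
```

The 0.2-point gap here is invented, but it was chosen to match the typical size of the effect discussed below.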

This phenomenon also held up when a general population rated the qualities of beer using opposite adjectives, in personal distress ratings [pdf], and when rating preferences for products A vs. B or B vs. A. Once again, respondents show a slight bias toward the options presented first (on the left side of the scale).

Examples of both scale directions can be found in usability questionnaires. Jim Lewis's PSSUQ [pdf] goes from Agree to Disagree, and the System Usability Scale goes from Disagree to Agree.

How large is the Left-Side Bias?

It's important to keep in mind that this effect, like many others you get from changing wording, question direction, labeling, and the number of scale steps, is small. A typical difference is something like 0.2-0.3 of a point (on a 5-point scale), or about 1/3 of a standard deviation.

You won't start seeing these differences until your sample size exceeds 100 or so. As with most effects on response scales, the bias is not universally present in all scales [pdf] and appears to occur more when the item being rated is phrased positively.
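A back-of-the-envelope power calculation (the standard two-sample normal-approximation formula, not a figure from the article) shows why an effect of roughly 1/3 of a standard deviation requires groups well beyond 100 respondents to detect reliably:

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.8416):
    """Approximate per-group sample size for a two-sample comparison
    at two-sided alpha = .05 with 80% power, for standardized effect
    size d (Cohen's d). z values are the standard normal quantiles."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A left-side bias of about 1/3 of a standard deviation:
print(n_per_group(1 / 3))  # 142 respondents per scale version
```

With roughly 140+ responses per group needed for 80% power, samples under 100 per group will usually miss a difference of this size.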

When measuring attitudes toward usability (usually not a sensitive or politically charged subject), the effects of an unusable interface generally outweigh nuances in questionnaire design. For example, using extremely worded items or questions will have a much larger impact on the responses.

Why the Bias?

Research suggests that characteristics of both the participants and the items cause the left-side bias. Hypothesized contributors include participant motivation, reading habits, and education level, in conjunction with a primacy effect, the clarity of the items, and the specificity of the situations.

Key Take-Aways:

  • A dishonest researcher who wants responses to skew slightly toward agreement can place the favorable response options on the left.
  • If you report top-box or top-two-box scores for a stand-alone survey (no comparisons), putting agree on the left side will inflate the response a bit.
  • If you are comparing the responses to past or future responses, don't worry: whatever bias exists in the responses will occur in both surveys. Comparisons are always more meaningful than stand-alone results.
  • You will likely only notice a difference if your sample size exceeds 100 responses in each group.
  • Neither direction is necessarily right or wrong; if you have an existing scale, stick with it.
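To illustrate the top-two-box point (again with invented counts, not study data), the same small shift toward the left-hand options also inflates a top-two-box percentage:

```python
def top_two_box(counts):
    """Share of respondents choosing one of the two most favorable
    options. counts maps response label -> number of respondents."""
    n = sum(counts.values())
    return (counts.get("Agree", 0) + counts.get("Strongly-Agree", 0)) / n

# Hypothetical counts for the two scale orders (100 respondents each).
disagree_first = {"Strongly-Agree": 10, "Agree": 40,
                  "Undecided": 30, "Disagree": 20}
agree_first = {"Strongly-Agree": 15, "Agree": 45,
               "Undecided": 25, "Disagree": 15}

print(top_two_box(disagree_first))  # 0.5
print(top_two_box(agree_first))     # 0.6 -- a 10-point inflation
```

This is why a stand-alone top-two-box number from an agree-first scale isn't directly comparable to one from a disagree-first scale.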

References

  • Chen, J. (1991), "Response-Order Effects in Likert-Type Scales," Educational and Psychological Measurement, 51, 531-540.
  • Holmes, C. (1974), "A Statistical Evaluation of Rating Scales," Journal of the Market Research Society, 16 (April), 87-107.
  • Friedman, H. & Amoo, T. (1999), Journal of Marketing Management, 9(3), Winter 1999, 114-123.
  • Friedman, H. H., Herskovitz, P. J. & Pollack, S. (1994), "Biasing Effects of Scale-Checking Styles on Responses to a Likert Scale," Proceedings of the American Statistical Association Annual Conference: Survey Research Methods, 792-795.
  • Weng, L. & Cheng, C. (2000), "Effects of Response Order on Likert-Type Scales," Educational and Psychological Measurement, 60, 908.

About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and user experience.



Posted Comments

There are 5 Comments

November 23, 2010 | Jeff Sauro wrote:

Beverly,
There's nothing necessarily wrong with using 1-4 in a rating scale. This research makes it clear that the left side will generally get a higher response regardless of its labels or numbers.

Including or not-including a middle or neutral response option is the subject of much debate and research and I'll have more to say about it in a subsequent blog post.  


November 22, 2010 | Beverly Taylor wrote:

What's wrong with using 1-4 and not giving people a 'middle/ok' choice?


September 19, 2010 | tedd wrote:

Nice article. I wonder if the bias is due to most web sites having left navigation, or something tied to dominant right/left-hand orientation, or an instinctual built-in preference for approving left items over right ones, or if this is tied to a left-to-right writing custom. Lots of things to consider.

Not so much a comment about the article, but rather a comment about the page. The page fails W3C validation big time. Additionally, the first post demonstrates that the user-submitted data was stored in the database with HTML entities escaped (good) but shown to the public in raw form (bad). These are simply examples of bad coding. If you want a further explanation, please contact me.


September 18, 2010 | Jeff Sauro wrote:

That's a good question and probably very relevant. The research I cite here covers both English and non-English speakers in the US, Europe and Asia; however, I believe all the languages represented read left to right.

I suspect for a right-to-left language we'd see an opposite effect (which is what I believe you're wondering). For example, the study Belson, W.A. (1966), "The Effects of Reversing the Presentation Order of Verbal Rating Scales," Journal of Advertising Research, 6 (December), 30-37 found a top-sided bias when the scales were presented vertically.

A frequent hypothesis for why this bias exists has to do with the distance from the initial focus of reading the question to responding. In a left-to-right language, one would expect the bias to favor whichever option is closest to the last eye position after reading the question.


September 18, 2010 | Rob Crowther wrote:

Is there any research which suggests the effect is reversed for populations who read right to left? 

