
How many people cheat in online surveys?

Jeff Sauro • December 7, 2010

Remote user research has increased the demand for people willing to take surveys, provide website feedback, and participate in usability tests.

In exchange for their time, users are compensated.

One drawback to these professional users is that some are in it just for the money.

Consequently, they may not take your study seriously and "speed" through the questions to receive their remuneration.

For open-ended comments and feedback, it's pretty obvious who's not taking your test or survey seriously. Terse comments such as "it's easy" and "everything is great" are good indications of a cheater.

For multiple-choice rating-scale questions, though, detecting disingenuous answers is not as obvious because any response to a rating scale is usually acceptable.

Detecting Cheaters: Please Select a 3

Adding just a single question to your survey is usually sufficient for detecting the most egregious cheating. A question like "please select a 3 for this question" will easily tell you whether someone is even half paying attention.

Of course, this only nets the respondents who are completely blazing through your survey. Some professional cheaters have caught on to this trick and look for such questions.
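As an illustration, here is a minimal sketch of how such a check might be scored with pandas; the column names and data are hypothetical:

import pandas as pd

# Flag respondents who failed a "please select a 3" attention check.
# "attention_check" is a hypothetical column holding the answer given.
responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "attention_check": [3, 5, 3, 1],  # the instructed answer is 3
})

responses["failed_check"] = responses["attention_check"] != 3
print(responses[responses["failed_check"]])  # suspected speeders
print(f"Speeder rate: {responses['failed_check'].mean():.0%}")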

Detecting Cheaters: Conflicting Responses

Another option is to include two versions of the same question but with completely opposite wording. For example: "I enjoyed using this website" and "I did not enjoy using this website."

If a respondent agrees (or disagrees) with both questions, then they are probably not taking your survey seriously. There is a chance, though, that a genuine respondent simply misread one of the questions.
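One way to score such a pair programmatically, as a sketch that assumes 1-5 agreement scales where 3 is neutral:

def conflicting(positive_item, negative_item, neutral=3):
    """True when a respondent lands on the same side of neutral for two
    oppositely worded items, i.e. agrees (or disagrees) with both."""
    if positive_item == neutral or negative_item == neutral:
        return False  # a neutral answer is inconclusive, not a conflict
    return (positive_item > neutral) == (negative_item > neutral)

print(conflicting(5, 4))  # True: agreed with both opposite statements
print(conflicting(5, 1))  # False: a consistent respondent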

Semi-Conflicting Responses

Many questionnaires, like the System Usability Scale (SUS), already have questions worded both positively and negatively.

It is possible for respondents to agree with two statements such as "I think that I would like to use this website frequently" and "I thought there was too much inconsistency in the website." The responses are only somewhat conflicting. However, it is unlikely that a respondent would agree with all 10 items, five of which are worded positively and five negatively.

Looking for all 5's, 4's, 2's or 1's across the SUS questionnaire can also be a way of detecting these cheaters (as suggested in Albert et al., 2010).
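A sketch of that straight-lining check, assuming the ten raw 1-5 SUS item scores are available for each respondent:

def straight_lined_sus(item_scores):
    """True if all 10 SUS items received the same non-neutral answer
    (all 1's, 2's, 4's or 5's); all 3's is an acceptable neutral response."""
    first = item_scores[0]
    return first != 3 and all(score == first for score in item_scores)

print(straight_lined_sus([4] * 10))                        # True: flag
print(straight_lined_sus([3] * 10))                        # False: neutral
print(straight_lined_sus([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # False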

How common are cheaters?

I looked at five datasets that contained a speeding question (e.g. "Select a 3 here") and 55 datasets from an online-administered SUS (see Table 1 below).

# of Cheaters    Total Sample Size    %
2                25                   8%
1                30                   3.3%
6                269                  2.2%
72               360                  20%
236              1962                 12%

Table 1: Number of respondents in five online surveys that answered simple validation questions incorrectly (e.g. "select a 3 here").

Across the five datasets, the percentage of respondents who answered the cheating question incorrectly ranged from a low of 2.2% to a high of 20%.

Of the 2646 users tested, 317 (12%) answered the simple question wrong. This number is pulled a bit higher by one large study [pdf].

To detect cheaters using the SUS, I looked for users who answered every item with the same value other than 3 (3 is an acceptable neutral response for all SUS items). Across the 55 online SUS datasets, only 7 contained a user who answered with the same value for all 10 items.

In fact, of the 950 total users, only 7 (0.74%) responded with all 1's, 2's, 4's or 5's. If around 10% of those 950 respondents were cheating (roughly 95 people), this method is detecting less than 10% of the actual cheaters.

Expect 10% of Users to Cheat

When conducting your next online survey or usability test, anticipate that somewhere around 10% of the responses from online professional users will be disingenuous. If you're hoping to have 100 usable responses for a study, plan on obtaining around 110 and having to throw out around 10.
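The same back-of-the-envelope planning arithmetic as a sketch (rounding up; the 10% rate is an assumption you should adjust to your panel):

import math

def recruits_needed(usable_target, cheat_rate=0.10):
    """Respondents to recruit so that, after throwing out cheaters
    at the given rate, the usable sample still hits the target."""
    return math.ceil(usable_target / (1 - cheat_rate))

print(recruits_needed(100))        # 112 recruits for 100 usable responses
print(recruits_needed(100, 0.20))  # 125 if a long survey pushes it to 20%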

Of course, the percentage of users who cheat depends on a number of factors, including how long your survey is, the quality of the panel, and even things like the day of the week and time of day. But if your survey is long and complicated, don't be surprised to see that number hit 15-20%.

"Select a response" items are simple and effective

Detecting cheaters is best done with a simple "Select this response" item or a conflicting question. Semi-conflicting items like those in the SUS detect only a fraction of the cheaters.

Unfortunately, cheaters either answer too haphazardly or have caught on to this detection method and avoid picking the same response to all items.

Conflicting questions are also viable, but be careful how you word the item. You may inadvertently trip up users who didn't see the "NOT" or who just misinterpreted the item.

How many cheaters do you see and how do you detect cheaters in your online surveys?

About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user experience.



Posted Comments

There are 8 Comments

February 1, 2014 | Nanette wrote:

Perhaps, to minimize "speeders", pay them much more. Surveys that pay a paltry $10-20 for 30 min of intense questions? You get what you pay for. You want quality, you pay for it. Curious: are the usability/stats consultants getting paid 10 or 20 bucks? Reality. It's what happens when you stop believing it.


January 11, 2012 | Survey Cheater wrote:

If surveys paid better, people would not cheat. Seriously, the surveys usually pay between $.20-$1.50 and consume more than 30 minutes of time. Based on the amount of time they are asking the person to give, the pay should be more like $2.50-$4. Unless people are supposed to be happy working for an average of $2 an hour, anyone paying slave wages should expect cheaters.


February 23, 2011 | Siema wrote:

Aha 


January 18, 2011 | Tony Cox wrote:

I just completed a RSM survey which asked for tasks to be done. There was an abandon-task option which I used when I could not complete a task. I wondered after if the task could be done or if it was impossible, so as to catch cheaters. All surveys must have a free comments box for unforeseen ideas to be recorded.


January 6, 2011 | Jon wrote:

I beg to differ. If I see a "Please select '3'" (or similar) question buried within a group of items needing a response, I become confused. If you are asking my opinion of something, why would this question even be in the group of possible responses? In some cases, selecting "3" as the answer would seem to go against the logic of the originally asked question where the "please select a '3'" appears as a listing needing a response. If this is to truly work in weeding out a "cheater," then you need to word the heading better, such as "Regardless of what is asked of you to rate in the other parts of this question/section heading, please select the number '3' in this case so we can verify that you are indeed reading the questions we wish for you to answer."

 


December 9, 2010 | Jeff Sauro wrote:

Kristine,

Great question. It really depends on the system you're using to collect the data. If you can auto-reject failed responses to the speeder questions then that would be ideal.

Even without that feature, all you need to do is periodically check how many are cheating and then keep your study open long enough to collect more data. 


December 9, 2010 | Kristine wrote:

So do you just throw out cheaters at the end? Or is there a way you can throw them out during the test and not compensate them?


December 8, 2010 | Alex Debkalyuk wrote:

Thanks for the info! 

