Can users self-report usability problems?
Jeff Sauro • October 6, 2010
Usability testing doesn't have to be expensive, time consuming, or involve lots of users. Jakob Nielsen popularized this discount approach
two decades ago: a focus on finding and fixing problems by testing early and often with small samples generates major insights.
More recently, Steve Krug
has taken this informal approach to the masses by encouraging website owners to spend a few minutes a month watching users (or even a neighbor) attempt tasks.
Nielsen and Krug make a convincing case that a little effort spent on usability can yield major benefits. In fact, they have made such a compelling case that you might wonder whether we can eliminate the UX middleman altogether. Can users just report the problems themselves?
Have the users do the testing
Having users self-report problems is different from having users complete usability tests remotely. In a typical unmoderated remote test using a popular service like usertesting.com, users are asked to attempt tasks and provide feedback.
As efficient as remote unmoderated tests are, someone still needs to review the recorded sessions, sift through user comments, document the usability problems, and come up with solutions. The process is still time consuming.
Having users self-report usability problems involves asking users to complete the same tasks as they would in a lab-based or remote unmoderated test. The difference is that users document the problems, where they encountered the problems, and assign severity ratings. Problem recording mechanisms can be as low-tech as a Word document or as high-tech as a real-time web-based reporting system.
Users Report Half the Problems Experts Do
A few researchers have examined the effectiveness of self-reported usability problems. It turns out users do reasonably well. On average, users report about half the problems trained usability professionals find while watching users in a lab (Castillo et al., 1998; Bruun et al., 2009; Andreasen et al., 2007). This applies to both critical and severe problems. The method appears to work well on both software (e.g., Mozilla Thunderbird) and websites (e.g., IMDB.com).
Users Find Problems Experts Don't
Not only are users reasonably good at finding the same problems as seasoned professionals, they also find problems experts don't. This is especially likely with learnability issues, where users tend to have a different mental model of the system than experts do.
Users Don't Seem to Mind
In the analysis by Castillo, users didn't seem to mind spending time describing and reporting the problems. Even users who gave the longest and most detailed reports said they felt that reporting problems didn't interfere much with performing the tasks. They also didn't feel like the reports needed to be anonymous.
Some Tips on Self-Reporting Problems
Here are some tips on having users self-report problems.
- Your website or software needs to be functioning reasonably well and accessible remotely.
- Allow users to report anonymously (even though most probably won't mind being identified).
- Provide some guidelines or training on how to report the problems. A short document, presentation or video would suffice.
- Compensate your users like you would for any remote usability test.
- Have users report some combination of the following:
- Description of the usability problem
- Severity of the problem (1 = trivial to 5 = critical)
- Where the user encountered the problem (URL or screen)
- What the user was doing (which task)
- Expectation about what should have happened
- Could the user recover? If so how?
- Possible design/programming solutions to the problem
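If you collect reports through a script or web form rather than a Word document, the fields above map naturally onto a simple record. Here is a minimal sketch in Python; the class and field names are illustrative, not part of any real reporting tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProblemReport:
    """One self-reported usability problem (fields mirror the list above)."""
    description: str                      # what went wrong, in the user's words
    severity: int                         # 1 = trivial ... 5 = critical
    location: str                         # URL or screen where it happened
    task: str                             # which task the user was attempting
    expectation: str                      # what the user expected to happen
    recovered: bool                       # could the user recover?
    recovery_steps: Optional[str] = None  # how they recovered, if they did
    suggested_fix: Optional[str] = None   # optional design/programming idea

    def __post_init__(self) -> None:
        # Reject severity ratings outside the 1-5 scale.
        if not 1 <= self.severity <= 5:
            raise ValueError("severity must be between 1 and 5")
```

Even a lightweight structure like this makes it easy to sort reports by severity or group them by screen before handing them to the development team.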
Self-Reporting is no Panacea
Having users report usability problems can be another effective tool for gathering low-cost feedback. For example, heuristic evaluations
work best when done with multiple evaluators, which can be hard if you're the only UX person on a product (or at your company). Having users self-report problems in addition to a heuristic evaluation can be an effective strategy.
Self-reporting isn't likely to replace the need for usability professionals anytime soon. This approach can't replace traditional user testing or heuristic evaluations: users still miss about half the problems, especially in more complex systems (although there are usually more problems than development teams can get to anyway).
So if you were looking for an excuse to get rid of us pesky UX professionals, sorry, it appears our jobs are safe for now!
- Castillo, J. C., Hartson, H. R., and Hix, D. (1998). Remote usability evaluation: Can users report their own critical incidents? Proceedings of CHI 1998, ACM Press, 253-254. (See also Virginia Tech Remote Usability Evaluation.)
- Fu, L., Salvendy, G., and Turley, L. (1998). Who finds what in usability evaluation. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, 1341-1345.
- Petrie, H., Hamilton, F., King, N., and Pavan, P. (2006). Remote usability evaluation with disabled people. Proceedings of CHI 2006, ACM Press, 1133-1141.
- Bruun, A., Gull, P., Hofmeister, L., and Stage, J. (2009). Let your users do the testing: A comparison of three remote asynchronous usability testing methods. Proceedings of CHI 2009, ACM Press, 1619-1628.
- Andreasen, M., Nielsen, H., Schrøder, S., and Stage, J. (2007). What happened to remote usability testing? An empirical study of three methods. Proceedings of CHI 2007, ACM Press.