Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

Associating UX Changes to the Net Promoter Score

Jeff Sauro • June 4, 2013

A bad experience will impact how likely users are to recommend a website or product to a friend. 

Fixing those bad experiences is critical to increasing positive word of mouth.

Unfortunately, there are usually too many things to fix and just as many opinions on what should be fixed. Development teams need to prioritize.

An obvious way to prioritize is to fix the things that will have the biggest impact on revenue. One of the reasons A/B tests on websites are so effective and popular is that you can see directly how a single interface change increases or decreases sales through conversions.

It's not always possible to A/B test elements of the user experience, especially in software applications, and it's not always easy to associate interface changes with revenue. The Net Promoter Score is intended to be a proxy for revenue and future growth, and it is easier to associate with user-experience metrics. You can then indirectly tie changes made through early development research to later measurements of customers' likelihood to recommend.

Here is an approach I've used with clients, including Autodesk, to help associate interface-level user-experience changes to the Net Promoter Scores collected and reported on corporate dashboards.

  1. Obtain a baseline set of Net Promoter Scores: If you aren't doing so already, survey your customers to get a current baseline of how likely they are to recommend the product to a friend. Ask the 11-point likelihood-to-recommend (LTR) question about both the brand and the product. If you have a lot of functionality, you can even extend the question down to feature and functional areas. Include an open-ended question asking what's driving users to give the rating they gave. Ideally, run these surveys monthly or quarterly, or collect them in some other systematic way. You can email customers, use a pop-up on the website, or use a third party; ideally, use all three approaches, as each provides a different lens on the experience.
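To make the baseline concrete, here is a minimal sketch (not from the article) of computing NPS from raw 0-10 LTR responses. It uses the standard definition: promoters rate 9-10, detractors rate 0-6, and NPS is the percentage of promoters minus the percentage of detractors.

```python
# Minimal sketch: compute a Net Promoter Score from 0-10
# likelihood-to-recommend (LTR) responses.
# Promoters score 9-10, detractors 0-6; NPS = %promoters - %detractors.

def net_promoter_score(ratings):
    """Return NPS as a percentage from a list of 0-10 LTR ratings."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
ratings = [10, 9, 9, 10, 9, 8, 7, 8, 3, 6]
print(net_promoter_score(ratings))  # 50% - 20% = 30
```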

  2. Ask a set of standardized usability questions: The SUPR-Q is a questionnaire with an additional 12 items for users to rate, four of which provide a standardized measure of usability. The other items provide measures of trust, credibility, and appearance. Trust often plays a more central role than usability in driving negative word of mouth about your product or website. For software you can use the System Usability Scale (SUS), and we've found that asking a single question about overall ease of use often suffices.
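For reference, SUS scoring can be sketched as follows. The scoring rule (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5) is the standard SUS formula, not something specific to this article.

```python
# Sketch of scoring the System Usability Scale (SUS):
# ten 1-5 items; odd items contribute (response - 1), even items
# contribute (5 - response); the sum is multiplied by 2.5
# to yield a 0-100 score.

def sus_score(responses):
    """responses: list of ten 1-5 ratings in questionnaire order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```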

  3. Ask a handful of key questions about features and functions: Many products and websites have a vast number of features and functions. While you can't expect detailed metrics for every one of them, you should be able to collect data at a level that lets you narrow your focus. For example, are poor-quality reviews, advertisements, checkout forms, or shipping costs driving detractors? Having respondents rate each of these aspects gives you input for the next critical step of narrowing your focus.

  4. Use a Key Driver Analysis to identify what's driving NPS: With the NPS scores and items about usability, trust, and specific feature areas, you likely have the pool of candidates driving word of mouth. To determine which are having the biggest impact, you can use a multivariate technique called multiple regression analysis, also called a Key Driver Analysis. It statistically determines which aspects have the biggest impact on NPS and allows you to prioritize. The graph below shows the output of a key-driver analysis: impressions of usability have the biggest impact on users' likelihood to recommend this software product.


    Figure 1: Output of a Key Driver Analysis from a web-based software application. The vertical (y-axis) shows how much each of the items contributes to users' likelihood to recommend. Usability was measured using the four items from the SUPR-Q and is the biggest driver of LTR. It's more than five times as important as Feature A.
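A key driver analysis of this kind can be sketched as ordinary least squares on standardized (z-scored) variables, so the coefficients are directly comparable across drivers. The driver names and simulated data below are illustrative assumptions, not the article's data.

```python
# Sketch of a key driver analysis: regress LTR on candidate drivers
# and compare standardized regression coefficients. The variables
# (usability, trust, feature_a) and the simulated data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
usability = rng.normal(size=n)
trust = rng.normal(size=n)
feature_a = rng.normal(size=n)
# Simulated LTR ratings driven mostly by usability impressions
ltr = 2.0 * usability + 0.8 * trust + 0.3 * feature_a + rng.normal(size=n)

def standardized_betas(y, predictors):
    """OLS on z-scored variables; betas are comparable across drivers."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

betas = standardized_betas(ltr, [usability, trust, feature_a])
for name, b in zip(["usability", "trust", "feature_a"], betas):
    print(f"{name}: {b:.2f}")  # usability has the largest beta
```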

  5. Analyze verbatims: Sort the open-ended comments from the survey into meaningful groupings to get clear examples of where experiences fall short. We'll often find a handful of no-brainer fixes just from reading through participants' comments. Don't let these go to waste.

  6. Design a usability test to uncover interface-level changes: Now that you have some idea which features or areas of the website need improvement, conduct a baseline usability test on the identified areas. If the checkout form is driving users not to recommend or return to a website, use the testing to understand which elements of the form are problematic. Are there too many fields, confusing error messages, too many steps, or unanticipated shipping charges? Along with the problems users encounter, collect baseline metrics on efficiency (time to complete), effectiveness (whether they completed), and satisfaction (questionnaires). Ask the NPS question along with the same set of items (such as the SUPR-Q) used in the baseline survey. This allows you to see how well scores from simulated studies match your higher-level baseline data.
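Summarizing the baseline metrics might look like the sketch below (the task times are made up for illustration). It reports the completion rate for effectiveness and the geometric mean of task times for efficiency, since task times are typically right-skewed and the geometric mean is a better estimate of the center than the arithmetic mean.

```python
# Sketch of summarizing baseline usability-test metrics:
# completion rate (effectiveness) and geometric mean task time
# (efficiency). Data below are hypothetical.
from math import exp, log

times = [42, 55, 38, 120, 47, 63, 51, 90, 44, 58]   # seconds, illustrative
completed = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]          # 1 = task success

completion_rate = sum(completed) / len(completed)
# Geometric mean: exponentiate the mean of the log times
geo_mean_time = exp(sum(log(t) for t in times) / len(times))

print(f"completion: {completion_rate:.0%}, "
      f"geometric mean time: {geo_mean_time:.0f}s")
```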

  7. Make design changes: Based on the findings of the usability tests, make changes to the interface to improve the experience. It's better to do this iteratively than to put all your testing and design efforts into a one-shot fix-and-test approach.

  8. Conduct a follow-up usability study: Using the same set of tasks and metrics, see if there is evidence that the usability problems identified have been corrected and the metrics have improved. You can do this even with small sample sizes (10 or so users), but you will be limited to detecting only large differences (for example, 20-30 percentage-point differences in completion rates).
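A rough sketch of that comparison: a two-proportion z-test (normal approximation) on the baseline and follow-up completion rates. Consistent with the point above, at 10 users per group only a large difference reaches significance; the counts below are hypothetical.

```python
# Sketch: compare task completion rates from the baseline and
# follow-up usability tests with a two-proportion z-test
# (normal approximation). Counts are hypothetical.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p) for completion counts x1/n1 vs x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Normal CDF via erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# 4 of 10 completed at baseline vs 9 of 10 after the redesign
z, p = two_proportion_z(4, 10, 9, 10)
print(f"z = {z:.2f}, p = {p:.3f}")  # a 50-point difference is detectable
```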

  9. Conduct the follow-up survey: Using the same system and set of questions you used to obtain the baseline NPS data, collect more data from users who have had time to experience the new interface. Compare the old scores to the new scores statistically to see whether the improvements are beyond what you'd expect from chance variation. The Net Promoter Score is based on two paired proportions and has margins of error that tend to be about twice as wide as those of simple proportions. Don't make the mistake of basing decisions on random variation: you may need sample sizes in the high hundreds to low thousands to determine whether a 5-10 point shift in NPS is statistically significant.
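The wide margin of error follows from NPS being a difference of two proportions estimated from the same sample, which has variance (p_promoters + p_detractors - NPS^2) / n. The sketch below uses that variance to test a before/after shift; the counts are hypothetical and illustrate how even an 8-point shift at n = 500 per wave can fail to reach significance.

```python
# Sketch: test whether an NPS shift is beyond chance variation.
# NPS is a difference of paired proportions from one sample, so
# Var(NPS) = (p_prom + p_det - NPS^2) / n. Counts are hypothetical.
from math import sqrt, erf

def nps_and_se(promoters, detractors, n):
    nps = (promoters - detractors) / n
    var = (promoters / n + detractors / n - nps ** 2) / n
    return nps, sqrt(var)

def nps_change(before, after):
    """before/after: (promoters, detractors, n); return (shift, p)."""
    nps1, se1 = nps_and_se(*before)
    nps2, se2 = nps_and_se(*after)
    z = (nps2 - nps1) / sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return nps2 - nps1, p

# NPS rises from 20 (200 promoters, 100 detractors, n=500)
# to 28 (230 promoters, 90 detractors, n=500)
diff, p = nps_change((200, 100, 500), (230, 90, 500))
print(f"shift = {diff * 100:.0f} points, p = {p:.3f}")
```

Even this 8-point shift is not significant at n = 500 per wave, which is why samples in the high hundreds to low thousands are often needed.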

Linking low-level interface changes to high-level customer attitudes isn't an exact science. Many variables affect why users do and don't recommend a website or product. What's more, you can make massive improvements in the user interface and not see them reflected in new Net Promoter Scores. This is usually because other factors have a much larger impact on recommendations: different tasks, other parts of the interface, different types of users, or things outside the control of most UX departments, like pricing, compatibility, or features.

The approach outlined above, however, does provide a framework for capturing the effects of improving the interface and associating those improvements to more macro attitudes reflected in the Net Promoter Score.


About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user-experience.

Posted Comments

There are 3 Comments

July 18, 2013 | Your friend at HBR wrote:

Net promoter is as dead as the Nehru jacket. ;) 


June 16, 2013 | Henrietta wrote:

The genius store called, they're running out of you. 


June 11, 2013 | Anna wrote:

NPS looks like a simple instrument but it is neither easy to use nor easy to interpret in daily business. We use it regularly in our company to monitor customer satisfaction. Here are some drawbacks:

Think carefully HOW you ask for data. Via online survey, email, phone call, or does a person ask during a meeting? We have noticed that the best NPS results come when a person asks directly.

Think carefully WHO you ask. Do you always ask the same person? Or do you ask people in different roles? A salesman finds your usability horrible but a scientist/product owner does not. You ask twice, the salesman first, later the scientist - what does that mean for your results?

Think carefully WHEN you ask. The context matters a lot. We always ask three times: when we deliver a product, when the product has its first routine maintenance, and one year after delivery. The first NPS questions arrive with the bill as a letter and always produce the worst results. The second questions are asked by service people and always produce the best results.

Maybe it is useful to use the NPS for consumer products and a huge target group. But I would not recommend it for B-to-B with very small target groups. 

