Measuring Usability
Quantitative Usability, Statistics & Six Sigma by Jeff Sauro

10 Things to Know about Net Promoter Scores and the User Experience

Jeff Sauro • April 24, 2012

Increasingly, companies are adopting the Net Promoter Score as their corporate metric.

In many companies, all metrics, including user experience metrics, are expected to roll up to the Net Promoter Score.

Here are 10 things to know about the Net Promoter Score if you're concerned about improving the user experience.
  1. The Net Promoter Score is a measure of customer loyalty and is based on a single question: How likely is it that you'll recommend this product to a friend or colleague?  The response options range from 0 (Not at all likely) to 10 (Extremely likely). Responses are then bucketed into the following segments.

    Promoters: responses of 9-10
    Passives: responses of 7-8
    Detractors: responses of 0-6

    Subtracting the proportion of detractors from the proportion of promoters and converting it to a percentage gets you the Net Promoter Score.  For example, 100 promoters, 30 passives and 80 detractors gets you a Net Promoter Score (NPS) of 9.5% ((100 - 80) divided by 210). This means there are 9.5% more promoters than detractors. An NPS of -10% means you have 10% more detractors than promoters.

    Our friends at Satmetrix want us to remind you that Net Promoter, NPS, and Net Promoter Score are trademarks of Satmetrix Systems, Inc., Bain & Company, and Fred Reichheld.
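The arithmetic above is easy to script. A minimal sketch in Python (the function name and sample data are mine, for illustration; this isn't from any official NPS tool):

```python
def nps(responses):
    """Net Promoter Score from a list of 0-10 likelihood-to-recommend responses."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# The example above: 100 promoters (9s), 30 passives (7s), 80 detractors (3s)
sample = [9] * 100 + [7] * 30 + [3] * 80
print(round(nps(sample), 1))  # 9.5
```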


  2. The Net Promoter Score is appealing because of its simplicity: it's a single question, it's easy to score, and it's expressed as a percentage, which can be more digestible to executives and non-math types than interpreting a mean (e.g. 70% Net Promoters vs. 7.9 out of 10). It can be confusing to have a negative percentage, and for this reason some companies prefer to just call it a "score" rather than a percentage. Think of it like net income (which we all know can be negative). It's no different than subtracting two dependent proportions, as we explain in Chapter 5 of our book Quantifying the User Experience.

  3. The main advantage of the Net Promoter Score is that it gets companies thinking about metrics that come from the customer. Yes, revenue is the ultimate metric, but revenue is both a lagging indicator and not necessarily a good indicator of future growth--especially when you're pissing off customers to get short-term revenue (think of the latest fee from your phone company, cable company or rental-car company). What's more, you can't do anything about last quarter's numbers. If you have a reasonable proxy for future growth and revenue, then you might be able to improve next year's revenue. In the process, you will also likely make your customers happier and more loyal!

  4. The main disadvantage of the Net Promoter Score is that it reduces an 11-point scale to a 3-point scale (Detractors, Passives and Promoters). This has two major consequences. First, it increases the sample size you need in order to achieve the same level of precision as using the mean; the margin of error is usually around twice as wide as with the more conventional approach (mean and standard deviation). Second, it is harder to detect differences between scores, either over time or compared to a competitor.  For these reasons I keep the raw responses and use means and standard deviations in t-tests and regression analysis.
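You can see this precision penalty for yourself by comparing an approximate margin of error for the NPS against one for the raw mean on the same responses. A rough sketch (the function names and simulated data are illustrative; the NPS variance here treats each response as scored +1 for a promoter, 0 for a passive and -1 for a detractor, and the mean's margin of error is rescaled by 10 so the two figures are loosely comparable):

```python
import math
import random

def nps_margin_of_error(responses, z=1.96):
    """Approximate 95% margin of error (in NPS points), treating each
    response as scored +1 (promoter), 0 (passive) or -1 (detractor)."""
    n = len(responses)
    p_pro = sum(1 for r in responses if r >= 9) / n
    p_det = sum(1 for r in responses if r <= 6) / n
    variance = p_pro + p_det - (p_pro - p_det) ** 2
    return 100 * z * math.sqrt(variance / n)

def mean_margin_of_error(responses, z=1.96):
    """95% margin of error around the raw 0-10 mean, rescaled by 10
    to a 0-100 range so the two figures are roughly comparable."""
    n = len(responses)
    mean = sum(responses) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in responses) / (n - 1))
    return 10 * z * sd / math.sqrt(n)

# Simulated survey of 200 responses on the 0-10 scale
rng = random.Random(42)
responses = [rng.randint(0, 10) for _ in range(200)]
print(round(nps_margin_of_error(responses), 1))   # notably wider...
print(round(mean_margin_of_error(responses), 1))  # ...than the mean's
```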

  5. Despite the popularity and enthusiasm for it being the "Ultimate" question, there might be better questions for your company or industry: many measures of customer satisfaction and customer loyalty correlate. Reichheld, in his 2006 book "The Ultimate Question" (p. 28), points out that the likelihood-to-recommend question was the best or second-best predictor of repeat purchases or referrals in 11 out of 14 industries (79%).  Likelihood to revisit, repurchase or reuse might be a better indicator of customer loyalty for your product or industry. I often saw this with business-to-business products I worked on. How likely is it that you'd recommend this non-profit accounting software to your friend? Despite the question's somewhat limited relevance, it still correlated highly with other questions and we were still able to focus on changes over time. So don't throw the baby out with the bathwater.

  6. Don't just collect NPS: The Net Promoter Score might be a good number to track, but it's usually the symptom of high or low customer loyalty, not the cause. People are or are not recommending the product, website or service because of something—you need a few good candidate questions in your short surveys so you can identify the root causes and improve.  Usually questions about value, quality, usability and a few key features will get you on the right track. You can then conduct a key-driver analysis to determine statistically which features or attitudes are having the biggest impact on Net Promoter Scores.  In one key-driver analysis I conducted for a client, I found the biggest driver of detractors was that emails were being sent too often to customers!
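A full key-driver analysis typically uses multiple regression, but even a simple correlation ranking of candidate questions against likelihood to recommend can surface the strongest candidates. A sketch with made-up survey data (every name and number here is hypothetical, including the email-frequency question, which echoes the anecdote above):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical responses: one likelihood-to-recommend rating per respondent,
# plus their ratings on each candidate driver question.
likelihood = [9, 3, 8, 10, 2, 7, 6, 9, 4, 10]
drivers = {
    "value":           [8, 4, 7, 9, 3, 6, 5, 8, 4, 9],
    "usability":       [9, 2, 8, 9, 3, 7, 5, 9, 3, 10],
    # higher = emails sent more often; expected to hurt likelihood to recommend
    "email frequency": [4, 7, 5, 3, 8, 6, 7, 4, 8, 3],
}
ranked = sorted(drivers, key=lambda d: pearson(drivers[d], likelihood), reverse=True)
print(ranked)  # strongest candidate driver first
```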

  7. Compare to Benchmarks: The NPS by itself might seem more intuitive than an average score because it is expressed as a percentage, but what makes good, average or poor scores varies a lot by industry (think cable companies versus luxury hotel chains).  For example, the average NPS for consumer software products is 21% compared with about a 6% for cable providers.

  8. Ask "why" for detractors:  If I could ask only one open-ended question on a survey, it would be for detractors to briefly explain why they gave a 0-6 response.  You can usually categorize these responses pretty quickly into major groupings.   Often, many of the detractors will say things you can't do much about, like "I just don't recommend products to friends" or "I really like the product," but there are almost always some quick fixes and patterns in what you can fix.

  9. Ease of Use explains between 30% and 50% of users' likelihood to recommend in software and websites. A large analysis of System Usability Scale (SUS) scores taken along with Net Promoter Scores found that a good chunk of why people recommend is based on their perception of the ease of use. Improving ease of use, then, should improve loyalty. How do you improve ease of use?  A quick usability test with just a handful of participants will often reveal the most obvious issues.

  10. Not all promoters are created equal. Just because a respondent gives a 9 or 10 on the likelihood to recommend question doesn't mean they will actually recommend. To measure what I call promoter efficiency you ideally track customers over time to see if they actually recommended to a friend. As an alternative, ask respondents in the same survey if they actually have recommended to anyone in the last year and use that as a proxy for their future behavior.  I've included this figure in the NPS benchmark report and on average 68% of promoters report having recommended in the last year (ranging from 43% to 96%).
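Computing this promoter efficiency from a survey that asks both questions is straightforward. A sketch, assuming each respondent provides a 0-10 likelihood score and a yes/no answer to "have you recommended in the last year?" (the data below are invented):

```python
def promoter_efficiency(scores, recommended):
    """Share of promoters (scores of 9-10) who report actually having recommended."""
    promoter_flags = [rec for score, rec in zip(scores, recommended) if score >= 9]
    return sum(promoter_flags) / len(promoter_flags)

# Ten hypothetical respondents: likelihood score and self-reported behavior
scores      = [10, 9, 9, 10, 8, 6, 9, 10, 7, 9]
recommended = [True, False, True, True, True, False, True, False, False, True]
print(round(promoter_efficiency(scores, recommended), 2))  # 0.71 (5 of 7 promoters)
```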




About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user-experience.
Posted Comments

There are 7 Comments

November 15, 2013 | Victor wrote:

Hi Jeff,

You have some great points in your post. What I would like to suggest, however, is to also use surveys. This enables you to get insights in what your customers think, whether it is good (strong points you can focus on) or bad (points you need to improve).

To make all of your lives easier, we have developed www.SuperSimpleSurvey.com. This makes it especially easy for all of you to create beautiful surveys, with which you have the power to pull all kinds of wisdom out of your customer.

There is no limitation on features on the free tier, and allows far more responses than most other sites around. You can even do your own branding - all for free.

Hope that makes some of your lives easier :)

Victor 


October 10, 2013 | Guy Letts wrote:

Jeff

Thanks for a useful post.

My experience differs from yours on points 4 and 7. Measurement of NPS is not the goal, so the scale doesn't really matter. It's just the one that they found worked best for customers in their research. The real goal is not measurement but *improvement* of NPS, and that can only be done one customer at a time. The way you use the individual scores is as a temperature check for that person so you fix any problems they highlight (which is why you also need a supporting text question). The overall score just tells you whether you are being effective, over time, at making all the individual improvements. That's why a broad-brush value is just fine.

And on comparisons - I've found there's simply no benefit from comparing with others. It's interesting, sure, but it doesn't improve the NPS of a single one of your customers, and it doesn't tell you how to. I wrote more on this here - would be interested to hear what you think! http://www.customersure.com/blog/how-not-to-waste-time-benchmarking-your-customer-service/


August 6, 2013 | Trevor wrote:

11. It's fundamentally wrong to assume that people who state that they are not likely to recommend an organisation are actively bad-mouthing that organisation. The question used for the NPS doesn't capture likelihood to detract, it only captures likelihood to recommend; this is an enormous phenomenological flaw.

12. The NPS assumes that people who respond with a 6 on the scale are equally as unlikely to recommend the organisation as someone who responds with a 0. This just doesn't make any sense at all. Collapsing 11 points of continuous data down to 2 categories is like downsampling a high-resolution colour photograph to a low-resolution black and white outline drawing.

13. For many organisations, most respondents will give an answer of 7 or 8. I have witnessed people throwing away 90% of their sample when calculating an NPS. Excluding 90% of responses throws away much of the richness in the data. What if most of that 90% move from a response of 8 to a response of 7 and you throw those responses away and get the same NPS? You wouldn't see the warning signs of a drop in propensity to recommend.

14. The NPS cannot be used for benchmarking because multiple data distributions can all result in the same NPS. I really can't see how the inventors and promoters of NPS couldn't have realised this fundamental flaw. When an NPS is reported to 1 decimal place, there are many hundreds of different data distributions that will deliver an identical NPS.

15. Supporters of the NPS like to market it as a "predictor" of things like growth & profitability. A measure of propensity to recommend, however, is an "outcome" variable. Think about it, it's clearly an outcome of the impact the organisation has had on the consumer. By virtue of it being an "outcome" variable it will correlate with other outcome variables, like growth and profits. Through the basics of science we all know that correlation is not causality. Classifying the NPS as a "predictor" breaks the most basic rules of good science. Again, it's hard to believe that people don't recognise this obvious flaw in the NPS.

So, there are 15 things you need to know about NPS. 


August 6, 2013 | Trevor wrote:

I can provide data sets where there is a statistically significant decrease in propensity to recommend yet the NPS increased. If management use the NPS in decision making, it would negatively impact the organisation. NPS cannot be used for benchmarking, an NPS of 50 can be achieved 26 different ways (from 50% promoters, 50% neutrals and 0% detractors through to 75% promoters, 0% neutrals and 25% detractors). How can organisations with the same NPS be compared when a multitude of different data distributions deliver an identical NPS? The day will come when an employee who is disadvantaged through the use of NPS in KPIs and performance recognition will be able to successfully sue their employer as it is very easy to prove that the NPS is misleading, invalid and unreliable. 


June 7, 2013 | Jeff Sauro wrote:

Thanks for pointing that out! It is indeed 9.5% and I just fixed it. 


June 7, 2013 | Charlie Stuart wrote:

Your calculations in the first point don't sit right with me and just look plainly wrong. This really stopped me from reading the rest of the article. 


May 28, 2013 | ana pereira wrote:

For example, 100 promoters and 30 passive and 80 detractors gets you a Net Promoter Score (NPS) of 20% ... Is this ok? From my understanding the NPS should be 9.5%, computed from (100/210)% - (80/210)%  


