
Predicting Net Promoter Scores from System Usability Scale Scores

Revisiting the Regression Reveals a Simple Predictive Rule-of-Thumb

Guest Post By Jim Lewis • January 3, 2012

Introduced in 2003 by Fred Reichheld, the Net Promoter Score (NPS) has become a popular metric of customer loyalty in industry. 

The NPS uses a single Likelihood to Recommend (LTR) question ("How likely is it that you would recommend our company to a friend or colleague?") with 11 scale steps from 0 (Not at all likely) to 10 (Extremely likely).



In NPS terminology, respondents who select a 9 or 10 are "Promoters," those selecting 0 through 6 are "Detractors," and all others are "Passives."  The NPS from a survey is the percentage of Promoters minus the percentage of Detractors, making the NPS a top-box-minus-bottom-box metric (actually, top 2 minus bottom 7 boxes); thus, the "net" in Net Promoter.

For example, suppose you've collected 100 LTR ratings for a company, of which 25 fall between 0 and 6 (25% Detractors), 25 fall between 7 and 8 (25% Passives), and 50 fall between 9 and 10 (50% Promoters).  The resulting NPS is the percentage of Promoters minus the percentage of Detractors, in this case, 25%.  The developers of the NPS hold that this metric is easy for managers to understand and to use to track improvements over time, and that improvements in NPS have a strong relationship to company growth. The metric becomes especially valuable when compared to industry benchmarks.
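
To make the arithmetic concrete, here is a minimal Python sketch of how an NPS is computed from raw LTR ratings. The function name and the hard-coded ratings are ours, constructed to match the example above:

def net_promoter_score(ltr_ratings):
    # Compute NPS from a list of 0-10 Likelihood to Recommend ratings.
    n = len(ltr_ratings)
    promoters = sum(1 for r in ltr_ratings if r >= 9)   # ratings of 9 or 10
    detractors = sum(1 for r in ltr_ratings if r <= 6)  # ratings of 0 through 6
    return 100.0 * (promoters - detractors) / n         # expressed as a percentage

# The example above: 25 Detractors, 25 Passives, 50 Promoters
ratings = [5] * 25 + [7] * 25 + [10] * 50
print(net_promoter_score(ratings))  # 25.0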
 
Since its introduction, the NPS has generated controversy. For example, Keiningham et al. (2007, 2008) challenged the claim of a strong relationship between NPS and company growth.  In general, top-box and top-box-minus-bottom-box metrics lose information during the process of collapsing measurements from a multipoint scale to percentages of a smaller number of categories, and thus lose sensitivity (although increasing sample sizes can make up for lack of sensitivity in a metric).  

Despite these criticisms, the simplicity and intuitive appeal of the NPS make it unlikely that its popularity will diminish any time soon.

* Our friends at Satmetrix want us to remind you that Net Promoter, NPS, and Net Promoter Score are trademarks of Satmetrix Systems, Inc., Bain & Company, and Fred Reichheld.

The System Usability Scale (SUS)

Despite being a self-described "quick and dirty" usability scale, the System Usability Scale (SUS), developed in the mid-1980s by John Brooke, has become a popular questionnaire for end-of-test subjective assessments of usability.

The SUS accounted for 43% of post-test questionnaire usage in a recent analysis of a collection of unpublished usability studies. Research conducted on the SUS has shown that although it is fairly quick, it is probably not all that dirty.

The Initial Regression Equation from January 2010

Two years ago we published a regression equation for predicting someone's likelihood to recommend (LTR) a product given their System Usability Scale (SUS) score.  That equation was:

LTR = 0.52 + 0.09(SUS)

In other words, to convert a SUS score (which ranges from 0 to 100) into an LTR rating (which ranges from 0 to 10), you'd take 9% of the SUS score and then add about 0.5.  Analysis of the regression indicated that the SUS scores explained about 36% of the variation in LTR ratings (which corresponds to a statistically significant correlation of about .6 between SUS and LTR).

Revisiting the Regression Equation

After publishing the initial equation (for which n = 146), we continued collecting LTR and SUS data, increasing the number of individual pairs of scores to just over 2200 (distributed over 81 companies with sample sizes ranging from 4 to 113).  With this new data added, the resulting regression equation is:

LTR = 1.33 + 0.08(SUS)

Although the parameters are somewhat different, this equation isn't dramatically different from the initial one.  The intercept is somewhat greater (1.33 instead of 0.52) and the slope a little less steep (8% instead of 9%).  The percentage of variation in LTR explained by SUS is slightly higher (about 39%, corresponding to a statistically significant correlation between LTR and SUS of .623).

When you change the data from which you derive a regression equation, you expect some change in the parameters, so this shouldn't be shocking news, especially with the new sample roughly 15 times the size of the initial one.
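
As a concrete sketch, the updated equation translates directly into a one-line prediction function (Python; the function name is ours):

def predict_ltr(sus):
    # Updated regression equation (n > 2200): estimate a 0-10 LTR rating
    # from a 0-100 SUS score. The initial (2010) equation was 0.52 + 0.09 * sus.
    return 1.33 + 0.08 * sus

print(predict_ltr(75))  # 1.33 + 0.08 * 75 = 7.33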

Simplifying the Regression Equation

The good news is that the regression equation you get from applying standard least squares methods provides a constant and a slope that guarantee minimal prediction error for the data used to provide the estimates.  The bad news is that the resulting equation isn't likely to be easy to remember.

One of the things Jeff and I noticed about both regression equations was that the slope was almost equal to 0.1 (10%), so we wondered what would happen to the quality of the regression equation if we dropped the intercept (mathematically, forcing its value to 0).  If that changed the slope to 10%, it would produce a very easy-to-remember relationship between LTR and SUS: if you know the SUS score, just divide it by 10 to get an estimate of the user's likelihood to recommend.

LTR = 0.1(SUS)
LTR = SUS/10
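
If you want to check this kind of simplification against your own data, fitting a zero-intercept (through-the-origin) regression is straightforward: the least-squares slope with no intercept is the sum of the cross-products divided by the sum of the squared predictors. A minimal sketch, with hypothetical paired observations standing in for the real dataset:

def origin_slope(sus_scores, ltr_ratings):
    # Least-squares slope for LTR = b * SUS (intercept forced to 0):
    # b = sum(x * y) / sum(x * x)
    sxy = sum(x * y for x, y in zip(sus_scores, ltr_ratings))
    sxx = sum(x * x for x in sus_scores)
    return sxy / sxx

# Hypothetical paired observations (not the actual dataset):
sus = [50, 62.5, 70, 77.5, 85, 92.5]
ltr = [5, 6, 7, 8, 8, 9]
print(round(origin_slope(sus, ltr), 3))  # about 0.098 for these data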

Whenever you deviate from the parameters indicated by least squares regression, you expect the quality of the regression formula, as measured by its coefficient of determination (the percentage of variance explained), to decline.  The question was how much quality we would lose as a consequence of this simplification.

It turned out that the percentage of variation in LTR explained by SUS for the simplified equation was about 37% (corresponding to a statistically significant correlation of about .606), a drop of only 2%.  If you don't remember the updated equation, you will get almost as good a prediction with the easier-to-remember simplified one.
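
To see how much explanatory power any prediction rule keeps on your own data, you can compute the coefficient of determination directly as 1 minus the ratio of the residual to the total sum of squares. A minimal sketch, again on hypothetical data:

def r_squared(actual, predicted):
    # Coefficient of determination: 1 - SS_residual / SS_total
    mean_y = sum(actual) / len(actual)
    ss_tot = sum((y - mean_y) ** 2 for y in actual)
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

# Hypothetical paired observations (not the actual dataset):
sus = [40, 55, 70, 85, 100]
ltr = [5, 6, 7, 8, 9]
fitted = [1.33 + 0.08 * x for x in sus]  # updated regression equation
simple = [x / 10 for x in sus]           # simplified rule of thumb
print(r_squared(ltr, fitted), r_squared(ltr, simple))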

Using the Regression Equations

If you have existing SUS scores from usability evaluations, you can use either of these regression equations to estimate LTR and, from those LTR estimates, compute the corresponding estimated NPS.  A shortcut calculator is provided below, which will also convert the estimated LTR to a Net Promoter Score.
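
If you have the individual SUS scores rather than just the mean, one way to carry out that two-step estimate is sketched below. Note that rounding each predicted LTR to the nearest scale step before classifying respondents is our assumption, not part of the published equations:

def estimate_nps_from_sus(sus_scores):
    # Predict each respondent's LTR from the updated regression equation,
    # round to the nearest 0-10 scale step (our assumption), classify,
    # and compute the NPS as % Promoters minus % Detractors.
    predicted_ltr = [round(1.33 + 0.08 * s) for s in sus_scores]
    promoters = sum(1 for r in predicted_ltr if r >= 9)
    detractors = sum(1 for r in predicted_ltr if r <= 6)
    return 100.0 * (promoters - detractors) / len(sus_scores)

# Hypothetical SUS scores from a usability evaluation:
print(estimate_nps_from_sus([55, 68, 72.5, 80, 90, 95, 97.5]))  # about 28.6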

SUS to Net Promoter Score Converter

[Interactive calculator: enter a mean SUS score to get an estimated Net Promoter Score, or enter an NPS to get an estimated mean SUS score.]

This can be helpful if your company is using NPS from other voice-of-the-customer sources as a consistency check.  If you're serious about using LTR in the future, it is a small effort to collect ratings of the LTR item in addition to the SUS rather than estimating it.


About Jim Lewis, PhD: Jim has worked as a human factors engineer and usability practitioner at IBM since 1981. He has published influential research on the measurement of usability satisfaction, the use of confidence intervals, and sample size estimation for usability studies.

He is a BCPE Certified Human Factors Professional, an IBM Master Inventor, and a member of UPA, HFES, APS and APA. He is the author of Practical Speech User Interface Design (Taylor & Francis 2011) and is co-author of the forthcoming book Quantifying the User Experience (Morgan Kaufmann 2012).



About Jeff Sauro

Jeff Sauro is the founding principal of Measuring Usability LLC, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 4 books on statistics and the user experience.



Posted Comments

There are 13 Comments

April 22, 2014 | Steve wrote:

I am trying to understand the legal implications of NPS and LTR. If Satmetrix trademarked NPS, is it safe to assume nobody but Satmetrix can use that metric? On the other hand, LTR seems to be a generic version of the same metric and can be used in studies by anyone. Is that correct?


February 11, 2014 | CB wrote:

After what type of 'usability evaluation' do you give the SUS in order to accurately predict NPS? When the NPS question is administered, it is intended to measure the users' entire experience with a company and/or product, including (where applicable) things like purchasing, installation, product usage, support, renewal, product replacement, etc. After a typical usability test, the user may have been exposed to a few scenarios or use cases, usually dealing with direct product use (not exposing marketing or support or many other aspects of the user experience). How can a SUS evaluation of the limited experience during a usability test consistently and accurately predict the full end-to-end experience which NPS intends to measure?


August 9, 2013 | David wrote:

For people worried about using NPS as a usability score - I think you may have it backwards. In my experience, we have used this regression equation to go from SUS scores (of systems under development) to give product managers a prediction of the NPS of their product if they do nothing. NPS is a very poor usability metric, but SUS can be a tolerable predictor of customer satisfaction.


December 18, 2012 | Renee wrote:

Hi, would it be possible to obtain more information about the methodology and the regression findings (standard error of the estimate, whether the equation is significant, etc.)? I'm also interested to understand how you convert a predicted LTR score into an NPS score. Thank you.


June 1, 2012 | Martin Talbot wrote:

Hi, is this work on Predicting Net Promoter Scores from System Usability Scale Scores published in a conference or journal? If so, could you please forward me the reference. Thank you. 


April 17, 2012 | Chris H wrote:

You really need to understand the context before considering using NPS. It should never be used as the key metric of usability. A site can have a great user experience but still not get a good NPS for three key reasons: 1) Relevancy (you'd only recommend sites to friends / colleagues if they share the same interests / needs), 2) Ubiquity (would you recommend Google for search when everyone already knows about it?), 3) Not being weird (e.g., would you recommend your bank's website to your friends?)


March 14, 2012 | Jeff Sauro wrote:

Nikolaj, Thanks for the comment.

The results are also published in our book: Quantifying the User Experience: Practical Statistics for User Research in Chapter 8 on pg 230 in a Sidebar. 


March 14, 2012 | Nikolaj wrote:

Hi! Have you guys had your findings published? I would love to use it in my master's thesis, but it does carry more weight with my examiners if it is a published paper.


February 8, 2012 | John Romadka wrote:

I like Anthony's question, and I understand your response to him. I have a follow-up question: If you convert the NPS to a SUS score (and you know you're only getting about a third of what you want), and want to use it as a baseline to compare against at a later point in time (using the full SUS), can you express the "NPS converted to SUS" score with a margin of error to account for the "loss of info" as a means of qualifying the score?

Example: Let's say I have 100 NPS scores from 100 different products from last year, and no SUS was included. I would like to use those NPS (converted to SUS) from last year as a baseline, with the intention of running SUS studies this year for the same 100 products. Can I express the "baseline NPS converted to SUS" as simply having a larger margin of error? As you said, it's better than nothing at all.


February 8, 2012 | Jeff Sauro wrote:

Anthony

Good question. You CAN use just the single NPS question as a substitute for the SUS, as there is a strong correlation. HOWEVER, I don't recommend it.

The reason is, a strong correlation in this case means you're only capturing about 36% of what you'd get from using the SUS. That is, the NPS loses about 64% of what the SUS measures. This speaks to the adjusted R-squared value (the correlation coefficient squared).

In social science research, a correlation above .5 is considered high (the same correlation as the SAT and college grades for example).

So you could just use the NPS, but know that you're losing a lot of the information. Whether that's OK with you largely depends on the context; after all, asking 1 question and getting about a third of what you want is probably better than asking no questions and getting 0% of what you want.


February 3, 2012 | Anthony wrote:

Great post! Jeff and Jim, I'd love your opinion about this: Can I just ask the NPS question at the end of a usability session? It's "quicker" than SUS...is it much "dirtier"? 


January 11, 2012 | Jeff Sauro wrote:

Thanks John. Fixed that bug. 


January 10, 2012 | John Romadka wrote:

Great article. However, there seems to be a bug in the JavaScript. The SUS part of the calculator works, but not the NPS.

 


