In 2010, the Harvard Business Review published an article that challenged a fundamental assumption in customer experience: that delighting customers is the key to loyalty. The research, based on a study of more than 75,000 customer interactions, found that delight has a surprisingly small effect on loyalty. What has an enormous effect is effort.
Customers do not reward you for exceeding expectations. They punish you for making things hard.
This insight gave rise to the Customer Effort Score (CES)---a metric designed to measure how easy or difficult it is for customers to accomplish what they set out to do. And while NPS and CSAT remain the most widely tracked customer experience metrics, a growing body of research suggests that CES may be the most predictive of future behavior.
This guide covers everything you need to know about CES: what it measures, when to use it, how it compares to other metrics, and how to implement it in a way that actually reduces friction and improves retention.
Customer Effort Score measures the ease of a customer interaction. The standard CES question asks:
"To what extent do you agree with the following statement: [Company] made it easy for me to handle my issue."
Respondents answer on a 7-point scale from "Strongly Disagree" (1) to "Strongly Agree" (7).
Some implementations use a 5-point scale or phrase the question differently ("How easy was it to...?"), but the core idea is the same: you are measuring perceived effort rather than satisfaction or likelihood to recommend.
Satisfaction and effort are related but distinct concepts. A customer can be satisfied with the outcome of an interaction but frustrated by the effort it required. They got their refund, but it took three phone calls and 45 minutes on hold. They resolved their issue, but they had to explain it to four different people.
These high-effort resolutions create what researchers call "disloyalty"---a weakened relationship that makes the customer more susceptible to competitive offers and more likely to share negative word-of-mouth. The customer may report being "satisfied" with the resolution, but their actual loyalty has eroded.
CES captures this distinction in a way that CSAT does not. A high CSAT score with a low CES score is a red flag: you are resolving issues but making customers work too hard to get there.
The original HBR research that launched CES found several counterintuitive results that have been reinforced by subsequent studies:
96% of customers who had a high-effort experience reported being disloyal, compared to only 9% of those with a low-effort experience. This is a 10x difference in disloyalty rates. No other customer experience metric produces a gap this large.
The research found that customer service interactions are four times more likely to drive disloyalty than loyalty. In other words, the downside risk of a bad interaction vastly outweighs the upside potential of a great one. Making things easy does not create raving fans, but making things hard creates active detractors.
Customers who had high-effort experiences were 81% more likely to spread negative word-of-mouth compared to those with low-effort experiences. In an era where a single negative review is visible to thousands of potential customers, this amplification effect is consequential.
CES is a stronger predictor of repurchase behavior than CSAT. Customers who report low effort are far more likely to repurchase (94% express repurchase intent) and to increase their spending (88%). Effort reduction does not just prevent disloyalty---it actively drives economic value.
Each of the three major CX metrics has a specific purpose. Understanding their differences helps you deploy them effectively rather than using them interchangeably.
What it measures: Overall loyalty and likelihood to recommend.
The question: "On a scale of 0-10, how likely are you to recommend [Company] to a friend or colleague?"
Best for: Measuring overall brand health, tracking long-term relationship trends, benchmarking against competitors.
Limitations: NPS is a relationship metric, not a transactional one. It tells you the overall health of your customer base but does not pinpoint specific friction points. A customer might give you a 9 on NPS while still experiencing frustrating individual interactions---the overall relationship is strong enough to absorb them.
When to use it: Periodically (quarterly or monthly) to track the trajectory of customer loyalty across your entire base. Not tied to specific interactions.
What it measures: Satisfaction with a specific interaction or experience.
The question: "How satisfied were you with [specific experience]?"
Best for: Evaluating specific touchpoints, products, or interactions. Comparing satisfaction across different service channels or locations.
Limitations: CSAT measures how the customer feels right now, but it is a weak predictor of future behavior. A customer can be satisfied with today's interaction and still switch to a competitor next month. CSAT also suffers from a ceiling effect---most responses cluster at 4 and 5 on a 5-point scale, making it hard to differentiate between "fine" and "great."
When to use it: Immediately after specific interactions or experiences where you want to evaluate quality.
What it measures: Ease of a specific interaction or process.
The question: "To what extent do you agree: [Company] made it easy for me to handle my issue."
Best for: Identifying friction points, predicting churn risk, evaluating process efficiency, measuring the effectiveness of self-service channels.
Limitations: CES is interaction-specific and does not capture overall brand sentiment. A customer might find every individual interaction easy but still have lukewarm feelings about the brand overall. CES also does not measure emotional engagement---a transaction can be effortless but unremarkable.
When to use it: After support interactions, transactions, onboarding processes, or any touchpoint where effort is a factor.
The most effective CX measurement programs use all three metrics in complementary roles. Together they give you a complete picture: are customers loyal (NPS), are they satisfied with specific interactions (CSAT), and are those interactions easy (CES)?
The effectiveness of CES depends on how and when you ask the question. Poor implementation produces noisy data that obscures rather than illuminates.
The question format: Use the agree/disagree format on a 7-point scale. This is the most validated version and produces the most reliable data.
"To what extent do you agree with the following statement: [Company] made it easy for me to [specific action]."
1 = Strongly Disagree ... 7 = Strongly Agree
Keep it specific: Replace "[specific action]" with the actual activity. "Made it easy for me to resolve my issue" is better than "made it easy for me." "Made it easy for me to complete my purchase" is better still. Specificity produces actionable data.
Add one open-ended follow-up: After the rating, ask: "What could we have done to make this easier?" This qualitative data is where the actionable insights live. The number tells you there is a problem. The text tells you what the problem is.
Keep the survey short: CES surveys should be 2-3 questions maximum. The primary CES question, one follow-up, and optionally a CSAT question for comparison. Longer surveys increase abandonment and, ironically, increase the effort of giving feedback about effort.
CES should be triggered by specific events, not sent on a schedule. The ideal timing is immediately after the interaction you are measuring: when a support ticket closes, when a purchase or return completes, or when an onboarding step finishes.
There are two common approaches to calculating a CES score:
Method 1: Average Score. Sum all responses and divide by the number of responses. On a 7-point scale, anything above 5 generally indicates a low-effort experience.
Method 2: Percentage of Easy Responses. Count the percentage of responses that scored 5, 6, or 7 (the "agree" side of the scale). This gives you a percentage that is easy to communicate: "78% of customers found it easy to resolve their issue."
Method 2 is generally more useful for communication and benchmarking because it translates directly into a human-readable statement. "4 out of 5 customers found it easy" is more intuitive than "our average CES is 5.4."
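Both methods reduce to a few lines of code. This Python sketch shows them side by side; the response values are illustrative, not real survey data:

```python
def ces_average(responses):
    """Method 1: mean of all responses on the 7-point scale."""
    return sum(responses) / len(responses)

def ces_percent_easy(responses):
    """Method 2: share of responses on the 'agree' side (5, 6, or 7)."""
    easy = sum(1 for r in responses if r >= 5)
    return 100 * easy / len(responses)

responses = [7, 6, 5, 3, 6, 7, 2, 5, 6, 4]
print(ces_average(responses))       # 5.1
print(ces_percent_easy(responses))  # 70.0
```

Note how the two numbers tell slightly different stories about the same data: a respectable 5.1 average can coexist with 30% of customers finding the interaction hard.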
CES benchmarks vary by industry and interaction type, but as a general rule of thumb from the scoring methods above: an average above 5 on the 7-point scale, or a clear majority of responses on the "agree" side, indicates a healthy low-effort experience.
Collecting CES data is only valuable if you use it to find and fix friction. Here is a systematic approach.
Start by deploying CES surveys at every major customer touchpoint. This creates an "effort map" of your customer journey that shows exactly where friction concentrates.
Common high-effort touchpoints include issue resolution that requires repeat contacts or transfers between departments, long hold times, channel switching when self-service dead-ends into a phone call, and multi-step onboarding processes.
Not all friction points are equally important. Prioritize based on two dimensions: how many customers a friction point affects (frequency) and how low its CES scores are (severity).
A friction point that affects 10% of customers with a CES of 2.5 is more urgent than one that affects 50% of customers with a CES of 5.0, because the severity of the experience matters more than the frequency for predicting disloyalty.
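One way to encode "severity matters more than frequency" is a priority heuristic that squares the severity term. The exact weighting below is a hypothetical assumption you would tune against your own churn data, but it reproduces the ordering in the example above:

```python
def friction_priority(avg_ces, pct_affected):
    """Hypothetical heuristic: square the severity term so a severe
    niche problem outranks a mild widespread one."""
    severity = 7 - avg_ces          # distance from the 'effortless' end of the scale
    return severity ** 2 * pct_affected

# The example from the text: 10% of customers at CES 2.5 edges out
# 50% of customers at CES 5.0.
print(friction_priority(2.5, 10))  # 202.5
print(friction_priority(5.0, 50))  # 200.0
```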
For each priority friction point, ask three diagnostic questions:
Is the effort necessary? Some effort is inherent to the process. But much of it exists because of outdated policies, poorly designed systems, or organizational silos. If the effort is not adding value for the customer, eliminate it.
Can technology reduce the effort? Self-service options, automation, AI-powered routing, and smart forms can dramatically reduce customer effort. A customer who can resolve their issue through a well-designed self-service tool expends far less effort than one who has to call in and wait on hold.
Is the effort front-loaded or distributed? Front-loaded effort (learning a new system, completing an onboarding process) is generally tolerable if it pays off in reduced effort later. Distributed effort (every interaction is hard) is a retention emergency.
When a customer reports a high-effort experience, treat it as a case that requires follow-up. The most effective approach: reach out to the individual customer to close the loop on their specific issue, then log the root cause so recurring patterns feed your systemic friction-reduction work.
This combines the immediate retention benefit of individual follow-up with the long-term benefit of systemic friction reduction.
Rather than waiting for customers to report high effort, use operational data to predict it. If a customer has been transferred between departments twice, their effort is already high regardless of whether they fill out a survey. If a support ticket has been open for more than 48 hours without resolution, effort is accumulating.
Build triggers based on these operational signals to proactively intervene before the customer reaches the point of frustration that drives disloyalty.
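As a rough sketch, such triggers can be expressed as rules over operational ticket data. The `Ticket` fields and thresholds here are illustrative assumptions mirroring the examples above, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ticket:
    opened_at: datetime
    transfers: int      # times the customer was handed off between departments
    resolved: bool

def high_effort_signals(ticket: Ticket, now: datetime) -> list[str]:
    """Flag operational signals that predict high effort before
    any survey is answered. Thresholds would be tuned to your data."""
    signals = []
    if ticket.transfers >= 2:
        signals.append("transferred 2+ times")
    if not ticket.resolved and now - ticket.opened_at > timedelta(hours=48):
        signals.append("open 48+ hours unresolved")
    return signals

ticket = Ticket(opened_at=datetime(2024, 3, 1, 9, 0), transfers=2, resolved=False)
print(high_effort_signals(ticket, now=datetime(2024, 3, 4, 9, 0)))
# ['transferred 2+ times', 'open 48+ hours unresolved']
```

In practice these rules would run on a schedule or on ticket-update events, and a non-empty signal list would route the case to proactive outreach rather than waiting for a survey response.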
Aggregate CES scores hide important variation. Break your scores down by channel, touchpoint, customer segment, and issue type.
This segmented view reveals specific, actionable opportunities that aggregate scores obscure.
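A minimal sketch of such a breakdown, assuming each response is tagged with a segment (here, the support channel; the data is invented for illustration):

```python
from collections import defaultdict

# Hypothetical (score, channel) response records.
responses = [
    (7, "self-service"), (6, "self-service"), (6, "self-service"),
    (3, "phone"), (2, "phone"), (5, "phone"),
    (6, "chat"), (4, "chat"),
]

def ces_by_segment(responses):
    """Percent of 'easy' responses (5-7) per segment."""
    buckets = defaultdict(list)
    for score, segment in responses:
        buckets[segment].append(score)
    return {
        seg: round(100 * sum(s >= 5 for s in scores) / len(scores), 1)
        for seg, scores in buckets.items()
    }

print(ces_by_segment(responses))
# {'self-service': 100.0, 'phone': 33.3, 'chat': 50.0}
```

An aggregate of these three channels would look mediocre; the breakdown shows the phone channel is where the friction actually lives.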
The most transformative use of CES is as a design tool for process improvement. When you are redesigning a customer-facing process---onboarding, support, purchasing, account management---use CES as the primary success metric.
Before the redesign, measure the CES of the current process. After the redesign, measure again. If CES improved, the redesign succeeded. If it did not, iterate. This approach keeps the focus on the customerβs experience of the process rather than internal metrics like handle time or cost per interaction that may or may not correlate with what the customer cares about.
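Using the percentage-of-easy-responses method, the before/after check reduces to a comparison of two samples (the scores below are hypothetical):

```python
def pct_easy(scores):
    """Share of responses on the 'agree' side (5-7) of the 7-point scale."""
    return 100 * sum(s >= 5 for s in scores) / len(scores)

before = [3, 4, 5, 2, 4, 5, 3, 6]   # pre-redesign onboarding responses
after  = [6, 5, 7, 5, 4, 6, 7, 5]   # post-redesign responses

print(pct_easy(before))  # 37.5
print(pct_easy(after))   # 87.5
print(pct_easy(after) > pct_easy(before))  # True -> redesign succeeded
```

With real survey volumes you would also want enough responses on each side for the difference to be meaningful rather than noise.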
CES is most powerful when it is not treated as a standalone metric but integrated into your broader feedback and measurement ecosystem.
Track CES alongside NPS and CSAT to build a three-dimensional view of your customer experience. Monitor effort trends over time to see whether your investments in process improvement are paying off. Share CES data with frontline teams so they understand which interactions customers find easy and which ones need work.
The goal is a culture where "Was that easy for the customer?" is a question that gets asked about every process, every policy, and every interaction design decision. When ease becomes a design principle rather than just a metric, the improvements compound across every touchpoint.
Your customers are not asking to be delighted. They are asking for it to be easy. The businesses that make things effortless earn loyalty that no amount of over-the-top service can match---because the easiest experience is the one customers come back to.
CustomerEcho tracks NPS, CSAT, and CES across every touchpoint---with AI analysis that identifies exactly where friction lives and case management that fixes it. Starting at $49/mo.