TL;DR
The CSAT indicates how satisfied customers are at the moment (support ticket, delivery, payment, training, etc.).
Classic question: "On a scale of 1 to 5, how satisfied are you?" (or stars / smileys).
Formula: CSAT = (number of positive responses / total number of responses) × 100. "Positive" generally means "Satisfied" or "Very satisfied".
Read the CSAT over time and by segment rather than as an isolated figure.
Why we use it (and why you should too)
At Edusign, we love metrics that keep us close to our customers. The CSAT helps you to:
Take the pulse right after a touchpoint.
Prioritize what to fix first (UX, delivery, support process).
Measure the impact of changes (new onboarding, new SLA, new copy).
Protect reputation by detecting issues before they escalate.
Edusign Tip: always add an open "Why?" question to turn scores into actions.
A little story
The 1-5 CSAT question you know today was popularized by the airline industry in the 1970s. Airlines wanted feedback just after landing, before passengers left the plane, hence the small buttons/cards to tap on the way out. Tech then standardized it... few remember that it started on planes!
How to calculate it (with a quick example)
Send a mini survey right after the interaction.
Count positive responses ("Satisfied" + "Very satisfied", or top boxes).
Apply the formula.
Example
1,000 responses: 300 "Very satisfied", 400 "Satisfied", 100 "Neutral", 200 "Not satisfied".
CSAT = (300 + 400) / 1,000 × 100 = 70%
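The worked example above can be sketched in a few lines of Python. The response labels and counts mirror the example; the `POSITIVE` set is an assumption you would adapt to your own survey scale:

```python
# Counts mirror the worked example above (1,000 responses).
responses = {
    "Very satisfied": 300,
    "Satisfied": 400,
    "Neutral": 100,
    "Not satisfied": 200,
}

# Assumption: the "top two boxes" count as positive.
POSITIVE = {"Very satisfied", "Satisfied"}

def csat(counts, positive=POSITIVE):
    """CSAT = positive responses / total responses * 100."""
    total = sum(counts.values())
    good = sum(n for label, n in counts.items() if label in positive)
    return good / total * 100

print(f"CSAT = {csat(responses):.0f}%")  # prints "CSAT = 70%"
```

Changing the `positive` set (e.g. only "Very satisfied") is how you switch between a lenient and a strict top-box read of the same data.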
What's "good"?
> 80% = very solid
~70%+ = good
< 50% = act fast to correct the experience
(As always: context matters, so follow your trend and segments.)
Where and how to ask
Channels: email, in-app, SMS, or... via the Edusign app.
Timing: as close to the event as possible for fresher memory.
Formats:
Yes/No ("Were you satisfied?")
Likert scale (Very unsatisfied to Very satisfied)
Stars or smileys
Scales 1-3, 1-5, or 1-10
Rounding: report the CSAT as a whole percentage (and, if useful, display the average score to 1 decimal place).
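The rounding rule above can be made concrete with a small sketch; the ratings list is made up for illustration, and a score of 4 or 5 on a 1-5 scale is assumed to count as positive:

```python
# Made-up 1-5 ratings, for illustration only.
scores = [5, 4, 4, 3, 5, 2, 4, 5]

# Assumption: 4-5 are the positive "top two boxes".
positive = sum(1 for s in scores if s >= 4)
csat_pct = positive / len(scores) * 100

print(f"CSAT: {round(csat_pct)}%")                        # whole percent: "CSAT: 75%"
print(f"Average score: {sum(scores) / len(scores):.1f}")  # one decimal: "Average score: 4.0"
```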
Best practices that really help
Pair with an open-ended "Why?" to capture the reason behind the score.
Segment before concluding (store A vs B, new vs existing, offer type, region, device).
Maintain a reasonable cadence (transactional at the event; a light periodic pulse check if needed).
Sample size matters โ avoid big decisions on a tiny base.
Tone & UX: fast, human, on-brand. A 10-second survey beats a novel.
From measurement to action (the real subject)
Very unsatisfied / Unsatisfied: respond quickly (ideally < 48 h), thank, understand, correct, then get back to them.
Neutral: ask "What could make this great?"; the answers often point to small UX or communication gains.
Very satisfied: say thank you, invite reviews/referrals, or offer early access to new features.
Edusign Tip: each CSAT drop opens an improvement ticket with an owner and a deadline. Follow-up matters more than the number.
Counterintuitive truths about the CSAT
A rising CSAT can hide churn: if demanding customers leave, your average rises... while the value you deliver decreases.
A nearly perfect CSAT is suspicious: a large sample at 98-100% usually signals collection bias or filtering.
Channel changes the score: email tends to score higher than phone; chat may score lower despite faster replies.
Timing biases: right after a goodwill gesture, scores run inflated; right after an incident, deflated.
Small UX improvements may not move the CSAT: what people don't notice won't register.
Formulation frames the response: "Are you satisfied?" yields more extremes than "How would you rate your experience?"
Mood and weather count, really.
Strengths & limitations
Why teams like CSAT
Simple to deploy & calculate.
High response rates (short & familiar).
Versatile across all touchpoints.
Actionable almost in real time.
Valuable (good scores instill confidence).
To keep in mind
Explains little on its own: it needs the accompanying comments.
Averages mask realities: segment by channel, persona, store, journey stage.
It measures a moment, not loyalty: combine with NPS (recommendation) and CES (effort).
Read your CSAT like a pro
General view: share good scores; mobilize teams when < 50%.
By segment: an 80% overall may hide a 98% in one store and 45% in another; fix the right one.
Individual alerts: auto-flag 1/5 and 5/5 to trigger recovery or advocacy paths.
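The segment and alert reads above can be sketched as follows. The store names, the flat `(segment, score)` format, and the 1/5 and 5/5 thresholds are illustrative assumptions:

```python
# Illustrative responses: (segment, score on a 1-5 scale).
responses = [
    ("store_a", 5), ("store_a", 5), ("store_a", 4), ("store_a", 5),
    ("store_b", 2), ("store_b", 1), ("store_b", 3), ("store_b", 5),
]

def csat_by_segment(rows):
    """Per-segment CSAT: share of 4-5 scores, as a percentage."""
    tallies = {}
    for segment, score in rows:
        total, positive = tallies.get(segment, (0, 0))
        tallies[segment] = (total + 1, positive + (score >= 4))
    return {seg: pos / tot * 100 for seg, (tot, pos) in tallies.items()}

def flag(rows):
    """Auto-flag extremes: 1/5 -> recovery path, 5/5 -> advocacy path."""
    return {
        "recovery": [seg for seg, s in rows if s == 1],
        "advocacy": [seg for seg, s in rows if s == 5],
    }

print(csat_by_segment(responses))  # {'store_a': 100.0, 'store_b': 25.0}
print(flag(responses))
```

The overall CSAT here would be 62.5%, yet one segment sits at 100% and the other at 25%: exactly the "fix the right one" situation described above.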
Improving the CSAT: an honest plan
Locate the issue: global vs. segment? device, language, SKU, carrier, agent?
Mine internal signals: support logs, sales notes, ops incidents, product analytics.
Ask customers directly: prioritize the top 1-3 fixes (within budget).
Ship & tell: deploy changes and communicate that you've listened; that alone can lift the CSAT.
Re-measure and close the loop.
Ready-made templates
Main question:
"On a scale of 1 to 5, how satisfied are you with [this experience]?"
Open question:
"What is the main reason for your score? (one or two sentences are enough)"
Thank you note (Very satisfied):
"Thank you for the 5/5! You've brightened our day. Would you agree to leave a quick public review or test our upcoming features in advance?"
Re-engagement (Unsatisfied):
"Thank you for your feedback; we want to fix this. Could you tell us a bit more? A team member will get back to you within 24-48 hours."
Fun facts (shine in meetings)
Airlines in the 1970s popularized the 1-5 CSAT format with buttons/cards at the exit after landing.
Smiley interfaces often boost response rates... but can skew scores upwards.
Morning surveys tend to score better than those at the end of the day.
Cultural norms influence the use of the top box (some cultures avoid the absolute maximum).
CSAT vs NPS vs CES
CSAT = satisfaction at the moment (after this interaction).
NPS = likelihood to recommend (relationship & loyalty).
CES = effort to accomplish a task (predicts future behaviors).
Use them together for a richer view.
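As a sketch of how the three reads differ, here is each computed on made-up survey data. The NPS convention assumed below is the standard 0-10 scale (promoters 9-10, detractors 0-6), and CES is shown as a simple average effort score on a 1-5 scale:

```python
# Hypothetical raw answers from three different questions.
csat_scores = [5, 4, 4, 2, 5]   # 1-5 satisfaction with this interaction
nps_scores = [10, 9, 7, 6, 3]   # 0-10 likelihood to recommend
ces_scores = [2, 1, 3, 2, 1]    # 1-5 effort (lower is better)

# CSAT: share of top-two-box answers, as a percentage.
csat = sum(1 for s in csat_scores if s >= 4) / len(csat_scores) * 100

# NPS: % promoters minus % detractors (standard convention, assumed here).
promoters = sum(1 for s in nps_scores if s >= 9)
detractors = sum(1 for s in nps_scores if s <= 6)
nps = (promoters - detractors) / len(nps_scores) * 100

# CES: plain average effort score.
ces = sum(ces_scores) / len(ces_scores)

print(f"CSAT {csat:.0f}% | NPS {nps:+.0f} | CES {ces:.1f}")
```

Note how the same customer base can look great on CSAT (80%) and flat on NPS (0): the moment was good, but loyalty is unproven, which is exactly why the three metrics complement each other.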
In a word
The CSAT is a conversation starter. Ask it often (and equitably), read the why, act quickly, and watch your curve head in the right direction.
At Edusign, it's one of our simplest compasses to stay close to you and improve continuously.

