How to Make Customer Satisfaction Surveys Fair for Agents


    Customer satisfaction surveys (CSATs) are incredibly helpful tools that measure how well your support team is doing. But when designed or interpreted poorly, they can feel more like a source of discouragement for agents than a tool for growth and targeted coaching.

    Let’s be honest: A few “Yes/No” questions don’t always paint the full picture of a conversation and its place in the customer journey.

    For managers, ensuring these customer surveys are set up to be fair, accurate, and insightful is key to supporting a motivated, high-performing team. And the average response rate for customer satisfaction surveys is around 33%—so the responses you receive really do matter.

    However, CSAT survey designs can unintentionally skew results, overlook context, and even misrepresent an agent’s skill. This is especially true because responses tend to come from customers with strong opinions at either end of the spectrum, from terrible to amazing.

    To get the most accurate understanding of agent performance, you should focus on the full customer experience (CX), not just the final interaction or resolution. Survey questions should be precise yet clear, and aim to gather information on the agent’s service—not outcomes beyond the agent’s control.

    What is a customer satisfaction survey? 

    As the name aptly explains, a customer satisfaction survey is designed to measure the respondent’s happiness after an interaction with a support team agent. It’s best practice to send satisfaction surveys immediately after the interaction, while it’s still fresh in the customer’s mind.

    After a phone call, customers may be asked to stay on the line to answer a few questions; after an online chat, a pop-up survey could appear on screen. The survey may involve open-ended, binary, or multiple-choice questions, depending on your survey methods.

    Typically, a CSAT score is collected by asking questions like:

    • How satisfied were you with your customer support experience today?
    • How helpful was the agent in resolving your issue?

    These customer satisfaction survey questions are simple and clear, and encourage a straightforward answer such as:

    • “Very dissatisfied” to “Very satisfied”
    • “1-5” on a number or Likert scale
    • Thumbs up or thumbs down, “Yes/No”, or other simple binary questions and responses

    Customer service satisfaction survey responses are a fast, low-effort method of getting a pulse on the interaction. The respondent gets to share how they feel, and the business gets an immediate look into how they perceive the support they received from a specific agent.
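
    To make the math concrete, here’s a minimal sketch of how a CSAT percentage is commonly calculated from 1-5 ratings, assuming the widespread convention that 4s and 5s count as “satisfied” (the function name is our own):

    ```python
    def csat_score(ratings):
        """Return CSAT as the percentage of satisfied (4-5) responses."""
        if not ratings:
            return 0.0
        satisfied = sum(1 for r in ratings if r >= 4)
        return 100 * satisfied / len(ratings)

    # Four of six ratings are a 4 or 5, so CSAT is ~66.7%.
    print(round(csat_score([5, 4, 2, 5, 3, 4]), 1))  # 66.7
    ```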

    For agents, this metric can provide validation for their performance or show what they could be doing better. The rating can act as an early warning system before further problems arise and, on the flip side, shine a light on consistent, excellent agent support.

    Unlike Net Promoter Score (NPS), which measures long-term brand loyalty and how likely the respondent is to recommend your product or service, CSAT is more instant. It’s a close-up snapshot of customer sentiment. By focusing on one moment in time, like a phone call, a chat, or an email thread, you can understand how well the interaction met customer expectations.

    Some satisfaction survey questions may try to gather more customer feedback from that specific respondent. These are typically qualitative, open-ended questions such as:

    • Is there anything we could’ve done better?
    • What is the reason behind your score?

    A blank text box or a drop-down box with pre-filled answers is provided for the customer to respond to these follow-up questions.

    Over time, CSAT patterns emerge and show what’s working well, or signal problem areas in the customer journey and highlight where improvements are needed. These customer sentiment trends can then inform everything from agent training to product feature updates.

    What’s the difference? CSAT vs. CES vs. NPS

    CSAT
    • Definition: Measures short-term satisfaction after a specific interaction or transaction.
    • Sample question: “How satisfied were you with your support experience today?”
    • When to use: Immediately after an interaction with support.
    • Best for: Evaluating agent performance and support quality.

    CES
    • Definition: Measures how easy or difficult it was for the respondent to resolve their issue.
    • Sample question: “How easy was it to get your issue resolved today?”
    • When to use: After process-based interactions or problem resolution.
    • Best for: Understanding friction points in the customer journey.

    NPS
    • Definition: Measures long-term loyalty by asking how likely a customer is to recommend your company.
    • Sample question: “How likely are you to recommend us to a friend or colleague?”
    • When to use: Periodically (quarterly, bi-annually, etc.).
    • Best for: Gauging brand perception and long-term advocacy.

    Why customer satisfaction surveys can be unfair for agents

    There are clear benefits to CSAT surveys, but the key is to maintain fairness and context when analyzing responses. Customer surveys are high-level and often answered in a rush. “Yes” and “No” answers are black and white, while the nuances of the entire interaction fall across a broad grayscale.

    Therefore, a low score shouldn’t be treated as the final word on an agent’s performance—especially if the issue was rooted in something outside their control. Unfair scores in certain situations can cause a morale drop amongst agents and, understandably, lower their motivation.

    6 factors beyond the agent’s control that skew CSAT

    #1. Customer support channels

    Agents who work in a particular channel (like phone calls, social media, live chat, or email) may face frustrated consumers a higher percentage of the time. CSAT scores, therefore, might reflect the nature of the channel rather than the quality of the customer support.

    To illustrate this, SmartInsights found that SMS and phone call support had the lowest levels of customer satisfaction, with only 40-43% feeling satisfied post-interaction.

    #2. Customer intent

    Some interactions are easy wins. Answering a simple FAQ, helping with a password reset, or providing a delivery date doesn’t take much time or effort, but results in satisfied customers in a matter of minutes. However, other interactions, like those involving complex policies or product malfunctions, may lead to escalations and carry a higher risk of dissatisfaction.

    #3. Timing

    It’s reasonable to send a follow-up email nudging a consumer to rate their service experience. But three follow-ups in a short span of time? That might cause minor annoyance and trigger a low CSAT score, even if the irritation had nothing to do with the quality of support. Surveys sent too quickly (like a prompt to leave a rating mid-conversation) or too late can also skew CSAT scores.

    #4. Product issues

    Agents are not software engineers or manufacturers, yet callers faced with a software outage or a product malfunction may direct their frustration at the agent. Even when a customer knows there is only so much an agent can do in situations like these, they often need a channel to vent and unfairly take it out on the agent.

    #5. Issue severity

    When a customer enters a conversation with an agent, they expect to leave the interaction with the problem solved in a timely manner. And when this doesn’t go the way they expected? Even if the agent engaged with empathy and did everything in their power, they will still likely leave a low CSAT score.

    Complex or unsolvable issues, especially those involving multiple transfers, drag scores down unfairly.

    #6. Customer mood

    Some people start the conversation frustrated and in a foul mood, biasing results before the interaction even begins. The most empathetic and well-trained agent will still likely receive low CSAT after interactions like these.


    3 examples of how skewed CSAT surveys impact agent performance

    While the goal of these customer experience surveys is to understand how to increase customer satisfaction, agents must also be considered. At the end of the day, skewed customer feedback can mislead managers about agent performance and damage support team morale. Here are just a few real-life situations to watch out for:

    • An agent handling billing disputes may consistently score lower than one answering FAQs.
    • A caller has been stuck waiting for the next available agent for 20 minutes and has already been transferred; by the time they reach the final agent, they’re irritated and impatient, no matter how helpful the agent is.
    • A delivery is late, and the buyer contacts support to ask why their package hasn’t arrived yet. The delay is due to a third-party carrier, yet the agent tracks the order, provides updates, and even proactively offers a goodwill credit. It’s still possible they will give a low customer satisfaction survey score due to the package not arriving when expected.

    Checklist: How to make CSAT fair for agents

    Now that you understand all the ways CSAT scores can be skewed and inaccurate, it’s time to make them as fair as possible for your hardworking agents. With the right customer satisfaction survey questions, you can find coaching opportunities, recognize top performers, and identify skill gaps.

    CSAT fairness checklist: 7 steps

    • Sample from different channels and intents
    • Ensure a minimum number of responses per agent
    • Normalize data by giving weightings to different sample areas
    • Avoid repetitive and narrow survey timing
    • Define how to handle edge cases and outliers 
    • Track raw vs. normalized scores for each agent
    • Review these criteria regularly

    1. Sample fairly across different channels and types of inquiries

    If you offer customer service support via phone, instant messaging, email, and social media, CSAT surveys should be presented in each of these channels.

    It’s more accurate to compare agent scores within the same channel. When comparing across channels, normalization of the data (more on this in step 3) is required to account for more challenging conversations and escalations.

    The same is true for the intent of the inquiry. To paint a more complete picture of customer satisfaction, different types of inquiries should be analyzed separately when looking at raw CSAT data.

    • Don’t directly compare email-only agents with phone agents, who often face more urgent queries and frustration.
    • Those reaching out to the refund team, or to membership support to cancel a subscription, are often already more frustrated than, say, those who simply need to change a password or have a product question.
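
    As a rough sketch of this segmentation, the snippet below groups responses by channel and intent before comparing agents. The field names and sample data are invented for illustration:

    ```python
    from collections import defaultdict

    # Each response: (agent, channel, intent, rating 1-5). Data is invented.
    responses = [
        ("ana", "phone", "billing", 2),
        ("ana", "phone", "billing", 4),
        ("ben", "email", "faq", 5),
        ("ben", "email", "faq", 4),
    ]

    # Group ratings so agents are only compared within the same segment.
    by_segment = defaultdict(list)
    for agent, channel, intent, rating in responses:
        by_segment[(channel, intent, agent)].append(rating)

    for (channel, intent, agent), ratings in sorted(by_segment.items()):
        pct = 100 * sum(r >= 4 for r in ratings) / len(ratings)
        print(f"{channel}/{intent} - {agent}: {pct:.0f}% CSAT")
    ```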

    2. Ensure a minimum number of responses per agent

    With a small sample of responses, one very high or low score can drastically affect an agent’s average CSAT.

    This leads to an inaccurate picture of performance and makes it appear that they are a top performer or underperformer when this isn’t the reality. Setting a minimum number, such as 20-30 responses, helps ensure you’re analyzing a pattern, not a small snapshot.

    • Some channels might face lower CSAT response rates. With instant chats, for example, people are often rushed and won’t even leave a rating. A handful of responses won’t show the full picture, so it’s better to collect more over time before drawing conclusions.
    • Let’s say an agent only has five CSAT responses, and two are low scores; one is from a customer in a bad mood, and another who didn’t understand the company’s policy. These ratings can drag their overall average down significantly, even if the majority of their work is excellent. 
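
    A minimal sketch of enforcing that threshold, assuming the 20-30 rule of thumb above (the function and constant names are our own):

    ```python
    MIN_RESPONSES = 20  # from the 20-30 rule of thumb above

    def reportable_csat(ratings, minimum=MIN_RESPONSES):
        """Return a CSAT % only once the sample is large enough, else None."""
        if len(ratings) < minimum:
            return None  # keep collecting before drawing conclusions
        return 100 * sum(r >= 4 for r in ratings) / len(ratings)

    print(reportable_csat([5, 5, 5, 1, 2]))    # None: five responses isn't a pattern
    print(reportable_csat([5] * 18 + [1, 2]))  # 90.0: enough data to evaluate
    ```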

    3. Normalize scores with weighting

    Scores can be normalized by weighting, or assigning different values to the CSAT responses based on channels or issue type. This method brings more balance to comparing agent performance when harder-to-please scenarios are at play. 

    • With context, an 80% CSAT on escalations may be equal to or better than 90% on FAQs.
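
    Weighting can be implemented in several ways. One closely related approach that captures the same idea is to compare each agent’s raw score against a per-segment team baseline, so conversation difficulty is priced in. A minimal sketch, with invented baseline numbers:

    ```python
    # Invented team averages per segment; derive yours from historical data.
    SEGMENT_BASELINE = {"faq": 92.0, "billing": 75.0, "escalation": 68.0}

    def baseline_lift(segment_scores):
        """segment_scores: agent's raw CSAT % per segment.
        Returns the average lift (in points) over each segment's baseline."""
        lifts = [score - SEGMENT_BASELINE[seg] for seg, score in segment_scores.items()]
        return sum(lifts) / len(lifts)

    print(baseline_lift({"escalation": 80.0}))  # 12.0 points above the norm
    print(baseline_lift({"faq": 90.0}))         # -2.0 points below the norm
    ```

    On this view, 80% on escalations genuinely outranks 90% on FAQs, matching the example above.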

    4. Avoid repetitive or narrow survey timing

    To receive the most accurate customer feedback, it’s better to send the survey as soon as possible after an interaction. Sure, customers might still be emotional right after an interaction that escalated, but waiting too long can cause them to forget about it altogether.

    • If a customer receives an email survey a few days after their interaction with the agent, they may base their response on whether or not their issue was ultimately resolved and how long it took, rather than the quality of the support they received.
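
    One way to encode such timing rules, sketched with illustrative windows (the two-day freshness window and seven-day cool-off are assumptions, not standards):

    ```python
    from datetime import datetime, timedelta

    MAX_DELAY = timedelta(days=2)  # illustrative: survey while it's fresh
    COOL_OFF = timedelta(days=7)   # illustrative: at most one nudge per week

    def should_send_survey(interaction_closed_at, last_survey_sent_at=None, now=None):
        """Send at most one survey, shortly after the interaction closes."""
        now = now or datetime.now()
        if last_survey_sent_at is not None and now - last_survey_sent_at < COOL_OFF:
            return False  # recently surveyed; repeat nudges breed annoyance
        if now - interaction_closed_at > MAX_DELAY:
            return False  # too late; the interaction is no longer fresh
        return True
    ```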

    5. Define rules for edge cases and outliers

    There’s so much outside an agent’s control, which is why it’s important to define outliers and edge cases in CSAT responses. These are customer responses that are significantly higher or lower than the agent’s average scores, and they should be reviewed before they distort the metric.

    • Those who have repeat issues, multiple open tickets, or have been passed between agents may respond to multiple customer satisfaction surveys. If there are several similar responses from the same person, especially low ratings, review them manually before adding them to the agent’s average, because they may not reflect the most recent interaction.
    • Agents don’t write policies, but they are responsible for communicating them. If a low CSAT score was given to an agent after enforcing a strict policy, like denying a refund, this could be tagged as a “policy-related” outlier.
    • For follow-up customer feedback survey questions that allow open-ended responses, emotionally charged words can be monitored and tracked, for example with AI-powered quality assurance that reviews all conversations. While phrases like “disappointed” or “I didn’t appreciate” can be part of constructive feedback, words like “furious”, “stupid”, or “ridiculous” should trigger a manual review, as these emotionally driven responses are often not the best representation of the agent’s support.
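
    A simple sketch of flagging rules covering the three cases above; the keyword list, tag name, and thresholds are illustrative starting points, not a standard:

    ```python
    # Illustrative starting points; tune the list and rules to your data.
    CHARGED_WORDS = {"furious", "stupid", "ridiculous"}

    def needs_manual_review(comment, tags, prior_surveys_from_customer):
        """Flag a response for review before it counts toward an average."""
        text = comment.lower()
        if any(word in text for word in CHARGED_WORDS):
            return True  # emotionally charged; may not reflect the agent's support
        if prior_surveys_from_customer > 0:
            return True  # repeat respondent; may be rating an earlier interaction
        if "policy-related" in tags:
            return True  # agent enforced a policy they didn't write
        return False

    print(needs_manual_review("This is ridiculous!", [], 0))        # True
    print(needs_manual_review("Helpful and quick, thanks", [], 0))  # False
    ```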

    6. Track raw vs. normalized scores

    There’s a place for analyzing raw and normalized CSAT scores. Both, when viewed together, give a fuller, fairer picture of agent and support team performance. Raw scores show how customers rated an agent overall, and normalized scores adjust for the context, such as support channel or issue type. 

    • To compare CSAT scores across teams, channels, and inquiries, the data should be normalized.
    • Raw data is great for tracking individual agent performance over time.
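
    A short sketch of reporting both views side by side, reusing the baseline idea from step 3 (all numbers are invented):

    ```python
    # Invented per-segment baselines and raw scores.
    BASELINE = {"faq": 92.0, "escalation": 68.0}
    agents = {
        "ana": {"escalation": 80.0},  # raw CSAT % per segment
        "ben": {"faq": 90.0},
    }

    for name, segments in agents.items():
        raw = sum(segments.values()) / len(segments)
        lift = sum(s - BASELINE[seg] for seg, s in segments.items()) / len(segments)
        print(f"{name}: raw {raw:.0f}% | normalized {lift:+.0f} pts vs. baseline")
    ```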

    7. Review criteria regularly

    CSAT surveys shouldn’t be sent at random, and neither should your reviews of them. Stick to a consistent schedule, whether monthly or quarterly, to stay on top of outliers and recognize patterns before they escalate. Regular reviews also help catch ratings related to things like outdated policies or shifting customer expectations.

    • For example, expectations for response time may shift over time. What was acceptable a year ago might now feel too slow, even if the agent followed the previous standards.


    For targeted coaching and better CSAT, prioritize fairness and context

    Customer service satisfaction surveys are key to understanding the customer experience and supporting agent growth. However, they’re only accurate when analyzed in context.

    Without accounting for factors like channel type, outliers, sample size, and inquiry type, scores can misrepresent an agent’s performance. To get the most value from satisfaction surveys, companies should design their evaluation processes with fairness in mind to better support agent growth and team morale.

    Once you’re sure you’re following best practices for survey questions and fair analysis, you can start delivering targeted coaching sessions based on truly accurate CSAT data.




      FAQ: How to Make Customer Satisfaction Surveys Fair for Agents

      What’s a good CSAT score for contact centers?

      The industry standard for a good contact center CSAT score falls between 70% and 85%, showing that a majority of respondents were happy with the customer service they received.

      -Above 85% is amazing: Review what’s working and keep doing it.
      -Below 70% means there is likely a skill gap: It’s time to review your service approach.

      Note that this number may vary by industry; an e-commerce support team might aim for 75-80%, while a FinTech support team would likely try to push for 85% or higher. 

      Should I use CSAT, CES, or NPS after support interactions?

      CSAT, CES, and NPS can be used together to provide a detailed view of agent performance. CSAT and customer effort score (CES) are more immediate and close-up, showing whether the customer’s experience with the agent was satisfying and low effort.

      Net Promoter Score (NPS) zooms out and gives insight into long-term customer loyalty. It can be reviewed, say, quarterly, and compared to CSAT and CES trends to understand how individual customer experiences impact brand loyalty over time.

      How many CSAT responses per agent are needed for a fair comparison?

      A common rule of thumb for CSAT is to have at least 20-30 responses for an agent before using the score for performance evaluation and drawing conclusions. This helps reduce the unfair impact of outliers and ensures that any customer feedback data reflects a reliable average.

      Generally, the more responses, the more accurate the overview is. When looking at CSAT responses for something like requiring additional training or giving a bonus, aim to analyze a higher number of responses, such as 50-100+.
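
      A quick sketch of why the threshold matters: the same two unlucky 1-star ratings sink a five-response average far more than a thirty-response one.

      ```python
      def csat(ratings):
          """Percentage of ratings that are a 4 or 5."""
          return 100 * sum(r >= 4 for r in ratings) / len(ratings)

      print(round(csat([5] * 3 + [1] * 2), 1))   # 60.0: looks like an underperformer
      print(round(csat([5] * 28 + [1] * 2), 1))  # 93.3: same two outliers, fairer picture
      ```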

      Do CSAT surveys provide product feedback and influence feature development?

      The primary focus of a CSAT survey is service quality, but responses to open-ended questions often provide valuable product feedback.

      Mentions of bugs, usability issues, missing product features, and more may pop up when respondents are rating their support experience. And, if you tag and review these responses, your product department and engineering teams might be able to identify recurring themes.

      They can then use this feedback to help steer future product development efforts and prioritize product offerings that directly address pain points and drive stronger CSAT results.

      What is the Likert scale for CSAT surveys?

      A five-point Likert scale question gives respondents a simple way to express their satisfaction or dissatisfaction. Likert scale questions typically offer response ranges such as:

      -“Very dissatisfied” to “Very satisfied”
      -1-5 (or another number range)
      -Or even a sad emoji to happy emoji scale

      Each point on the rating scale, whatever form it takes, represents the intensity of customer sentiment in the response. This helps capture more nuance beyond just “Yes/No” so you can perform a little sentiment analysis and better understand service quality.
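
      A small sketch of how such labels are commonly mapped to numbers for scoring (the exact wording is up to you):

      ```python
      # Map five-point Likert labels to 1-5 scores; labels follow the examples above.
      LIKERT = {
          "Very dissatisfied": 1,
          "Dissatisfied": 2,
          "Neutral": 3,
          "Satisfied": 4,
          "Very satisfied": 5,
      }

      responses = ["Very satisfied", "Neutral", "Satisfied"]
      ratings = [LIKERT[r] for r in responses]
      print(f"CSAT: {100 * sum(r >= 4 for r in ratings) / len(ratings):.0f}%")  # 67%
      ```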
