Data shows that 99% of customer service experts believe that customer expectations are higher than ever. To match these expectations, you need a game plan. Customer service quality assurance is the practice of measuring quality through conversation reviews, then improving (or maintaining) performance with the results. Any ambitious support team should plan out its QA program.
Most customer service teams track performance metrics to understand how their team is operating. The customer satisfaction score (CSAT) measures customer satisfaction, the first response time (FRT) measures speed, the net promoter score (NPS) measures customer loyalty, and so on. But metrics like these don't measure the quality of your customer service itself.
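For readers who like to see the arithmetic, here is a minimal sketch of how these three metrics are commonly calculated. The response data, the 4-or-5 CSAT threshold, and the variable names are illustrative assumptions, not a prescribed standard:

```python
from datetime import timedelta

# Hypothetical survey responses: CSAT ratings on a 1-5 scale,
# first-response durations, and NPS answers on a 0-10 scale.
csat_ratings = [5, 4, 2, 5, 3, 4]
first_response_times = [timedelta(minutes=m) for m in (4, 12, 7, 45, 9)]
nps_answers = [10, 9, 8, 6, 9, 3, 10]

# CSAT: share of "satisfied" responses (commonly ratings of 4 or 5).
csat = 100 * sum(r >= 4 for r in csat_ratings) / len(csat_ratings)

# FRT: average time until the first reply.
frt = sum(first_response_times, timedelta()) / len(first_response_times)

# NPS: % promoters (9-10) minus % detractors (0-6).
promoters = sum(a >= 9 for a in nps_answers)
detractors = sum(a <= 6 for a in nps_answers)
nps = 100 * (promoters - detractors) / len(nps_answers)

print(f"CSAT: {csat:.0f}%  FRT: {frt}  NPS: {nps:.0f}")
# CSAT: 67%  FRT: 0:15:24  NPS: 29
```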
Teams are increasingly tracking their Internal Quality Score (IQS).
This metric is different from other support metrics because it evaluates how your team performs according to company standards. A customer may be satisfied with a conversation, but that doesn’t mean that the agent’s knowledge was accurate or that they followed protocol. Response times may be fast, but that doesn’t mean that the agent thoroughly solved the customer’s issue.
Similarly, if something goes wrong, these metrics often don't point to why. Was it the fault of your customer service team? Was the customer's gripe with the product itself? Were they unhappy with the communication channel?
To give the best feedback to your team and to understand what your customers want, you need the details, and reviewing the right conversations can give you the insights you need.
A customer service QA program is the way to find out your IQS. If you conduct regular conversation reviews and use the results to provide actionable feedback to your team, your agents will measurably improve according to internal standards. This will subsequently affect your CSAT, CES, FRT, and so on.
Let’s walk through the setup to find out how it works.
The first question you need to tackle is who should do the reviewing. Your goals and support team structure determine which format is the golden ticket for your company. For example, if you have a large team and want to improve overall support processes, then having a dedicated QA specialist is crucial. However, if you're a smaller team more intent on sharing knowledge, you'll want to look into peer reviews.
There are four distinct review formats, in order of popularity:
1. Manager or team lead reviews
Over half of the teams that conduct reviews use this format. Logically, the responsibility for team performance usually falls on managers or team leads. Reviewing helps a manager understand the strengths and weaknesses within their team and see its processes in a broader context.
Pros: regular feedback, aligned scores (only one reviewer).
Cons: detracts from managerial responsibilities.
2. Dedicated QA specialist reviews
If your team is large (and lucky) enough to have a QA specialist, this is the golden format. Having a specialist dedicated to measuring and improving quality makes it a top priority company-wide, rather than a task that slips down a to-do list full of other responsibilities.
Pros: expert analysis of trends and detailed reporting; doesn’t detract from other responsibilities.
Cons: not an option for smaller teams.
3. Peer reviews
If you have set team-wide targets and can conduct regular calibration sessions to align your whole team on support goals, this is an excellent format in which shared knowledge promotes high standards.
Pros: creates an open, collaborative feedback culture; time-saving.
Cons: variable scores; reviewers are less likely to score negatively.
4. Self-reviews
This is the best way to encourage professional growth. It's very rare for teams to adopt this as their only QA review format; however, it's perfect for occasional performance reviews.
Pros: excellent for personal growth.
Cons: ineffective as a solo format.
Some teams incorporate a combination of several formats, as each serves the team differently. For example, a manager may conduct reviews during periodic assessments of team member performance, while a QA specialist reviews continuously to provide a higher-level analysis of support processes.
Agent performance is directly linked with the success of the team. Regular feedback is also invaluable for building strong relations between management and employees, motivating them to do their best.
A customer service scorecard is a form filled out for each review that rates agents on how well or how poorly they handled customer service interactions.
Through scorecards, your feedback is measurable, concrete, and uniform, so that you can hold agents accountable or pinpoint weak processes to create improvement plans. Ideally, you use customer service QA software that gives you fully customizable and flexible scorecards. For example, you may wish to create a different scorecard per team, or per workspace.
To create a scorecard, you need to determine your categories, meaning what matters to your team and what your goals are. Here are some common rating categories (for inspiration):
Empathy
Product knowledge
Grammar
Communication
Tagging
You can include as many categories as you like to provide thorough feedback, but be aware that there is virtue in conciseness. In other words, having to score conversations across 10+ categories is very resource-intensive, and few teams have the time and patience to do it.
There are also different rating scales to choose from. You can grade each category with a simple thumbs up/thumbs down, opt for scoring out of ten, or anything in between. A more granular rating scale provides more detailed feedback; however, it requires calibration sessions among reviewers to align scoring. The right answer is whatever works for your team.
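To make the mechanics tangible, here is a minimal scorecard sketch and the Internal Quality Score (IQS) it feeds. The five categories come from the list above; the 1-5 scale and the averaging formula are illustrative assumptions rather than a fixed standard:

```python
SCALE_MAX = 5  # assumed rating scale: 1 (poor) to 5 (excellent)
CATEGORIES = ["Empathy", "Product knowledge", "Grammar", "Communication", "Tagging"]

def review_score(ratings: dict[str, int]) -> float:
    """Score one conversation review as a percentage of the maximum."""
    rated = [ratings[c] for c in CATEGORIES if c in ratings]
    return 100 * sum(rated) / (SCALE_MAX * len(rated))

def iqs(reviews: list[dict[str, int]]) -> float:
    """Internal Quality Score: the average of all review scores."""
    return sum(review_score(r) for r in reviews) / len(reviews)

reviews = [
    {"Empathy": 5, "Product knowledge": 4, "Grammar": 5, "Communication": 4, "Tagging": 3},
    {"Empathy": 3, "Product knowledge": 5, "Grammar": 4, "Communication": 3, "Tagging": 5},
]
print(f"IQS: {iqs(reviews):.1f}%")  # IQS: 82.0%
```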
Customers judge every interaction, so every interaction counts. That doesn’t mean you have to review a high percentage of conversations. Many choose to review at random, while others simply opt for the most recent interactions. I recommend being a little more scientific in your method of picking conversations to review.
Follow the principle of reviewing smarter, not harder. Making your review strategy as productive as possible requires thinking about which conversations are most valuable to review.
Example #1
A customer contacts your support team with a relatively common complaint.
The support agent easily understands and fixes the issue.
The customer leaves happy. Problem solved.
Example #2
An annoyed customer contacts your support team.
The issue they encountered is rare: it’s not a problem that the agent has come across previously.
The support agent requires internal help from a support engineer, which entails a much longer exchange.
The second example is a far more worthwhile conversation to review. It showcases potential product hiccups, how well internal processes are working, and agent performance. In complex conversations, there is learning potential whether the customer walks away happy or dissatisfied.
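As a rough illustration of reviewing smarter, a team might rank conversations by complexity signals instead of sampling purely at random. The fields, thresholds, and weights below are hypothetical; real QA tools derive similar signals automatically:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    id: str
    message_count: int   # long exchanges suggest complexity
    participants: int    # more than two implies internal escalation
    csat: int | None     # post-conversation rating, if any

def review_priority(c: Conversation) -> int:
    """Higher score = more worth reviewing (illustrative weights)."""
    score = 0
    if c.message_count > 10:
        score += 2
    if c.participants > 2:   # e.g. a support engineer was pulled in
        score += 2
    if c.csat is not None and c.csat <= 2:
        score += 3           # unhappy customers first
    return score

conversations = [
    Conversation("t-101", message_count=4, participants=2, csat=5),      # Example #1
    Conversation("t-102", message_count=18, participants=3, csat=None),  # Example #2
]
queue = sorted(conversations, key=review_priority, reverse=True)
print([c.id for c in queue])  # ['t-102', 't-101']
```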
Setting up a QA program is only half the battle. The aim of quality assurance is, of course, to improve quality. This is an ongoing process as your product evolves, your team changes, customer expectations increase, etc.
If your program runs effectively, reviews will identify areas of weakness within the support team. The resulting feedback should influence training and coaching initiatives on both an individual and a team basis. Employees who receive regular training are more engaged, and companies that invest in training see 24% higher profits than those that don't.
Reviews will likely also identify areas of weakness outside the support team. With effective reporting, feedback can influence product and development team efforts, making them more customer-centric. To close this feedback loop, you need the right tool to help you join the dots.
Setting up a QA program can theoretically be done manually. This requires someone to:
Create and maintain spreadsheets to track scores.
Search through conversations to find the best fit for your purposes.
Create reports from scratch to track metrics.
Ensure agents, reviewers, and managers stay on track.
However, this is clunky, time-consuming, and in no way scalable.
The right tool will integrate with your ticketing system to centralize the entire customer service assessment process. An effective QA support tool should provide natural language processing (NLP) to locate the most complex conversations and segment them by customer sentiment. It should also enable data analytics, such as interactive visualizations, so you can understand the bigger picture of conversation performance and drill down into the metrics. Last but not least, it should automate QA workflows, such as assigning specific conversations to specific reviewers.
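The assignment step, at its simplest, is just distributing a filtered set of conversations across reviewers. Here is a round-robin sketch; the names and the pre-filtered ticket IDs are hypothetical stand-ins for what a QA tool would produce:

```python
from itertools import cycle

reviewers = ["dana", "lee", "priya"]
to_review = ["t-102", "t-205", "t-318", "t-407"]  # e.g. pre-filtered by sentiment

# Pair each conversation with the next reviewer in rotation.
assignments: dict[str, list[str]] = {r: [] for r in reviewers}
for conversation, reviewer in zip(to_review, cycle(reviewers)):
    assignments[reviewer].append(conversation)

print(assignments)
# {'dana': ['t-102', 't-407'], 'lee': ['t-205'], 'priya': ['t-318']}
```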
The most widely used metric in support teams is the customer satisfaction score (CSAT). And while measuring and tracking how satisfied customers are is essential, it doesn't show the full picture. Too few teams look at the wider causes of poor support quality, and they therefore miss out on opportunities to excel.
Expectations are an ever-moving target, so your team and approach must continuously adapt to meet them.
Customer service quality assurance, through conversation reviews, helps you understand the bigger picture behind performance and metrics. Reviewing, tracking metrics, analyzing the data, and feeding guidance back to your team is an ongoing process, but it is one that will pay off through higher customer satisfaction.
Author:
Grace Cartwright
Grace is Klaus’ content specialist, which means writing about everything customer service – with a few cat puns thrown in for good measure. She is usually in either Prague or Edinburgh, but often tries to make it to warmer climes with her dog and baby in tow!