Creating a quality assessment scorecard that suits your business is something not all Customer Support teams do. Some feel their team is too small, others do not see the value in assessing quality as a specific metric.
In my opinion, however, these scorecards are what make a support team reach new levels of high quality. If you want to change something for the better, you’ll want to measure it first.
While it sounds easy enough to put together a scorecard that allows you to say whether an interaction was good or not, in practice it’s not always that straightforward.
Why you want a scorecard
The name ‘scorecard’ is perhaps a little off-putting; it suggests that you’re keeping score, which in turn suggests that tracking quality is a competition. It really isn’t, and it’s a good idea to make it crystal clear what your goals are in keeping track. Your team will want to know that you aren’t looking to assign them a grade, but want to work together towards improving overall workflows.
A quality scorecard or rubric has two main goals:
- It takes the opinion out of the assessment; it’s no longer a team lead or a peer telling you that the interaction was good or bad. It becomes a template that everyone needs to follow. There will be less arguing about whether or not the assessment is fair, and more focus on what can be done to improve.
- Quality of work becomes a measurable data point. While we don’t want to over analyse all the work our agents do, it does give us the opportunity to coach them to bring their best work to the table.
The language you use around presenting these kinds of scorecards to your agents is important, especially if you have only just started implementing quality assessment. If you suspect that your agents will be hesitant about being scored, do call it a rubric rather than a scorecard.
Define your scope
Your first step towards creating a scorecard that works for you is to define what quality looks like for your business. Which elements are truly important to the work that your agents are doing?
You can’t track good quality if you don’t know what quality looks like for the business you are in. Likewise, relying only on CSAT scores to determine whether your team is doing a good job is setting yourself up for a nasty surprise in the long run.
Good questions to ask yourself before you start constructing a rubric are:
- What are the industry standards for high quality in my field?
- What do our customers expect from us? (Pro tip: ask your high performers, they’ll know!)
- What do we, as the company, expect from our agents?
Choose your approach
There is no single way of doing quality assessment for a Customer Support team. How you choose to implement your kind of scoring depends on what works best for you and your team.
One approach is to literally score interactions on a scale. Examples of that could be:
- A scale of 1 to 10, where 1 is very poor and 10 is outstanding.
- A score of 0 or 1, where 0 equals “not present” and 1 equals “present”.
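To make the 0/1 approach concrete, here is a minimal sketch of a binary scorecard in Python. The criteria names are invented for illustration; a real rubric would use the quality elements you defined in the scoping step above.

```python
# A minimal sketch of a binary (0/1) scorecard.
# Each criterion is marked 1 ("present") or 0 ("not present");
# the overall score is the share of criteria that were present.
# Criteria names below are hypothetical examples, not a standard.

CRITERIA = [
    "greeted_the_customer",
    "identified_the_issue",
    "gave_a_correct_solution",
    "used_the_right_tone",
    "closed_the_loop",
]

def score_interaction(marks: dict) -> float:
    """Return the percentage of criteria marked as present."""
    present = sum(marks.get(c, 0) for c in CRITERIA)
    return 100.0 * present / len(CRITERIA)

# Example review of a single interaction:
review = {
    "greeted_the_customer": 1,
    "identified_the_issue": 1,
    "gave_a_correct_solution": 0,
    "used_the_right_tone": 1,
    "closed_the_loop": 1,
}
print(score_interaction(review))  # 80.0
```

Even a simple structure like this makes the trade-off visible: the number is easy to track, but on its own it tells the agent nothing about *why* a criterion was marked absent.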
The danger in using numeric scoring is the tendency for some people to game the system. If they know they need to hit a certain score, they’ll either put their energy into calculating the bare minimum they need to do, or drain your energy by debating the score they’ve received.
Another reason why I’m not a big proponent of numeric scoring is that it doesn’t encourage self-directed growth. It teaches the agent little about why you scored the way you did, and chances are they’re going to look at it once and then forget what they need to be doing to get better.
Text based scoring
I’ve found that text-based scoring works best for peer-to-peer reviews. Rather than having a team offer quality assessments, you have agents within the same team review each other’s work.
This leads to a couple of results that are well worth the effort:
- Team members learn from each other. Whether it’s product knowledge, a way of phrasing an answer, or a policy that was a learning point – there’s always something you can learn from your colleagues.
- The reviewer needs to think about quality specifically. In order to offer a review, they’ll need to be aware of what your company’s standards for quality are. This is knowledge they’ll take back to their own work, killing two birds with one stone.
Choose your platform
While there is no absolute need to choose a platform over simple spreadsheets to keep track, working with a platform built by a team that is as dedicated to high-quality work as you are does have huge benefits.
To start with, you will not have to reinvent the wheel when it comes to analysis. You will have ready-made reports at your beck and call whenever you need them.
I won’t go into detail about the particular platforms that are available in this post; there are several dedicated quality assurance tools for support teams, and which one fits will depend on your helpdesk stack and budget.
Review your process over time
Don’t ‘set and forget’ – the rubric you set up should be as flexible as you expect your team to be. Aim for reviewing your rubric or scorecard at least once a quarter.
One of the important things to listen to is your team’s opinion. Do they think the scoring is fair, and true to the work they’re doing? Do they feel anything needs scrapping, or are there elements missing from the rubric? They’re doing the work, so they know – in particular your high performers.
What does your scorecard look like?
If your business is using a quality scorecard to assess the team’s work, what is one thing that you think will work for others? Leave a comment for others to learn from!