Measuring the Effectiveness of Agent Training

Published: October 30, 2012

Just because agents get through training doesn’t mean that training gets through to agents. Training in the contact center isn’t about agents merely completing sessions, modules and exercises; it’s about them gaining and applying new skills and knowledge that drive high performance.

Without viable ways to measure the efficacy of agent training, contact centers risk wasting valuable time and resources – and losing invaluable customers. Yet many centers still “train for training’s sake”, with little attention paid to the impact said training has on agent performance and sanity, service levels, revenue and customer satisfaction.

How exactly do the most successful contact centers go about measuring training effectiveness? Following are some of the best practices I’ve seen embraced by true “learning organizations” in the customer care arena.

Written training tests. To ensure that key information provided during training sinks in, top contact centers develop written tests/assessments and administer them to all agents. Many centers have trainees take such tests before any training is provided, as this helps to measure base-level knowledge and proficiency. Similar tests are then administered just after training – to measure training comprehension and initial skill/knowledge absorption. Then, weeks or even months later, agents are assessed again to measure information retention and to make sure the constant barrage of customer insults hasn’t damaged their brains.
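For centers that want to put a number on this, the basic arithmetic is simple: compare each agent’s pre-training score against the post-training and follow-up scores. Below is a minimal sketch of that comparison in Python; the agent IDs and scores are made up for illustration, and nothing here is tied to any particular testing platform.

```python
# A minimal sketch of the pre/post/retention test comparison described above.
# All agent IDs and scores are hypothetical placeholders.

from statistics import mean

# Each record: agent ID mapped to (pre-training, post-training, follow-up) test scores.
scores = {
    "agent_001": (62, 88, 81),
    "agent_002": (70, 92, 90),
    "agent_003": (55, 79, 64),
}

def training_lift(records):
    """Average gain from the pre-training test to the post-training test."""
    return mean(post - pre for pre, post, _ in records.values())

def retention_drop(records):
    """Average points lost between the post-training test and the follow-up test."""
    return mean(post - follow_up for _, post, follow_up in records.values())

print(f"Average lift: {training_lift(scores):.1f} points")
print(f"Average retention drop: {retention_drop(scores):.1f} points")
```

A healthy program shows a clear lift and only a modest retention drop; a steep drop is the cue to schedule refresher training.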

On-the-job training assessments. OTJ training assessments are performance evaluations designed to measure the application of specific skills and knowledge that agents learned during training. As with written tests, many contact centers first conduct such assessments (via role-play or simulation exercises) prior to any training taking place, thus gauging base-level skills. The real OTJ assessments are carried out after agents have received training. Here, agent performance is evaluated during interactions with actual rather than imaginary customers. Periodic OTJ assessments may follow to ensure that agents are still applying the skills/knowledge in question months down the road, assuming the Marketing department hasn’t stolen said agents from the contact center yet.

Agent feedback. Measuring training success isn’t all about tests and assessments. Conversations with agents themselves can be just as valuable as post-training grades and scores, if not more so. Soliciting agent feedback after training often sheds ample light on why certain elements of training fail while others fail even more.

Smart contact centers ask agents which training programs and delivery methods they found most useful and engaging, which they found to be superfluous, and which ones made them want to hurt themselves or others. These centers don’t just collect agent input; they act on it – with managers making improvements based on agents’ suggestions, then taking credit for them during meetings with execs.
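If that feedback is captured as simple ratings, ranking delivery methods takes only a few lines. The sketch below assumes a hypothetical 1–5 usefulness rating collected per delivery method; the method names and numbers are placeholders, not anyone’s actual survey data.

```python
# A minimal sketch of tallying agent feedback by training delivery method.
# The responses and method names below are hypothetical.

from collections import defaultdict
from statistics import mean

# Each response: (delivery method, usefulness rating on a 1-5 scale).
responses = [
    ("classroom", 4), ("classroom", 3),
    ("e-learning", 2), ("e-learning", 3),
    ("side-by-side coaching", 5), ("side-by-side coaching", 4),
]

ratings = defaultdict(list)
for method, rating in responses:
    ratings[method].append(rating)

# Rank delivery methods from most to least useful, as rated by agents.
for method, vals in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{method}: average usefulness {mean(vals):.1f} ({len(vals)} responses)")
```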

Customer feedback. Agents aren’t the only people contact centers should be paying attention to when it comes to gauging training success. Out of the mouths of customers often come comments that can tell you a lot about how much a recent training initiative bombed.

Via C-Sat surveys, call recordings and analytics tools, savvy managers keep an eye/ear on customer sentiment in areas for which agents have recently received training. For instance, if an agent who has just completed a module on Phone Etiquette receives numerous customer comments regarding how rude and abrupt the agent was on the call, it’s safe to say the training was highly ineffective. That said, it could also simply be that the agent in question is a sociopath, in which case he or she should be moved into the IT department immediately.
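One straightforward way to operationalize this is to split an agent’s survey scores around the training date and compare the averages. The sketch below uses hypothetical survey records and a hypothetical module-completion date; in practice the scores would come from the center’s C-Sat or analytics platform.

```python
# A minimal sketch of comparing C-Sat before and after a training date.
# The survey records and cutoff date are hypothetical placeholders.

from datetime import date
from statistics import mean

training_date = date(2012, 10, 1)  # hypothetical module-completion date

# Each record: (survey date, C-Sat score on a 1-5 scale) for one agent's calls.
surveys = [
    (date(2012, 9, 12), 3), (date(2012, 9, 25), 2),
    (date(2012, 10, 8), 4), (date(2012, 10, 20), 5),
]

before = [score for when, score in surveys if when < training_date]
after = [score for when, score in surveys if when >= training_date]

print(f"C-Sat before training: {mean(before):.2f}")
print(f"C-Sat after training:  {mean(after):.2f}")
```

If the post-training average sits flat or drops, it’s time to revisit the module – or, per the above, the agent.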

 
