In the current business climate, where every dollar spent needs to be justified, proving ROI on your internal training programs is more important than ever. It’s not enough to show that your program is helping employees learn new skills; you need to prove that it’s having a positive effect on the company’s bottom line.
But how do you put a dollar amount on a program whose effects seem largely unquantifiable? The struggle is real: a LinkedIn survey found that demonstrating ROI is a major challenge for talent development teams.
We want to help. We’re going to walk you through three of the most widely used frameworks for evaluating the impact of your training programs. Using one of these models, or a combination of all three, will help you prove the value of your programs to the people who hold the purse strings.
The Kirkpatrick Model is one of the most commonly used methods for evaluating the effectiveness of training programs. You can use it to connect numerical indicators of ROI, like sales numbers or retention rates, directly back to the skills learned during training.
It’s not enough to simply tell your superiors that sales increased after training; you need to prove that the increase in sales was the result of your training. With the Kirkpatrick Model, you can create a chain of evidence—a path that shows exactly how the training you instituted led to an increase in sales volume.
The model uses four steps, or levels of analysis, to trace how learning leads to actionable results. Compile data at each of these points:

Level 1: Reaction. How did participants respond to the training?
Level 2: Learning. What knowledge and skills did they actually acquire?
Level 3: Behavior. Are they applying what they learned on the job?
Level 4: Results. What measurable business outcomes did the training produce?
Before you begin training, use the Kirkpatrick Model to map out the result you hope to see from your training program. Start at Level 4, then work backward to identify the steps that would be required to get there.
Let’s say we want to prove the ROI of a sales training program designed to help call center employees increase average order value (i.e., upsell customers). Before the course launches, we sit down and map out our model for evaluating the program:

Level 4 (Results): Average order value per call increases.
Level 3 (Behavior): Salespeople use the upselling techniques from the course on live calls.
Level 2 (Learning): Salespeople can demonstrate the upselling techniques in course assessments.
Level 1 (Reaction): Salespeople report that the course felt relevant and useful.
Now you’re ready to move forward with your training program, collecting the relevant data as you go. Later, when you find that cart sizes have increased by an average of 10% since the training, you can use the data you’ve collected to show clearly that the sales training was directly responsible.
A major shortcoming of the Kirkpatrick Model is that it stops just short of providing a true training ROI cost-benefit analysis. You can show that the training produced measurable results, but how does that stack up against the costs of running the program? To fix that, we can look to the Phillips Model, which builds on Kirkpatrick’s framework with a fifth level: ROI.
The five levels of the Phillips Model are based on Kirkpatrick’s, with a few tweaks that produce more data, and more context around that data, to ultimately determine ROI:

Level 1: Reaction
Level 2: Learning
Level 3: Application and implementation
Level 4: Impact
Level 5: ROI
To calculate training ROI, you would collect the data for levels 1-4, creating your chain of evidence, just like in the Kirkpatrick Model. The only major difference is that, instead of calculating one number for Level 4, you would attempt to capture impact figures for a variety of metrics and would convert them into a dollar value.
So, for our sales training example from the first section, let’s say we were able to connect our training program with a 10% increase in average order value across the company, translating to a $200,000 increase in sales over the next year.
That’s great. But impacts can also be negative. How much did it cost to run the training? Consider factors like course development time, instructor or facilitator fees, learning platform and materials costs, and the cost of employees’ time spent away from their regular work.
So, in this scenario, our total cost was $17,500, and our total benefit was $200,000.
The equation for calculating ROI is quite simple:

ROI (%) = (total benefit − total cost) ÷ total cost × 100
So our ROI is roughly 1,040%, an impressive number that shows we got a healthy return on the money we spent.
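To make the arithmetic concrete, here’s a minimal sketch of the Phillips Level 5 calculation in Python (the function name is our own; the figures are the example numbers from above):

```python
def training_roi(total_benefit: float, total_cost: float) -> float:
    """Return training ROI as a percentage: net benefit divided by cost."""
    return (total_benefit - total_cost) / total_cost * 100

# Our sales training example: $200,000 in benefits against $17,500 in costs.
roi = training_roi(200_000, 17_500)
print(f"ROI: {roi:.0f}%")  # prints "ROI: 1043%", i.e. roughly 1,040% rounded
```

Running the same function at intervals during the program, as suggested below, just means plugging in the benefit and cost figures accumulated so far.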
Remember that when it comes to calculating ROI, timing is everything. Don’t wait until the program is completed to start calculations or you may find that your ROI is smaller than expected or nonexistent. Instead, do the math at intervals throughout the training process so you can adjust training or implementation to make sure you’re getting maximum value.
Data-backed doesn’t always mean hard numbers. While the two methods above focus on gathering quantitative data that proves learning effectiveness, Brinkerhoff’s Method aims to gather qualitative evidence. Instead of calculating a dollar value for training, you’ll create compelling examples and case studies to help sway decision-makers toward favoring your programs.
Because Brinkerhoff’s Method focuses less on proof of learning and more on the actual impact of education, you don’t have to spend as much time compiling statistics and tying actions to outcomes. There’s no guesswork involved in isolating variables, because we’re looking only at outcomes.
The Success Case Method is a simple way to combine program analysis with case studies and storytelling. It works like this:

1. Pick the metric that defines success for your program.
2. Identify your most and least successful participants based on that metric.
3. Interview both groups about what worked, what didn’t, and how the training changed their behavior.
4. Turn what you learn into stories and recommendations for decision-makers.
So, to evaluate our sales training, we would start by picking our goal metric. Let’s stick with our example of average order size per call. While the average increase in cart size was 10%, five salespeople went above and beyond and boosted their numbers by upwards of 20%. There were also five salespeople who showed no improvement or even showed a drop in sales figures over that period.
So we sit down and hash it out with both sets of people. We ask them for their thoughts on the training, ask what worked and what didn’t, and then ask how that has affected their behavior.
From the top performers, we learn what they found most effective about the sales training and how it helped boost their performance. We come away with some impressive firsthand stories on how useful training was.
From the poor performers, we learn that they were confused by some of the sales techniques or that they’re implementing those techniques incorrectly. From that, we take away some concrete ideas for improving the program next time around. We pair that knowledge with analytics data pulled directly from the training program to see exactly where users struggled during training.
Putting all of that information together, we can tell a persuasive story of how our training is instilling confidence and knowledge in our top salespeople, as well as provide some concrete ideas for how to make our next round of training even more effective. When presented with confidence, these stories can be just as persuasive as numbers in a spreadsheet.
Still not sure how best to demonstrate the monetary benefits of quality training? We’ve developed our very own ROI calculator to help your L&D team forecast potential ROI. Simply plug in your numbers and we’ll do the math for you. We use this calculator internally, but for the first time, we’re making it available for anyone to use to calculate the impact of their training in various use cases, including:
Our ROI calculator is closely related to Level 5 of the Phillips Model of Evaluation, in that it will help you calculate a dollar-based ROI number. You can then use any of these three models to tie those numbers back to your learning initiatives.
Additionally, if you use Salesforce, 360 Learning has an integration that can help you measure ROI by drawing direct correlations between training and performance data, such as revenue or service tickets.
Which of these three methods is the most effective? That depends on your company, and the kind of data your decision-makers respond best to. Some may find personal narrative more compelling, whereas others require hard data to fuel their decisions. In most cases, a combination of both qualitative and quantitative data will be most persuasive.
When it comes to demonstrating ROI, it’s important to be proactive. By the time higher-ups are asking to see numbers, your program may already be on the chopping block. You should be measuring the effectiveness of your programs constantly and showing value whenever possible. That way, you can maintain company buy-in and keep your programs running.