
Instructional Design Element 5: Evaluation

🕑 5 minute read | Jul 30 2024 | By Richard Head, TTA Learning Consultant

There are two types of evaluation processes: Formative and Summative.

When most of us think about an evaluation, we’re thinking about a summative evaluation. We’re thinking about a summary of what worked and why, what didn’t and why, and what the recommendations might be for future activity.

A formative evaluation asks the same questions as the summative (final) evaluation you want to conduct, but it asks them in the early stages of a project. The final evaluation of a training initiative is all well and good, but if you haven’t thought about and defined early in the project what you want to measure and how you’ll measure it, your final evaluation could be a disaster. While we didn’t talk in detail about the formative evaluation process during the Analysis and Design stages, that’s where you’ll want to have those conversations. In many cases, your team has those conversations as you define learning needs and learning and performance objectives. My recommendation is to spend time documenting those early discussions so that you already have the outline of a final project evaluation.

A Simple Evaluation Rubric

Back in the day when I was fresh out of graduate school, I was responsible for a local initiative as part of a federal demonstration project. When it came time for the project to be evaluated, the contract evaluation firm did a “quick and dirty” high-level evaluation before launching into the more rigorous evaluation effort we’ll describe below.

The simple evaluation consisted of four measures: Process, Performance, Effectiveness, and Change.

  1. Process: This documents the process you laid out for the overall change initiative: what you planned to do and how you planned to do it.
  2. Performance: Did you do what you said you were going to do? Did you follow the process you set out in the beginning, or was it changed along the way? There are times when your best early efforts need to change based on feedback. If the process changed, did that have an impact on the overall project? How so? How do you know?
  3. Effectiveness: Did your process and performance produce the results you sought? How was effectiveness defined as part of the process, and was there success?
  4. Change: Did the results you produced really change anything in the long run? This is a key question because there are times when “the operation was successful but the patient died.” Yes, your project produced the results you said it would, but nothing much really changed as a result of your hard work.

So then, how do we do a better job of not only evaluating the training intervention but also evaluating whether the entire project was a success? In an earlier post, I talked about “ROI from training” being a short-sighted measure because training is an input, not an outcome. Training is designed to produce behavior change that happens back on the job, not just in the classroom.

Kirkpatrick’s Evaluation “Levels” of Measurement

Don Kirkpatrick was a professor at the University of Wisconsin and a past president of the American Society for Training and Development (now known as the Association for Talent Development—ATD). He created what’s now known as the Kirkpatrick Model of evaluation, which has four levels.

The four levels are simple but profound:

Level 1: Reaction: This measures whether the learners found the training likable, engaging, relevant, etc. Sometimes known as “smile sheets,” these evaluations give some basic information about the training event(s) but not the outcomes.

Level 2: Learning: This level measures what the learners learned, usually through some sort of final exam or assessment of new abilities. This level looks at knowledge, skills, and attitudes.

This is where evaluation gets interesting, because now we get at what’s really important: behavior change on the job and final results. Can the learners do something different on the job as a result of the training they received, and did it produce the results you sought?

Level 3: Behavior Change: This measures the behavior or performance change of learners outside of the classroom, back on the job.

Level 4: Results: This measures whether all of your hard work produced the results you said it would. Did on-the-job behavior change actually produce the results you hoped for?

Is There a Level 5?

There has been discussion over the years of yet another measure that goes beyond Kirkpatrick’s model. A “Level 5” measure asks whether there was any fundamental change in the business as a result of the change initiative that your training was part of. It could be that your training was a wild success, learners could perform differently and better on the job, and early business results were encouraging, only to find that, 6-12 months later, things were no better than before the change effort.

Level 5 aside, the paragraph above gets at a key point: only managers can ensure business results and behavior change. Trainers can work with managers following the training to assist with implementation, but it’s up to the manager to make sure employees have the kinds of support and reinforcement (incentives, systems, tools, etc.) that make the training “stick” and produce the intended results.

John Kotter, a prolific author and one of the “gurus” of change management, famously warned against underestimating the power of the corporate culture to kill the change. By that, he meant that it’s very easy for things to go back to the way they were before the change. Change is hard; it takes constant vigilance on everyone’s part to make it a success, and the easy path is to go back to the old ways because they are familiar and comfortable.

As I also mentioned in a previous post (“What’s the ROI from Training?” is the Wrong Question), it’s crucial for management to be involved:

In his book, “What Every Manager Should Know About Training, or ‘I’ve Got a Training Problem’ and Other Odd Ideas,” Robert Mager, one of the seminal thinkers in the learning and development field, said that trainers can only guarantee skill and confidence, not on-the-job performance. Managers, not trainers, must be held accountable for on-the-job performance using the new skills.

Mager said that “…although trainers can provide skills and self-confidence, only you (the manager) can provide the opportunity to perform, and only you can provide an environment that encourages and supports performance….” Every L&D professional should read that book, along with others in Mager’s L&D series referenced at the end of this post.

Sure, we can measure what learners can do differently at the end of a training event. But whether learners can perform differently back on the job is not a function of training; it’s a function of management.

Evaluation Points to the Future

Your final evaluation should look at more than what the training initiative did for learners. It should also measure what management did to support the learners in their new performance, and then consider what further efforts are needed to produce the long-lasting change that will propel the organization to higher performance.


See the entire Instructional Design Elements blog series

Resources
