Here we go again. To “Kirkpatrick” or not to “Kirkpatrick”, that is the question.
To be, or not to be, that is the question—
Whether ’tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
And by opposing, end them? To die, to sleep—
— William Shakespeare, Hamlet, Act III, scene i
Many a person has debated the Kirkpatrick evaluation taxonomy. To name a few:
- Dan Pontefract: Dear Kirkpatrick’s: You Still Don’t Get It (a personal favorite)
- Jane Bozarth: Alternatives to Kirkpatrick
- Roger Chevalier, CPT: Evaluation, The Link Between Learning and Performance
- Donald Clark: Using Kirkpatrick’s Four Levels to Create and Evaluate Informal and Social Learning Process
- And right now, a hot and heavy debate between Clark Quinn and Will Thalheimer: Kirkpatrick Model Good or Bad? The Epic Mega Battle!
So here I am, poised to have the discussion again. But with a twist – ah, you knew a twist was afoot, didn't you? Anecdotally, I find the Kirkpatrick "model" puts people in one of three camps.
- I use the words “Kirkpatrick” but don’t use the taxonomy per se.
- I have heard of Kirkpatrick, agree with it in theory, and want to use the different levels, but have no idea how or where to begin beyond "Smile Sheets".
- No idea what you are talking about. Let me Google it and come back to you. (Fine, we’ll wait)
But there is a quiet fourth camp and it’s this group of people I wish to address. In this camp, whether or not you use the “Kirkpatrick” levels or believe in its link to “Learning Performance” doesn’t matter.
Camp #4) My organization doesn’t require anything beyond “Smile” sheets so that is all we measure.
A portion of the debate happening between Clark and Will (and one they both agree upon) is that the Learning Industry, as a whole, has an accountability problem. We cannot point the finger at the leadership of an organization and say, "They don't ask me for it – so I don't have to provide it," or "All they are asking for are smile sheets and butts-in-seats numbers – so that's all I give them," and then act all shocked and surprised when leadership says training doesn't add value.
There has been plenty of research stating there is no causal link between "Level One" and "Level Two" learning; meaning there doesn't have to be a positive reaction to learning for learning to have taken place. Therefore, providing leadership with just smile sheets is the equivalent of having them (Spoiler Alert) watch only Star Wars Episode I: The Phantom Menace and expecting them to understand that Anakin Skywalker turns out to be the bad guy. They don't know the whole story.
Sorry people, our jobs just don't work that way.
If we want a seat at the table, we have to take accountability and responsibility for our position and the end results it produces. Back in the day, when I was a corporate L&D person, I wanted my performance review to be based on the same criteria as other operational leadership. You have to take the good with the bad – the credit with the blame. I know, the counterargument is always that organizations are quick to say "training" fails and assign blame, but are unwilling to give credit when "training" is successful. To this I have good news and bad news.
The bad news: That will never change, and if you think L&D is the only department to experience this type of blame assessment, you are mistaken. When profits fall, it's usually sales that takes the hit (never mind that the company might have a lousy product); when workers' comp claims go up, it's usually Risk Management or HR who have a target on their back (not that operations takes unnecessary, risky shortcuts) – see where I'm going here? Finger-pointing and the blame game are as old as time. And regarding credit…well, no – L&D is never fully responsible for improved performance and never will be. Performance support and success require a village. We cannot ever take full credit for performance improvement, behavior change, or whatever the flavor of the day is for organizations.
The good news: You can do something about it – if you want to. This takes the form of measuring success. Regardless of the taxonomy, tool, or method you use, some measurement process is required. I also want to go on record stating it's a cop-out to say that your organization doesn't require deeper measurement, so you don't do it. That may seem harsh, and I don't mean to harsh your buzz, but it's true. It is our responsibility to show organizations a better way to measure performance improvement.
Let's not get all L&D geeky on people. Speak the language of your organization. How about writing "Performance Objectives" rather than "Learning Objectives"? Such as: "Within one month of completing this course, the participant will put their project plan into action, with project participants evaluating project success." Those are course results that directly impact organizational success – that is, if you have done your due diligence to ensure the learning aligns with business goals. It doesn't matter what participants can regurgitate; it matters what they can actually do with said information. This is why solid measurement matters.
This is where fear gets in the way. We may not want to show those results – what if the student project fails? Is the failure the fault of training? It's comfy here in the dark, where we know that the participants loved the class but the concepts didn't stand a chance in hell of seeing success. No one wants to have that conversation with the boss: "Sorry boss, our 'training idea' didn't work." How do you prove what works (or doesn't) without a supporting metric? You don't; you can't. It is therefore our responsibility (and obligation) to overthrow the current "only smile sheets required" mentality.
More good news: You don't have to get knee-deep in Excel or your LMS to measure success. You just need to be able to answer some key questions, and in order to measure the success of any initiative, be it learning or improved kangaroo hopping, one must begin at the end.
Start here and please ditch the L&D vocabulary.
- What has happened in the business that now requires a course on Kangaroo Hopping? (Our competitors are using Super Hoppers to improve speed of delivery, we need to keep up.)
- How will we know our Kangaroo Hopping course will be successful for the business? (An improvement of 20% hopping length, within 60 days, will improve kangaroo package delivery speed.)
- How will the kangaroo elders know that 20% improvement has occurred? (The elders will be measuring hops via surprise hopping audits.)
- After all this, are we sure we need a course on Kangaroo Hopping? (Perhaps we can teach stronger kangaroos to coach those that need help?)
From here we can create an assessment report or evaluation process that really tells a story. In this case it's not about whether to use Kirkpatrick or not – the point is to use something that will measure performance results. Don't settle for smile sheets just because that's all the organization asks of you. I'm willing to bet that everyone reading goes above and beyond in other aspects of the job.
I know measuring performance can be hard and perhaps scary, but we need to do it. Why? Because saying it's good enough, isn't.
Related Post: My post supporting Dan: To be or not to be: The Kirkpatrick Question: https://learningrebels.com/2014/02/06/to-be-or-not-to-bethe-kirkpatrick-question/