How you measure training success might be stopping you from succeeding
What is the most important thing about the training courses you deliver?
Is it that you get good feedback? You get top marks on the happy sheet? The organisation is willing to invite you back for more work?
Probably all of the above, because this is largely how we measure our success.
But it’s not how we succeed.
We succeed when people learn, and more so when they implement that learning and improve their performance, improve the organisation, improve their lives!
People learn stuff when their ideas and assumptions are challenged, when they think differently, when they change something about themselves and the way they deliver within the organisation.
Most of us working in Learning and Development are passionate about this.
We want to change people’s professional lives and improve organisational performance; it’s what gets us out of bed in the morning. We care about the impact we can have and we strive to get better, even to the point of reading a blog post like this one, just in case there’s something we can learn from it!
Yet the way we measure our performance undermines our ability to have that positive impact.
We need positive feedback, we’re judged by those silly smiley happy sheets, we rely on the organisation inviting us back again and again – but to succeed we need to drag learners from their comfort zones, challenge them, and be a thorn in their complacent sides.
How our success is measured restricts us from doing what we need to do in order to succeed.
Isn’t this a conflict of interest?
My ideas around this paradox solidified when I interviewed Paul Levy for the Trainer Tools podcast – here’s a link to the “Collusion of Mediocrity” podcast interview.
Paul found himself at the end of a day’s training leafing through smiling happy sheets of positive feedback from learners who’d had a lovely day, and yet he felt completely dissatisfied by it.
He’d entertained them, they’d had a nice time, but he hadn’t driven any real change.
He hadn’t really challenged people.
He’d colluded with them and let them stay comfortably mediocre, knowing that the price to push them toward excellence would make him unpopular.
Fuelled by his midlife crisis, he decided that he didn’t want to be a training jester; that wasn’t what he wanted to do with his life.
If they wanted a day of mediocre entertainment, they could watch ITV or go to the circus.
So he developed the “Collusion of Mediocrity” model.
The Collusion of Mediocrity
The name of the model is deliberately controversial – it is designed to challenge the very collusion it describes.
There are four levels to it (I’m now using my words, not Paul’s):
- Say: collusion by not saying what we should be saying
- Plan: collusion by not identifying and planning the right activity
- Do: collusion by not delivering on the plan
- Keep doing: collusion by not keeping going and just relaxing back into the way things were
For the purposes of this model, “mediocrity” means anything better than “poor” but less than “excellence”.
As trainers and facilitators, it is part of our role to challenge “mediocrity” when we see it.
This could be self-limiting beliefs, assumptions or paradigms, excuses, blind spots, or even dishonesty, though it is often just niceness and a desire to be positive and supportive. But if people are not saying things as they really are, do we nod our heads and describe something mediocre and run-of-the-mill as “excellent” and “brilliant”, or do we challenge it?
Whatever we do, we need to do it carefully and skilfully. This is not about blundering in with a massive stick, breaking egos and damaging relationships; we must respect people’s willingness and ability to be challenged, and get good at doing it in a supportive and positive way.
If we do match what we say with reality, and recognise mediocrity for what it is, do we then plan to do something that really meets the challenge?
So often what we plan to do is a limp sticking plaster over a gaping wound, and yet we accept this because we’ve agreed that – to paraphrase Yes Minister – something must be done, this is something, so doing it is the answer to the problem.
Our role is to demand that the action plan genuinely meets the challenge we articulated in the first phase.
If we’ve said it, and planned some good action, does it actually get delivered?
It’s very easy to create a plan, it’s not so easy to deliver it.
Our role in ensuring these plans get put into action is limited.
There’s only so much we can do because we’re not there – plans get delivered after the course is over – but we can enable delivery by first describing the model, so that there is awareness of the likelihood of non-delivery, and by encouraging learners to work together as partners or small groups to help each other deliver.
If the delivery matches the ambition of the plan, and the plan meets what was said at the start, then we’ve achieved a huge amount – but most plans don’t survive long in the real world.
When we hit obstacles, get busy, get distracted, lose energy, we soon find that our interest, motivation and confidence wane. It’s then so easy to consider what we’ve achieved so far as being good enough.
Making learners aware that this is likely to happen helps them plan to overcome it and sustain the delivery over the longer term.
Pulling it all together
The most important way to achieve collusion-busting training aimed at excellence is to talk about it right from the start.
Articulating the model, using the language, and bringing it to life in a positive way, is the foundation for later being able to call out collusion where you see it and avoid falling into it yourself.
We may not be able to perfectly evaluate the full impact of training, but the first rule of evaluation should be “do no harm” … so maybe we can start by ditching the happy sheets as a measure of training success?