“They learn great concepts. They answer all the right questions. They know what to include in the post-assessment. And then Monday happens.”
That’s the knowing-doing gap — and it’s where most leadership and management training money quietly disappears. The gap isn’t between ignorance and knowledge. It’s between knowing what good leadership looks like and actually being able to do it when things get difficult, when you’re tired, when the person across the table has their own agenda. Closing that gap requires something most training formats aren’t built to provide: genuine practice under realistic conditions.
Here’s what’s actually going wrong, and what a different approach looks like.
The Box-Ticking Problem
Most leadership training today is either performative or purely cognitive — and often both. Participants learn frameworks, discuss case studies, do some reflection, maybe a role-play. They know what they’re supposed to say. They know the right answers. And the training system rewards them for knowing, because the assessments test knowledge. The actual behaviour change — the thing the organisation paid for — is treated as an inevitable downstream consequence of information transfer. It almost never is.
This isn’t a criticism of the people designing these programmes. It’s a structural problem. If your measure of success is “did participants demonstrate understanding of the framework?”, you’ll design for that. The knowing-doing gap opens precisely because understanding and doing are two completely different cognitive and physical acts. You can understand how to swim in exquisite theoretical detail and still drown in the pool.
The part that’s supposed to bridge the gap — the “how to do it” — gets sold separately, usually as coaching. Which is a fine product, but an odd solution: if the original training left people knowing what to do but not how to do it, you’ve delivered half a product and charged full price for it.
What the Gap Actually Looks Like
The clearest version of this I’ve seen play out involved a startup that had scaled fast. They had two distinct cultures inside one organisation: the founding team who’d built the thing from scratch, and the professional managers hired in to systematise it. Both groups knew the theory. Both could articulate the values. Both had done the workshops.
What they hadn’t done was actually navigate the power dynamics between them — the resentment, the different assumptions about how decisions should be made, who deserved deference and why. The training had given them a shared vocabulary for leadership. It hadn’t given them a shared experience of working through something genuinely difficult together. Culture was eroding. People were leaving. The intervention that finally worked wasn’t another workshop on values alignment. It was a game called Survive Then Thrive, where they had to make real trades, withhold real information, and actually deal with each other — not perform dealing with each other.
No frameworks were presented. No slides were shown. The debrief had more content than any session I’ve run with a prepared deck, because what surfaced in the game was actual behaviour, not curated behaviour. That’s the difference.
Why Experiential Learning Addresses This Specifically
In a serious game, participants can’t perform their professional persona. The game is too immediate, too demanding, too socially complex. When someone is fully absorbed in trying to navigate a negotiation, achieve a faction goal, and manage an information disadvantage simultaneously, they don’t have the cognitive bandwidth to also perform “thoughtful collaborative leader.” They just do what they do. And what they do is the most honest data point you’ll find outside of actual work.
The EPPA loop — Experience, Patterns, Principles, Application — is how learning happens in this format. Players go through the experience fully, as themselves. The facilitator observes and notes specific behaviours. In the debrief, those patterns are surfaced and named. The group derives principles together — not principles handed down from a slide, but principles they’ve actually lived in the last 90 minutes. Then those principles get connected to specific applications in their actual jobs.
The key move is what happens between patterns and principles. When I ask a participant “why did you hoard that information in round two?” and they sit with it for a moment and say “because I didn’t trust that the other team would reciprocate” — that’s not a textbook answer. That’s a real insight about how they actually operate, extracted from real behaviour. That’s what makes it stick.
What L&D Should Actually Be Evaluating
If you’re evaluating a training provider and the primary question you’re asking is “are they credentialed?”, you’re asking the wrong question. Credentials matter up to a point. But they say nothing about whether the format will produce behaviour change, because most credentialling frameworks don’t assess for that either.
The question to ask is: how will the behaviour change actually happen? Not “what will participants learn?” — that’s the easy part. What mechanisms are in place for participants to practise the skill, receive real feedback on their actual behaviour (not their stated intentions), and build a system that sustains the change after the facilitator leaves the room?
If the answer involves role-plays, ask whether participants will perform the right behaviour or actually demonstrate it under pressure. The answer is almost always the former. Role-plays have their place, but they’re performative by nature. Everyone knows the “correct” way to behave, and most people deliver it. What you don’t see is what happens when the same person is under genuine cognitive and social pressure.
If the answer involves e-learning modules with game mechanics bolted on — points, badges, completion rates — ask what shared experience that creates. The answer is none. Gamified modules are better than static ones, but they’re individual experiences. Leadership is never an individual experience. It happens in rooms with other people, under conditions of ambiguity and competing interests. The training environment should reflect that.
What Good Looks Like on the Other Side
The clients who see the clearest behaviour change have a few things in common. They came in with a specific, honest problem — not the problem that looks good in a brief, but the actual thing costing them. They gave the session genuine priority: the right people in the room, phones away, the day protected. And they were willing to act on what surfaced in the debrief, including the uncomfortable parts.
What they walk away with isn’t just new knowledge. It’s a shared reference point — a specific experience the whole group went through together and can refer back to. “Remember what happened in round three?” becomes a live tool for the team. The vocabulary built in the session doesn’t disappear when the facilitator leaves, because it was built from their own experience, not transplanted from a textbook.
At PutThePlayerFirst.com, the design question I start with is always the same: what does the behaviour change look like, and how will I know if it’s happening? Everything else — the game mechanics, the theme, the debrief structure — is downstream of that. If a training provider can’t answer that question clearly and specifically for your context, you haven’t found the right provider. You’ve found someone who will give you a workshop that feels valuable and is very hard to evaluate either way.
The knowing-doing gap doesn’t close because someone gave a good presentation about it. It closes because someone found a way to put you in a situation where you had to do the thing — and then helped you understand what you actually did.