You want to get good. You want to get good fast. How do you do this?
In 2008 and 2009 the US Department of Defense convened two meetings on this very topic.
Here's what they found. (Hint: the answer is NOT deliberate practice).
2/ First: let's put ourselves in the position of a training program designer.
You are told that you need to get a bunch of novices to a good level of competency in 3 months.
It used to take them a year.
How do you do this?
3/ If you're like most people, you'd probably say "Ok, let's create a skill tree. Let's map out all the skills needed from most basic to most advanced. Then let's design a syllabus, where complex skills build on simpler skills."
In other words, you'd replicate school.
4/ Aaand you would have failed your assignment.
The researchers who the DoD consulted said, basically: "No. Stop. Get rid of all that."
Your approach, the mainstream pedagogical approach, is too slow.
But why is it too slow?
5/ For two reasons:
1. Teaching novices atomised skills means they will build incomplete mental models of a domain. At some point, these incomplete models will interfere with progress. They become knowledge shields. You now have more work: you will need to break those shields.
6/ But the second reason is more pernicious.
2. Experts are able to see connections between concepts that novices cannot. Teaching novices a hierarchy of skills usually prevents them from learning those linkages early.
In other words, they're likely to get stuck.
So: SLOWWWW.
7/ So what do you do?
Well, you cheat.
It turns out that if you can go to domain experts and EXTRACT their mental models of expertise, you can use those models for training!
This means you'll be able to train for what the experts ACTUALLY HAVE in their heads.
8/ The set of techniques that allow you to extract mental models of expertise is called 'Cognitive Task Analysis'. It's been around for 30 years now.
You know how experts can't really explain how they 'know' things? Yeah. CTA gets around that.
9/ I've written about CTA in the past. For instance, I helped @johncutlefish with some skill extraction a few weeks ago. You may read about that experience here: commoncog.com/blog/john-cutl…
10/ Anyway, back to accelerating expertise. So you now know there is this superpower called CTA. Well, how do these researchers use it?
The short answer is that they use it to create training simulations, so that students CONSTRUCT the mental model that the experts have.
11/ Here's how they do it:
1. They identify the domain experts.
2. They do CTA.
3. During CTA, they collect details of difficult cases to build a case library.
4. They turn that case library into a set of training simulations.
5. They sort the scenarios according to difficulty.
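If it helps to picture the shape of that pipeline, here's a loose sketch in Python. Every name and field is hypothetical, invented purely for illustration; the researchers don't prescribe any particular data structure.

```python
# A loose sketch of the case-library -> scenario pipeline described above.
# All names and fields are hypothetical, invented for illustration.
from dataclasses import dataclass

@dataclass
class Case:
    description: str   # a difficult case collected during CTA interviews
    cues: list[str]    # the cues the expert reported attending to
    difficulty: int    # e.g. 1 (routine) to 5 (hardest case of a career)

def build_scenario_sequence(case_library: list[Case]) -> list[Case]:
    """Order the case library from easiest to hardest scenario."""
    return sorted(case_library, key=lambda case: case.difficulty)

# e.g.:
library = [
    Case("convoy halts at a choke point", ["freshly dug earth"], 3),
    Case("market street, normal foot traffic", [], 1),
]
print([c.difficulty for c in build_scenario_sequence(library)])  # [1, 3]
```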
12/ The training simulations serve as the training program.
This is much better, because:
1. Good simulations have good cognitive fidelity to the real work task. Performance transfers.
2. There is no artificial atomisation of concepts! Learners must deal with full complexity!
13/ Ok, here's an example. Trigger warning: Afghanistan, IEDs, military. Skip ahead if necessary.
After 9/11 the US military had problems with IEDs. These were roadside bombs. Think: Hurt Locker. The DoD started spending a lot of money to detect and defeat IEDs.
14/ As part of that effort, the DoD commissioned a CTA. Apparently some of the Marines and Soldiers were able to detect IEDs. They would 'have a bad feeling', and take measures to avoid a danger zone.
The military wanted to know how. If they could extract, they could train.
15/ The group of NDM (Naturalistic Decision Making) researchers quickly realised this was a bloody difficult skill domain. Think about it: Iraq is large. Within Iraq, different towns and even neighbourhoods had different IED tactics. And Afghanistan was different still.
Plus the enemy was constantly adapting.
16/ And they needed to extract something general. Something that would work regardless of where a young Marine was deployed.
Eventually they realised that the most skilled Marines were putting themselves in the insurgent's shoes.
They could think like an IED emplacer.
17/ Think about it: if you wanted to emplace an IED, how would you trigger it? Say you trigger wirelessly. You would need a spotter. You would need to know when the Marine convoy was near enough to the bomb.
So the insurgents would use a marker. Like a pole, or a rock formation.
18/ These were the cues the Marines were picking up on.
The researchers had successfully extracted this mental model of expertise. Now: how to train?
Ask yourself this: would you set up a PowerPoint presentation? A lecture on IED tactics?
That would be dumb.
19/ Here's what the researchers did: they took a video game that the military used for training (called VBS) and built a module for it.
The players had to play AS an insurgent.
They had to emplace IEDs and target blue team convoys. This is what one of the researchers said:
20/ Note how rapid the training could be. Note how quickly you could enable the construction of the actual mental model.
Eventually, Marines and Soldiers would play a few scenarios before deployment. It saved lives.
21/ Let's wrap up. I've described an accelerated expertise training program, developed by applied researchers in military and industry contexts.
It is remarkably novel. I've written about some of the underlying theories before:
22/ And it's just scratching the surface. For a full summary, including some other uses of the research, read my blog post here: commoncog.com/blog/accelerat…
23/ Follow for more threads about expertise, business decision making, and so on.
One thing I’ve been thinking about, related to yesterday’s Commoncog essay, is that effective people tend to be perfectly ok doing things that work, without immediate care for theory.
Theory can catch up later.
Contrast that with folks who want models for everything they do, who will eagerly tell you their latest pet model / framework / theory for sales or whatever. It all sounds very sophisticated, and then you check their track records and indeed they're not very … good?
It’s the same sort of affliction that produces this sort of thinking:
Friend sent me the “Ribbonfarm is Retiring” piece and it contains some neat observations (blogging was a ZIRP phenomenon … except I was active in blogging in 2005? And Technorati was ascendant in 2006?)
Ultimately it was classic Ribbonfarm, right up to the end.
And by that I mean there are folks who make up models to be useful, and there are folks who make up models for the sake of making up models, accuracy or usefulness be damned, and Ribbonfarm belonged to the latter.
But it was a stalwart of the blogosphere, and for that I salute it 🫡
I am legitimately surprised so many people are citing Paul Graham’s “Founder Mode” so uncritically.
Yes, we know founder-led companies are run better + differently. There are decades of evidence for that. But ‘founder mode’ is so vague that it’s untestable.
“Founder mode good. Founder mode do thing different from manager mode. Founder mode is run company different from manager.”
The information content of the essay is nearly zero, if you’re practically minded.
Because how are you going to test this in your own context?!
Even the evidence that pg presents for ‘founder mode good’ isn’t a slam dunk. “Airbnb's free cash flow margin is now among the best in Silicon Valley.”
Is it because of founder mode, or is it because Airbnb has a negative cash conversion cycle, and it’s now well tuned?
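(Quick aside on what a negative cash conversion cycle means, since it's doing real work in that sentence. The textbook formula is CCC = DIO + DSO − DPO, measured in days. A toy calculation, with numbers I've invented purely for illustration:)

```python
# Cash conversion cycle: CCC = DIO + DSO - DPO (all in days).
# Toy numbers for illustration only -- not Airbnb's actual figures.
dio = 0    # days inventory outstanding: a marketplace holds no inventory
dso = 1    # days sales outstanding: guests pay at booking
dpo = 30   # days payable outstanding: hosts are paid around check-in

ccc = dio + dso - dpo
print(ccc)  # -29 days: the business holds customers' cash before paying it out
```

A negative cycle means the company banks customers' cash before it has to pay it out, so growth funds itself. That alone can flatter free cash flow margins, founder mode or no.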
When @sjataylor and I started working on Xmrit together, one of the big questions we had was: why haven’t these methods spread outside of manufacturing?
In a previous life, Sam worked IN manufacturing. He was highly skeptical that it could be applied more generally.
“Except for healthcare” he joked, “in Six Sigma healthcare was always the pet example they rolled out when they wanted to say ‘Look! It’s used outside factories!’”
Well, the existence of Xmrit (and my essays on Commoncog) is clear evidence that XmR charts ARE more broadly usable.
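If you haven't seen one: an XmR chart plots individual values (X) alongside the moving range (mR) between consecutive values, with "natural process limits" computed from the average moving range. A minimal sketch, using the standard 2.66 / 3.268 constants and made-up data:

```python
# A minimal sketch of XmR (individuals and moving range) chart limits.
# Uses the standard constants 2.66 and 3.268; the data below is made up.

def xmr_limits(values):
    """Compute natural process limits for an XmR chart."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "centre_line": x_bar,
        "upper_limit": x_bar + 2.66 * mr_bar,  # upper natural process limit
        "lower_limit": x_bar - 2.66 * mr_bar,  # lower natural process limit
        "range_limit": 3.268 * mr_bar,         # upper limit for moving ranges
    }

# e.g. weekly signups (hypothetical numbers)
print(xmr_limits([42, 39, 51, 44, 47, 40, 53, 46]))
```

Points falling outside those limits signal "special cause" variation worth investigating. Nothing in the math is specific to factories.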