You want to get good. You want to get good fast. How do you do this?
In 2008 and 2009, the US Department of Defense convened two meetings on this very topic.
Here's what they found. (Hint: the answer is NOT deliberate practice).
2/ First: let's put ourselves in the position of a training program designer.
You are told that you need to get a bunch of novices to a good level of competency in 3 months.
It used to take them a year.
How do you do this?
3/ If you're like most people, you'd probably say "Ok, let's create a skill tree. Let's map out all the skills needed from most basic to most advanced. Then let's design a syllabus, where complex skills build on simpler skills."
In other words, you'd replicate school.
4/ Aaand you would have failed your assignment.
The researchers the DoD consulted said, basically: "No. Stop. Get rid of all that."
Your approach, the mainstream pedagogical approach, is too slow.
But why is it too slow?
5/ For two reasons:
1. Teaching novices atomised skills means they will build incomplete mental models of a domain. At some point, these incomplete models will interfere with progress. They become 'knowledge shields': simplified beliefs that learners use to deflect information that contradicts their models. You now have more work: you will need to break those shields.
6/ But the second reason is more pernicious.
2. Experts are able to see connections between concepts that novices cannot. Teaching novices a hierarchy of skills usually prevents them from learning those linkages early.
In other words, they're likely to get stuck.
So: SLOWWWW.
7/ So what do you do?
Well, you cheat.
It turns out that if you can go to domain experts and EXTRACT their mental models of expertise, you can use those models for training!
This means you'll be able to train for what the experts ACTUALLY HAVE in their heads.
8/ The set of techniques that allow you to extract mental models of expertise is called 'Cognitive Task Analysis'. It's been around for 30 years now.
You know how experts can't really explain how they 'know' things? Yeah. CTA gets around that.
9/ I've written about CTA in the past. For instance, I helped @johncutlefish with some skill extraction a few weeks ago. You may read about that experience here: commoncog.com/blog/john-cutl…
10/ Anyway, back to accelerating expertise. So you now know there is this superpower called CTA. Well, how do these researchers use it?
The short answer is that they use it to create training simulations, so that students CONSTRUCT the mental model that the experts have.
11/ Here's how they do it:
1. They identify the domain experts.
2. They do CTA.
3. During CTA, they collect details of difficult cases to build a case library.
4. They turn that case library into a set of training simulations.
5. They sort the scenarios according to difficulty.
12/ The training simulations serve as the training program.
This is much better, because:
1. Good simulations have high cognitive fidelity to the real work task. Performance transfers.
2. There is no artificial atomisation of concepts! Learners must deal with full complexity!
13/ Ok, here's an example. Trigger warning: Afghanistan, IEDs, military. Skip ahead if necessary.
After 9/11, the US military had problems with IEDs. These were roadside bombs. Think: The Hurt Locker. The DoD started spending a lot of money to detect and defeat IEDs.
14/ As part of that effort, the DoD commissioned a CTA. Apparently some of the Marines and Soldiers were able to detect IEDs. They would 'have a bad feeling', and take measures to avoid a danger zone.
The military wanted to know how. If they could extract that skill, they could train it.
15/ The group of Naturalistic Decision Making (NDM) researchers quickly realised this was a bloody difficult skill domain. Think about it: Iraq is large. Within Iraq, different towns and even neighbourhoods had different IED tactics. And Afghanistan was different still.
Plus the enemy was constantly adapting.
16/ And they needed to extract something general. Something that would work regardless of where a young Marine was deployed.
Eventually they realised that the most skilled Marines were putting themselves in the insurgent's shoes.
They could think like an IED emplacer.
17/ Think about it: if you wanted to emplace an IED, how would you trigger it? Say you trigger it wirelessly. You would need a spotter. You would need to know when the Marine convoy was near enough to the bomb.
So the insurgents would use a marker. Like a pole, or a rock formation.
18/ These were the cues the Marines were picking up on.
The researchers had successfully extracted this mental model of expertise. Now: how to train?
Ask yourself this: would you set up a PowerPoint presentation? A lecture on IED tactics?
That would be dumb.
19/ Here's what the researchers did: they took a video game that the military used for training (VBS, short for Virtual Battlespace) and built a module for it.
The players had to play AS an insurgent.
They had to emplace IEDs and target blue team convoys. This is what one of the researchers said:
20/ Note how rapid the training could be. Note how quickly you could enable the construction of the actual mental model.
Eventually, Marines and Soldiers would play a few scenarios before deployment. It saved lives.
21/ Let's wrap up. I've described an accelerated expertise training program, developed by applied researchers in military and industry contexts.
It is remarkably novel. I've written about some of the underlying theories before:
22/ And it's just scratching the surface. For a full summary, including some other uses of the research, read my blog post here: commoncog.com/blog/accelerat…
23/ Follow for more threads about expertise, business decision making, and so on.
To establish some credibility: I built a debate club in high school, which imploded.
I thought, "hmm, this seems like a useful skill to learn".
I then built the NUS Hackers, which has persisted for 8 years now, and remains the best place to hire software engineers in Singapore.
And then I went to Vietnam, built out an engineering office there, and tweaked the departments adjacent to our office. Now, 3 years later, the org has retained 75% of the people I hired, and it is still run using many of the same policies/incentives I designed.
We talk about ways programmers harm themselves in their careers, mistakes non-technical people make when dealing with programmers, and what it was like pushing the boundaries of property testing.
Also, possibly the best piece of fiction you'll ever read about software testing (I know, I know, but truly, it's great): archiveofourown.org/works/3673335
1/ One of my most persistent irritations is with the whole 'OH YOU NEED TO DO DELIBERATE PRACTICE' meme.
Ugh, no, perhaps you don't. It depends on your domain. Deliberate practice has problems. Have you even tried applying it?
I've written about this before, but here's a thread.
2/ First: DP is a real theory, and it's one of the greatest contributions to our understanding of expertise.
It is a technical term. It does NOT mean 'practicing deliberately'. We'll define it soon.
My problems with it stem mostly from trying to apply it, and failing miserably.
3/ Ok, let's define DP. To make things a little more complicated, DP is tricky to define because K. Anders Ericsson has been inconsistent with his definitions throughout his career (see pic, from The Science of Expertise, Hambrick et al.).
1/ I've been reflecting on why I found @LiaDiBello4's extracted mental model of business so compelling.
I mean, my reaction was mostly: "ALL great businesspeople share a common mental model of business? The model is a triad of supply, demand and capital? YES THIS MUST BE RIGHT."
1/ Yesterday I talked about Cognitive Transformation Theory, a learning theory which says that how well you learn from the real world depends on how good you are at UNLEARNING mental models.
2/ In 1993, Clark Chinn and William Brewer published a famous paper on how science students react to anomalous data — data that clashes with their mental models of the world.
They then drew on the history of science to show how common these reactions are amongst scientists.
3/ It turns out there are basically only 7 ways you can respond to inconvenient data. 6 of them allow you to preserve your existing mental models.
See if any of these are familiar to you, before we go through them in order:
US Military, Naturalistic Decision Making researchers: "in order to accelerate expertise, we need to design our training programs to destroy existing mental models"
Good businesspeople: "how can we distill wisdom from the air?"
Clarification on the 'distill wisdom from the air' bit — that's from Robert Kuok's biography, in reference to the way uneducated Chinese businessmen learn. Mostly by reflecting on experiences and observing widely.
There was a meme some time back on "what is the deliberate practice of your domain?" With this theory of learning, we can say that the question is ill-formed, because DP can only be done in domains with a well-developed pedagogy, under a coach who has access to that pedagogy.