Cedric Chin
Sep 8, 2021 · 24 tweets · 7 min read
1/ Let's talk about accelerating expertise.

You want to get good. You want to get good fast. How do you do this?

In 2008 and 2009, the US Department of Defense convened two meetings on this very topic.

Here's what they found. (Hint: the answer is NOT deliberate practice).
2/ First: let's put ourselves in the position of a training program designer.

You are told that you need to get a bunch of novices to a good level of competency in 3 months.

It used to take them a year.

How do you do this?
3/ If you're like most people, you'd probably say "Ok, let's create a skill tree. Let's map out all the skills needed from most basic to most advanced. Then let's design a syllabus, where complex skills build on simpler skills."

In other words, you'd replicate school.
4/ Aaand you would have failed your assignment.

The researchers the DoD consulted said, basically: "No. Stop. Get rid of all that."

Your approach, the mainstream pedagogical approach, is too slow.

But why is it too slow?
5/ For two reasons:

1. Teaching novices atomised skills means they will build incomplete mental models of a domain. At some point, these incomplete models will interfere with progress: they become 'knowledge shields', which learners use to explain away evidence that contradicts their flawed models. You now have more work: you will need to break those shields.
6/ But the second reason is more pernicious.

2. Experts are able to see connections between concepts that novices cannot. Teaching novices a hierarchy of skills usually prevents them from learning those linkages early.

In other words, they're likely to get stuck.

So: SLOWWWW.
7/ So what do you do?

Well, you cheat.

It turns out that if you can go to domain experts and EXTRACT their mental models of expertise, you can use those models for training!

This means you'll be able to train for what the experts ACTUALLY HAVE in their heads.
8/ The set of techniques that allow you to extract mental models of expertise is called 'Cognitive Task Analysis'. It's been around for 30 years now.

You know how experts can't really explain how they 'know' things? Yeah. CTA gets around that.
9/ I've written about CTA in the past. For instance, I helped @johncutlefish with some skill extraction a few weeks ago. You may read about that experience here: commoncog.com/blog/john-cutl…

And the most comprehensive book on it is this one: goodreads.com/book/show/4433…
10/ Anyway, back to accelerating expertise. So you now know there is this superpower called CTA. Well, how do these researchers use it?

The short answer is that they use it to create training simulations, so that students CONSTRUCT the mental model that the experts have.
11/ Here's how they do it:

1. They identify the domain experts.
2. They do CTA.
3. During CTA, they collect details of difficult cases to build a case library.
4. They turn that case library into a set of training simulations.
5. They sort the scenarios according to difficulty.
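(An aside, purely my own illustration and not from the thread or the research: to make steps 3 to 5 a little more concrete, here's a minimal sketch of what a case library sorted by difficulty might look like as a data structure. All names and example cases are hypothetical.)

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """One difficult case collected during a CTA interview (hypothetical structure)."""
    title: str
    cues: list[str]            # perceptual cues the expert reported noticing
    expert_actions: list[str]  # what the expert actually did
    difficulty: int            # rough rating, e.g. 1 (easy) to 5 (hard)

@dataclass
class CaseLibrary:
    cases: list[Case] = field(default_factory=list)

    def add(self, case: Case) -> None:
        self.cases.append(case)

    def training_sequence(self) -> list[Case]:
        # Step 5: order scenarios from easiest to hardest.
        return sorted(self.cases, key=lambda c: c.difficulty)

# Hypothetical usage: two invented cases, then the ordering a trainee would see.
library = CaseLibrary()
library.add(Case("Suspicious marker near a culvert",
                 cues=["freshly disturbed dirt", "stacked rocks by the road"],
                 expert_actions=["halted the convoy", "called EOD"],
                 difficulty=4))
library.add(Case("Routine route clearance",
                 cues=["normal pattern of life"],
                 expert_actions=["proceeded as planned"],
                 difficulty=1))

for case in library.training_sequence():
    print(case.difficulty, case.title)
```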
12/ The training simulations serve as the training program.

This is much better, because:

1. Good simulations have high cognitive fidelity to the real work task. Performance transfers.
2. There is no artificial atomisation of concepts! Learners must deal with full complexity!
13/ Ok, here's an example. Trigger warning: Afghanistan, IEDs, military. Skip ahead if necessary.

After 9/11, the US military had problems with IEDs (improvised explosive devices): roadside bombs. Think: The Hurt Locker. The DoD started spending a lot of money to detect and defeat IEDs.
14/ As part of that effort, the DoD commissioned a CTA. Apparently some of the Marines and Soldiers were able to detect IEDs. They would 'have a bad feeling', and take measures to avoid a danger zone.

The military wanted to know how. If they could extract, they could train.
15/ The group of NDM (Naturalistic Decision Making) researchers quickly realised this was a bloody difficult skill domain. Think about it: Iraq is large. Within Iraq, different towns and even neighbourhoods had different IED tactics. And Afghanistan was different still.

Plus the enemy was constantly adapting.
16/ And they needed to extract something general. Something that would work regardless of where a young Marine was deployed.

Eventually they realised that the most skilled Marines were putting themselves in the insurgent's shoes.

They could think like an IED emplacer.
17/ Think about it: if you wanted to emplace an IED, how would you trigger it? Say you trigger wirelessly. You would need a spotter. You would need to know when the Marine convoy was near enough to the bomb.

So the insurgents would use a marker. Like a pole, or a rock formation.
18/ These were the cues the Marines were picking up on.

The researchers had successfully extracted this mental model of expertise. Now: how to train?

Ask yourself this: would you set up a PowerPoint presentation? A lecture on IED tactics?

That would be dumb.
19/ Here's what the researchers did: they took a video game that the military used for training (called VBS, short for Virtual Battlespace) and built a module for it.

The players had to play AS an insurgent.

They had to emplace IEDs and target blue team convoys. (One researcher's account of the exercise is quoted in images attached to the original tweet.)
20/ Note how rapid the training could be. Note how quickly you could enable the construction of the actual mental model.

Eventually, Marines and Soldiers would play a few scenarios before deployment. It saved lives.
21/ Let's wrap up. I've described an accelerated expertise training program, developed by applied researchers in military and industry contexts.

It is remarkably novel. I've written about some of the underlying theories before:

22/ And it's just scratching the surface. For a full summary, including some other uses of the research, read my blog post here: commoncog.com/blog/accelerat…
23/ Follow for more threads about expertise, business decision making, and so on.

Or subscribe to my newsletter if you don't want to miss out on longform essays here: commoncog.com/blog/subscribe…

Thank you for reading!
PS: If you want to learn CTA, you may sign up for a course here: cta.institute

It's run by the OG researchers who invented some of the techniques. 😊

One of them was involved with the IED project.

I've signed up, and I encourage you to do so too!
