Totally obvious once you know it exists, but the training methods available when you can extract tacit expertise (assuming you have access to experts, and the skill to do the extraction) look VERY different from the training methods available when you can't.
Example: tacit expertise is basically a pattern recognition process that generates 4 things: cues, expectancies, a prioritised and fluid list of goals, and an action script.
So, with this in mind, your training program ends up looking like a series of scenario simulations.
This is very different from the ‘pedagogical development and subskill identification’ view of teaching.
Here it’s “what series of varied scenarios can I design that will let students acquire the right set of cues, expectancies, goals, and actions that experts tacitly generate?”
In other words, you don’t need to distill everything into a framework if you have the exact set of cues, expectancies, etc. that an expert has; you can just train the mental models directly, via simulation exercises.
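To make “a series of scenario simulations” concrete, here is a rough sketch in Python of what one such drill could look like. Everything in it (the scenario text, the expert’s answers) is invented for illustration; what matters is the shape of the exercise: present a situation, have the learner write down their own cues, expectancies, goals, and actions, then compare against what the expert generated.

```python
# A toy "scenario drill", assuming the four outputs described above.
# All content here (the scenario, the expert's answers) is made up for
# illustration; the point is the shape of the exercise, not the specifics.

EXPERT_READ = {
    "cues": [
        "the spec says 'roughly real-time'",
        "only one consumer of the data exists today",
    ],
    "expectancies": [
        "more consumers will appear, and batch syncs will stop being good enough",
    ],
    "goals": [
        "avoid locking the design into synchronous point-to-point calls",
    ],
    "actions": [
        "prototype with a queue in the middle",
        "measure end-to-end latency before committing",
    ],
}

SCENARIO = "You're asked to design a pipeline that syncs orders to the warehouse system."


def run_drill() -> None:
    """Present the scenario, collect the learner's read, then show the expert's."""
    print(SCENARIO)
    learner_read = {}
    for part in ("cues", "expectancies", "goals", "actions"):
        answer = input(f"Your {part} (comma-separated): ")
        learner_read[part] = [item.strip() for item in answer.split(",") if item.strip()]

    print("\n--- Expert's read, for comparison ---")
    for part, items in EXPERT_READ.items():
        print(f"{part}:")
        for item in items:
            print(f"  - {item}")
    # The learning happens in the gap between learner_read and EXPERT_READ:
    # which cues did you miss, which expectancies did you never form?


if __name__ == "__main__":
    run_drill()
```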
For many years I’ve worked with programmers better than me, who are able to — through a combination of intuition and prototyping — pick out program structures that work, whereas if I’d designed them we’d have to redo things a few months down the road.
I wanted this skill for myself.
I thought that I would have to synthesise their tacit mental models into a framework.
I now see that’s mistaken. All I need to do is to extract the cues, expectancies, goals, and actions in their heads, and then design a set of simulation exercises that force me to mimic them.
This is a much lower bar than doing ‘proper’ pedagogical design or syllabus development.
I mean, looking back, this is so obviously the central thread that ties together most of Naturalistic Decision Making’s training programs: commoncog.com/blog/creating-…
(NDM is the field that specialises in techniques that can extract tacit mental models of expertise).
• • •
I want to call out an example of some remarkable thinking that I've had the privilege of observing up close.
About 2 years ago, @vaughn_tan started a project to come up with better thinking around 'uncertainty'. This MIGHT be important to business! MIGHT! But I was unconvinced.
Vaughn had noticed that our collective ability to deal with uncertainty was compromised by bad language. Because we do not have good language for uncertainty, we are forced to borrow words and concepts from risk management.
But this is bad: risk is VERY different from uncertainty! (Risk is quantifiable; true uncertainty is not.)
I was in good company in my scepticism, though. Vaughn's friend, the notable VC Jerry Neumann, told him that he was sceptical Vaughn's project would be very useful.
Neumann argued that it wasn't important to know what types of uncertainty exist — merely how to use it.
I once took on an intern because she wanted to see how I approached 'startup things'. At the end of the summer, she was surprised that I didn't have a set of hypotheses to test.
"Doesn't this go against the data-driven approach you talked about?" she asked.
I didn't have the language for it then, but I think I do now.
When an initiative / product / project is too new, there is too much uncertainty to form useful hypotheses.
Instead, what you want to do is to just "throw shit at the wall and see what sticks."
This sounds woefully inefficient, but it's not, not really. A slightly more palatable frame for this is "take action to generate information."
But what kind of information?
Actually I was looking for answers to the following four questions:
A gentle reminder that if you want to speed up the development of expert intuition, you will do a lot better if you have an actual mental model of what expert intuition *is*.
The most useful model is the one below:
It gives you more handles on how to improve.
The name of the model is the 'recognition-primed decision' model, or RPD.
The basic idea is simple: when an expert looks at a situation, they generate four things automatically:
1. Cues
2. Expectancies
3. Possible goals
4. An action script.
You can target each.
For instance, if you're a software engineer and you want to learn from the tacit knowledge of the senior programmers around you, ask them (one way to record the answers is sketched after this list):
- What cues did you notice?
- What were your expectancies?
- What goals were you weighing?
- What was your action script?
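If you want somewhere structured to put those answers, here is a minimal sketch (Python; the class, field names, and example content are entirely my own, not anything from Klein's work) of one way to record each debrief:

```python
from dataclasses import dataclass, field


@dataclass
class RecognitionDebrief:
    """One senior engineer's read of a single situation, captured after the fact."""
    situation: str
    cues: list[str] = field(default_factory=list)          # what they noticed
    expectancies: list[str] = field(default_factory=list)  # what they expected to happen next
    goals: list[str] = field(default_factory=list)         # what they were trying to protect or achieve
    action_script: list[str] = field(default_factory=list) # what they actually did, step by step


# Example entry from a (hypothetical) design-review debrief:
debrief = RecognitionDebrief(
    situation="design review for a new internal billing service",
    cues=["two teams both want to write to the same table"],
    expectancies=["schema changes will be contested within months"],
    goals=["keep team boundaries aligned with data ownership"],
    action_script=["split the table", "put an API in front of the shared part"],
)
```

A pile of these, collected across varied situations, is the raw material for the kind of simulation exercises described earlier.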
1. Deliberate practice (DP) is a sleight-of-hand research paradigm: it only claims to be the best way to reach expertise in fields with a good history of pedagogical development. (See: The Cambridge Handbook of Expertise and Expert Performance, where they point out that pop stars and jazz musicians become world class, but not through DP.)
2. Most of us are not in such domains.
3. Therefore we cannot use DP, and tacit knowledge elicitation methods are more appropriate.
The counterargument @justinskycak needs to make is simple: math is a domain with a long history of pedagogical development, therefore DP dominates.
Justin says that “talent is overrated” is not part of the DP argument.
I’m not sure what he’s read from Ericsson that makes him think that.
Hambrick et al. document the MANY instances where Ericsson makes the claim "DP is the gold standard, and therefore anyone can use DP to get good; practice dominates talent."
Ericsson spends the entire introduction of Peak arguing this. When Ericsson passed, David Epstein wrote a beautiful eulogy but referenced his being a lifelong proponent of the ‘talent is overrated’ camp, which frustrated him and other expertise researchers to no end.
Now you may say that DP has nothing to say on talent, but then you have to grapple with the man making the argument in DECADES of publications — both academic and popular! If the man who INVENTED the theory sees the theory as a WAY TO ADVANCE his views on talent, then … I don’t know, maybe one should take the man at his word?
“Oh, but his views have NOTHING to do with the actual theory of DP.” My man, if you’re talking to anyone who has ACTUALLY read the DP work, you need to address this, because they’re going to stumble into it. Like, I don’t know, in the INTRODUCTION CHAPTER OF THE POPSCI BOOK ON DP.
Anyway, strike two for reading comprehension problems. But it gets worse …