Most of us think that applied knowledge consists of learning generalisable principles and THEN looking for places to apply them.
In this view, cases are simply examples of the principle in action.
But check this out:
The source is Spiro et al.'s Cognitive Flexibility Theory, and most of the examples in the paper are about medical education. (A highly applied field, albeit one dealing with a messy, complex biological system — but at least with some settled science!)
Now consider how this might apply when talking about business education.
Business is messier — there isn’t ‘settled science’.
So there’s probably more to be said for reading messy business biographies + the ‘case method’ over imbibing contextless business frameworks.
Why? The argument the authors make is that at the higher levels of expertise, you start to grapple with the fact that everything is connected to every other damn thing.
Teaching concepts atomically hinders the student’s ability to apply them in real world cases.
So the next question is obviously: what do you do when you’re studying cases?
The authors suggest something surprising to me: you mark up all the possible concepts that are instantiated in each case (and here you might need an expert), and then link each case to all OTHER cases that instantiate the same concepts.
This forces the learner to grapple with the real complexity of reality, instead of learning just the clean simple abstractions that frameworks seem to offer.
And the reality of medicine (and business) is that everything is messier than you think.
A couple of follow-up thoughts: first, the authors write about a learning system they developed called Cardioworld Explorer, which means there are probably some empirical results I can look up.
I’m planning to dig into that later.
Second, this DOES sound like the ‘backlinking’ and ‘complex shared knowledge networks’ that the tools for thought people keep harping about, doesn’t it?
Except the authors here focus on the cognitive science of learning, not the trappings of the tool itself.
The important thing to focus on seems to be:
1. You encode a multi-dimensional set of concepts for each case.
2. These concepts link cases together.
3. You are required to read through the messiness of each case, which is described in prose. (Though snippets may be recombined.)
More importantly, the pedagogical goal:
You want the system to expose you to as MANY possible instantiations of the concept as possible.
This probably requires somebody with expertise to come in and link things for you, though — they are likely able to spot cues that you can’t.
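To make the shape of such a system concrete, here's a minimal sketch in Python. This is my own illustration, not the authors' Cardioworld Explorer, and every name in it is hypothetical: cases stay as full prose, an expert tags each one with the concepts it instantiates, and those tags are what link cases to one another.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """One case study, kept as messy prose rather than a tidy summary."""
    title: str
    prose: str
    concepts: set[str] = field(default_factory=set)  # expert-assigned concept tags

class CaseLibrary:
    """Links cases to one another through the concepts they instantiate."""

    def __init__(self) -> None:
        self.cases: list[Case] = []

    def add(self, case: Case) -> None:
        self.cases.append(case)

    def instantiations(self, concept: str) -> list[Case]:
        """Every case in the library that instantiates a given concept."""
        return [c for c in self.cases if concept in c.concepts]

    def related(self, case: Case) -> list[Case]:
        """Other cases sharing at least one concept with this case."""
        return [c for c in self.cases if c is not case and c.concepts & case.concepts]

# Usage: an expert tags the cases; the learner then traverses concept by
# concept, reading each linked case in full prose.
library = CaseLibrary()
library.add(Case("Turnaround at Firm A", "… full case prose …", {"operating leverage", "incentives"}))
library.add(Case("Collapse of Firm B", "… full case prose …", {"incentives", "principal-agent problems"}))
for case in library.instantiations("incentives"):
    print(case.title)
```

The `instantiations` view is the pedagogical goal above (see as many instantiations of a concept as possible); `related` is the cross-case linking that expert tagging makes possible.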
Actually my language here is problematic — I say things like ‘instantiations’ as if the concept is more important than the example.
But the point the authors make is that principles in messy domains ONLY make sense through cases.
Expert practitioners reason by comparison to many other cases, recombining bits of prior cases in their heads.
Principles only make sense when expressed through cases.
So you can’t ‘teach the principle first’; you always have to teach the cases together.
Huge caveat: everything in this paper/thread is about learning in ill-structured domains, where 'ill-structured' means a domain where no universally generalisable principle may be extracted from the average case (think less math and more business, investing, or medicine).
I want to call out an example of some remarkable thinking that I've had the privilege of observing up close.
About 2 years ago, @vaughn_tan started a project to come up with better thinking around 'uncertainty'. This MIGHT be important to business! MIGHT! But I was unconvinced.
Vaughn had noticed that our collective ability to deal with uncertainty was compromised by bad language. Because we do not have good language for uncertainty, we are forced to borrow words and concepts from risk management.
But this is bad: risk is VERY different from uncertainty!
I was in good company in my scepticism, though. Vaughn's friend, the notable VC Jerry Neumann, told him that he was sceptical Vaughn's project would be very useful.
Neumann argued that it wasn't important to know what types of uncertainty exist — merely how to use it.
I once took on an intern because she wanted to see how I approached 'startup things'. At the end of the summer, she was surprised that I didn't have a set of hypotheses to test.
"Doesn't this go against the data-driven approach you talked about?" she asked.
I didn't have the language for it then, but I think I do now.
When an initiative / product / project is too new, there is too much uncertainty to form useful hypotheses.
Instead, what you want to do is to just "throw shit at the wall and see what sticks."
This sounds woefully inefficient, but it's not, not really. A slightly more palatable frame for this is "take action to generate information."
But what kind of information?
Actually I was looking for answers to the following four questions:
A gentle reminder that if you want to speed up the development of expert intuition, you will do a lot better if you have an actual mental model of what expert intuition *is*.
The most useful model is the one below:
It gives you more handles on how to improve.
The name of the model is the 'recognition-primed decision making' model, or RPD.
The basic idea is simple: when an expert looks at a situation, they generate four things automatically:
1. Cues
2. Expectancies
3. Possible goals
4. An action script.
You can target each.
For instance, if you're a software engineer and you want to learn from the tacit knowledge of the senior programmers around you, ask the following (there's a small sketch of this after the list):
- What cues did you notice?
- What were your expectancies?
- What goals were you considering?
- What was your action script?
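Here's a tiny, hypothetical sketch of the same idea in Python. The class and the question list are my own framing, not Klein's; the point is just that the four RPD elements give you concrete things to ask about.

```python
from dataclasses import dataclass

@dataclass
class RPDAssessment:
    """The four things the RPD model says an expert generates on seeing a situation."""
    cues: list[str]           # what they noticed
    expectancies: list[str]   # what they expected to happen next
    goals: list[str]          # the plausible goals they were weighing
    action_script: list[str]  # the first workable course of action

def debrief_questions(situation: str) -> list[str]:
    """One question per RPD element, for debriefing a senior colleague."""
    return [
        f"When you looked at {situation}, what cues did you notice?",
        f"What did you expect to happen next with {situation}?",
        "What goals were you weighing?",
        f"What was your action script for {situation}: what did you do first, and why?",
    ]

# Usage:
for question in debrief_questions("that flaky deployment"):
    print(question)
```

None of this needs to live in code, of course; the point is that "ask about cues, expectancies, goals, and the action script" is a checklist you can actually run.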
1. DP (deliberate practice) is a sleight-of-hand research paradigm, and only claims to be the best way to get to expertise in fields with a good history of pedagogical development. (See: The Cambridge Handbook, where they point out that pop stars and jazz musicians become world-class, but not through DP.)
2. Most of us are not in such domains.
3. Therefore we cannot use DP, and tacit knowledge elicitation methods are more appropriate.
The counterargument @justinskycak needs to make is simple: math is a domain with a long history of pedagogical development, therefore DP dominates.
Justin says that “talent is overrated” is not part of the DP argument.
I’m not sure what he’s read from Ericsson that makes him think that.
Hambrick et al. document the MANY instances where Ericsson makes the claim "DP is the gold standard, and therefore anyone can use DP to get good; practice dominates talent."
Ericsson spends the entire introduction of Peak arguing this. When Ericsson passed, David Epstein wrote a beautiful eulogy, but it referenced Ericsson's being a lifelong proponent of the 'talent is overrated' camp, a stance that frustrated Epstein and other expertise researchers to no end.
Now you may say that DP has nothing to say on talent, but then you have to grapple with the man making the argument in DECADES of publications — both academic and popular! If the man who INVENTED the theory sees the theory as a WAY TO ADVANCE his views on talent, then … I don’t know, maybe one should take the man at his word?
“Oh, but his views have NOTHING to do with the actual theory of DP” My man, if you’re talking to anyone who has ACTUALLY read DP work, you need to address this, because they’re going to stumble into it. Like, I don’t know, in the INTRODUCTION CHAPTER OF THE POPSCI BOOK ON DP.
Anyway, strike two for reading comprehension problems. But it gets worse …