2/ David Clarke's review seems to be written by someone with a background in psychology; he points out that Barrett presents her theory as being close to consensus, but that in more scholarly publications she is more reserved and says more work is needed. goodreads.com/review/show/29…
3/ And this question ("Can anyone point me to a review that would indicate how well-received this research is in the professional community?") has attracted good answers, solid enough at least to kickstart a dive into the more scholarly sources. goodreads.com/questions/1446…
4/ So within a few seconds of hitting Goodreads, I know:
a) She edited the most recent edition of 'The Handbook of Emotions', and I should check that.
b) She's more tempered in academic settings (time to hop onto Google Scholar?)
c) My next step: academic reviews of her book.
5/ Of course, Goodreads
a) loads slower than a herd of snails racing through peanut butter,
b) has star ratings that are next to useless, and
c) has reviews that are all over the place.
But it's not too bad for what it is.
I want to call out an example of some remarkable thinking that I've had the privilege of observing up close.
About 2 years ago, @vaughn_tan started a project to come up with better thinking around 'uncertainty'. This MIGHT be important to business! MIGHT! But I was unconvinced.
Vaughn had noticed that our collective ability to deal with uncertainty was compromised by bad language. Because we do not have good language for uncertainty, we are forced to borrow words and concepts from risk management.
But this is bad: risk is VERY different from uncertainty! (With risk, you know the possible outcomes and can estimate their odds; with true uncertainty, you can't.)
I was in good company in my scepticism, though. Vaughn's friend, the notable VC Jerry Neumann, told him that he was sceptical the project would be very useful.
Neumann argued that it wasn't important to know what types of uncertainty exist, merely how to make use of uncertainty.
I once took on an intern because she wanted to see how I approached 'startup things'. At the end of the summer, she was surprised that I didn't have a set of hypotheses to test.
"Doesn't this go against the data-driven approach you talked about?" she asked.
I didn't have the language for it then, but I think I do now.
When an initiative/product/project is too new, there is too much uncertainty to form useful hypotheses.
Instead, what you want to do is just "throw shit at the wall and see what sticks."
This sounds woefully inefficient, but it's not, not really. A slightly more palatable frame for this is "take action to generate information."
But what kind of information?
Actually, I was looking for answers to the following four questions:
A gentle reminder that if you want to speed up the development of expert intuition, you will do a lot better if you have an actual mental model of what expert intuition *is*.
The most useful model is the one below:
It gives you more handles on how to improve.
The model is called 'recognition-primed decision making', or RPD.
The basic idea is simple: when an expert looks at a situation, they generate four things automatically:
1. Cues
2. Expectancies
3. Possible goals
4. An action script
You can target each.
For instance, if you're a software engineer and you want to learn from the tacit knowledge of the senior programmers around you, ask:
- What cues did you notice?
- What were your expectancies?
- What goals did you have in mind?
- What was your action script?
1. DP (deliberate practice) is a sleight-of-hand research paradigm, and only claims to be the best way to get to expertise in fields with a good history of pedagogical development. (See: The Cambridge Handbook of Expertise and Expert Performance, where they point out that pop stars and jazz musicians become world class, but not through DP.)
2. Most of us are not in such domains.
3. Therefore we cannot use DP, and tacit knowledge elicitation methods are more appropriate.
The counterargument @justinskycak needs to make is simple: math is a domain with a long history of pedagogical development, therefore DP dominates.
Justin says that “talent is overrated” is not part of the DP argument.
I’m not sure what he’s read from Ericsson that makes him think that.
Hambrick et al. document the MANY instances where Ericsson makes the claim that "DP is the gold standard, therefore anyone can use DP to get good; practice dominates talent."
Ericsson spends the entire introduction of Peak arguing this. When Ericsson passed, David Epstein wrote a beautiful eulogy, but referenced his being a lifelong proponent of the 'talent is overrated' camp, a position that frustrated Epstein and other expertise researchers to no end.
Now you may say that DP has nothing to say on talent, but then you have to grapple with the man making the argument in DECADES of publications — both academic and popular! If the man who INVENTED the theory sees the theory as a WAY TO ADVANCE his views on talent, then … I don’t know, maybe one should take the man at his word?
“Oh, but his views have NOTHING to do with the actual theory of DP.” My man, if you’re talking to anyone who has ACTUALLY read the DP work, you need to address this, because they’re going to stumble into it. Like, I don’t know, in the INTRODUCTION CHAPTER OF THE POPSCI BOOK ON DP.
Anyway, strike two for reading comprehension problems. But it gets worse …