To establish some credibility: I built a debate club in high school, which imploded.
I thought, "hmm, this seems like a useful skill to learn".
I then built the NUS Hackers, which has persisted for 8 years now, and remains the best place to hire software engineers in Singapore.
And then I went to Vietnam, built out an engineering office there, and tweaked the departments adjacent to our office. Now, 3 years later, the org has retained 75% of the people I hired, and is still run using many of the same policies/incentives I designed.
To be fair to Bloom, perhaps the incentive design he talks about applies to deal-making. Or perhaps the ideas are meant in an atomised, theoretical way.
But every idea in there will almost certainly trip you up in bad ways if you try to apply it to your org.
The basic tools of org design look like this:
1. You have an accurate model of the people you are dealing with. This is context dependent: salespeople respond to incentives differently from engineers.
2. You have the ability to think in terms of systems.
3. Finally, you have an accurate understanding of how culture works.
The shape of the expertise is like this: you design through a step-by-step unfolding. Over time, you design policies, shape culture, and nudge behaviour through meetings, one-on-ones, and actions.
The most important nuance is to recognise bad behaviours early and nip them in the bud by course-correcting.
It is useless to talk about the cobra effect, or about Goodwin's law, or about skin in the game if you don't have these skills pinned down.
In fact, talking about them is not even necessary.
Because you can model the behaviour of the humans you are dealing with, you understand the system you are modifying, and you understand how to mould culture, none of these issues will come up.
These are just frameworks that sound intelligent but are not useful.
"Wait, but Cedric, Goodwin's law is a thing! Other intelligent people talk about it!"
Yes, but the way you defeat Goodwin's law is not by talking about Goodwin's law. Bloom talks about Amazon in his tweet thread. Amazon actually deals with Goodwin's law quite well.
In Working Backwards, Colin Bryar and Bill Carr dedicated an entire chapter to how they think about metrics.
The number of times they mention Goodwin's Law: 0.
Amazon has a process they call DMAIC (Define, Measure, Analyze, Improve, Control). The book tells the story of the step-by-step unfolding that led them to it.
Bryar and Carr are extremely believable, by the way. They were in the room when the 6-pager was designed, when Amazon's decentralised org was built, and when Amazon's approach to metrics was still being worked out.
DMAIC enables them to sidestep Goodwin's law.
Most people, with no org design background, would read Working Backwards as a 'manual of techniques to apply'.
Org designers read Working Backwards as an accounting of the step-by-step unfolding that LED Amazon to design those systems.
Novice org designers would read stories of misaligned incentives and go "Ha! What a bad idea! Use these frameworks!"
Experienced org designers would read stories of misaligned incentives and ask: "How did they get there and what processes did they try after that incident?"
I spend a lot of time talking about believability. I said that you should only pay attention to ideas from people who:
- Have had at least 3 successes in the domain, and
- Can give a coherent explanation of their approach when probed.
The instant I read Bloom's thread, I was like "ok, this seems ... off."
Every idea was correct.
Every idea sounded intelligent.
Every idea was also quite useless.
And the reason is that the WAY you use the ideas is not the way you might think they should be applied.
The shape of the expertise of org-design is a step-by-step unfolding. Not taking that into account is a novice mistake.
Why this is the case is a different thread for another day.
(The basic idea is that nobody can perfectly predict org response to incentives because orgs + org culture are somewhat complex and dynamic and adaptive. So you need to iterate to see how the system adapts).
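That adaptive dynamic can be sketched in a few lines. Here is a toy model (entirely illustrative — the function and the 2x gaming payoff are made-up assumptions, not anything from Bloom's thread or from Working Backwards): once the measure becomes the target, effort flows toward whatever moves the measure, so the metric rises while the true value falls.

```python
def output(real_effort: float, gaming_effort: float) -> tuple[float, float]:
    """Toy model: true value comes only from real work, but the
    measured metric also counts gamed effort (the proxy is imperfect).
    The 2x multiplier is an arbitrary assumption: gaming pays off
    faster on paper than real work does."""
    true_value = real_effort
    measured_metric = real_effort + 2.0 * gaming_effort
    return true_value, measured_metric

# Before the metric becomes a target: all 10 units of effort are real.
before = output(10, 0)

# After the metric becomes a target: a rational agent shifts some
# effort toward whatever moves the measure fastest.
after = output(4, 6)

print(before, after)
# The measured metric went UP even though true value went DOWN.
assert after[1] > before[1] and after[0] < before[0]
```

The point of the sketch: you cannot predict the gaming strategy (the `gaming_effort` split) in advance, which is exactly why an iterative measure-and-adjust loop beats a one-shot incentive design.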
Anyway: all of this is to say, be careful who you read. Make sure they are believable in the domain. Or you might be led astray by correct ideas that are just not useful, because they've never been put into practice.
The end.
Ahh, god, I meant Goodhart's Law, not Goodwin's law. You would think I would have known better, having summarised Manheim and Garrabrant's "Categorizing Variants of Goodhart's Law" paper in the past!
You want to get good. You want to get good fast. How do you do this?
In 2008 and 2009 the US Department of Defense convened two meetings on this very topic.
Here's what they found. (Hint: the answer is NOT deliberate practice).
2/ First: let's put ourselves in the position of a training program designer.
You are told that you need to get a bunch of novices to a good level of competency in 3 months.
It used to take them a year.
How do you do this?
3/ If you're like most people, you'd probably say "Ok, let's create a skill tree. Let's map out all the skills needed from most basic to most advanced. Then let's design a syllabus, where complex skills build on simpler skills."
We talk about ways programmers harm themselves in their careers, mistakes non-technical people make when dealing with programmers, and what it was like pushing the boundaries of property testing.
Also, possibly the best piece of fiction you'll ever read about software testing (I know, I know, but truly, it's great): archiveofourown.org/works/3673335
1/ One of my most persistent irritations is with the whole 'OH YOU NEED TO DO DELIBERATE PRACTICE' meme.
Ugh, no, perhaps you don't. It depends on your domain. Deliberate practice has problems. Have you even tried?
I've written about this before, but here's a thread.
2/ First: DP is a real theory, and it's one of the greatest contributions to our understanding of expertise.
It is a technical term. It does NOT mean 'practicing deliberately'. We'll define it soon.
My problems with it stem mostly from trying to apply it, and failing miserably.
3/ Ok, let's define DP. To make things a little complicated, DP is tricky to define because K. Anders Ericsson has been inconsistent with definitions throughout his career (see pic, from The Science of Expertise, Hambrick et al.).
1/ I've been reflecting on why I found @LiaDiBello4's extracted mental model of business so compelling.
I mean, my reaction was mostly: "ALL great businesspeople share a common mental model of business? The model is a triad of supply, demand and capital? YES THIS MUST BE RIGHT."
1/ Yesterday I talked about Cognitive Transformation Theory, a learning theory that tells us that how good you are at learning from the real world depends on how good you are at UNLEARNING mental models.
2/ In 1993, Clark Chinn and William Brewer published a famous paper on how science students react to anomalous data — data that clashed with their mental models of the world.
They then drew on the history of science to show how common these reactions are amongst scientists.
3/ It turns out there are basically only 7 ways you can respond to inconvenient data. 6 of them allow you to preserve your existing mental models.
See if any of these are familiar to you, before we go through them in order:
US Military, Naturalistic Decision Making researchers: "in order to accelerate expertise, we need to design our training programs to destroy existing mental models"
Good businesspeople: "how can we distill wisdom from the air?"
Clarification on the 'distill wisdom from the air' bit — that's from Robert Kuok's biography, in reference to the way uneducated Chinese businessmen learn. Mostly by reflecting on experiences and observing widely.
There was a meme some time back on “what is the deliberate practice of your domain?” With this theory of learning, we can say that the question is ill-formed, because DP can only be done in domains with a clear pedagogical development, with a coach who has that pedagogy.