This is one of the most fascinating podcasts I’ve listened to in a while. (h/t @chr_iswong)
Core idea: cancer in the body is a complex adaptive system. Ecology is a complex adaptive system. Why can’t we use ecosystem modeling techniques to model cancer/immune system dynamics?
How is this useful? Well, take an ecological model of pests and pesticides.
Let’s say you’re a farmer. You hate moths. You dump a huge amount of the best pesticides on your field, killing as many moths as possible.
Congratulations: you’ve just selected for resistance.
Do this a couple hundred times, with a dozen different pesticides, and you get the diamondback moth, which is basically resistant to everything.
So: not a good idea.
What you do instead is spray pesticides over 3/4 of your field, and keep the remaining 1/4 pesticide-free. So you cut down the large bulk of your pests, but you leave enough of the non-resistant moths to compete against the resistant moths.
The intuition here is that resistance incurs a cost, so if you let enough non-resistant moths survive, they should outcompete the resistant moths when you have a non-pesticide-filled field.
So what does this have to do with cancer? Well, imagine that the cancer cells are the moths and the drugs you use in chemotherapy are your pesticides.
What if you used the same idea? That is, instead of pumping the body full of chemo until you get resistance, you taper it?
“And what we found was … if you had 3 cycles (of chemotherapy), at that sweet spot, the resistant cells progressively went to extinction.”
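The competition dynamic here is easy to sketch as a toy simulation. This is a minimal, illustrative model, not anything from the podcast: all parameter values are made up, and the only assumptions are the ones stated in the thread, namely that resistance carries a growth cost and that the drug kills only sensitive cells.

```python
# Toy model of the adaptive-therapy idea: sensitive (S) and resistant (R)
# tumour cells compete for a shared carrying capacity K. Resistance carries
# a growth-rate cost (rR < rS), and the drug kills only sensitive cells.
# All parameter values are invented for illustration, not fitted to data.

def simulate(adaptive, steps=300, dt=0.1):
    S, R = 99.0, 1.0        # initial sensitive / resistant populations
    K = 100.0               # shared carrying capacity
    rS, rR = 0.30, 0.24     # resistant cells grow slower: the cost of resistance
    kill = 0.6              # drug-induced death rate for sensitive cells
    treating = True
    for _ in range(steps):
        total = S + R
        if adaptive:
            # Treat only while the tumour is large; pause when it shrinks,
            # so sensitive cells regrow and keep suppressing resistant ones.
            if total > 0.8 * K:
                treating = True
            elif total < 0.4 * K:
                treating = False
        dS = rS * S * (1 - total / K) - (kill * S if treating else 0.0)
        dR = rR * R * (1 - total / K)
        S += dS * dt
        R += dR * dt
    return S, R

for label, adaptive in [("continuous", False), ("adaptive", True)]:
    S, R = simulate(adaptive)
    print(f"{label} therapy: resistant fraction = {R / (S + R):.2f}")
```

In this toy setup, the continuous maximum-dose schedule wipes out the sensitive cells and ends with a tumour that is almost entirely resistant, while the on/off adaptive schedule keeps the resistant fraction much lower over the same horizon.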
Of course, the next logical idea Dr Gatenby had was, hmm, what if you went and looked at extinction models and tried to replicate that?
So of course that’s what he did.
I mean, I think the really interesting thing here is just the modelling approach. Steal techniques from ecological modelling and see what they can tell you about possible treatment approaches.
“In cancer we’re always looking for the magic bullet, the one that will kill the cancer but not the normal cells, but maybe we don’t need a magic bullet, all we need is a series of good bullets.”
Because I’m THAT kind of person, I’m already thinking about how this applies to business.
An ecology of firms that feed on some market opportunity is a very fun mental image to have. What ecological models might apply?
Another day, another funny "I'm building a company to extract tacit knowledge from the heads of your company's experts, will you please take a look" pitch (from someone who has clearly not read anything from the past 30 years of successful military and industrial methods for tacit knowledge elicitation).
If you're serious about this, you shouldn't be reading Commoncog essays.
You should be reading the research papers directly, reaching out to those researchers, asking them questions about past projects, and grappling with the problems they've identified.
You also shouldn't be opining on Twitter. Literally none of those folks are here. (Except maybe @perigean — bless him).
I want to call out an example of some remarkable thinking that I've had the privilege of observing up close.
About 2 years ago, @vaughn_tan started a project to come up with better thinking around 'uncertainty'. This MIGHT be important to business! MIGHT! But I was unconvinced.
Vaughn had noticed that our collective ability to deal with uncertainty was compromised by bad language. Because we do not have good language for uncertainty, we are forced to borrow words and concepts from risk management.
But this is bad: risk is VERY different from uncertainty!
I was in good company in my scepticism, though. Vaughn's friend, the notable VC Jerry Neumann, told him that he was sceptical Vaughn's project would be very useful.
Neumann argued that it wasn't important to know what types of uncertainty exist, merely how to deal with it.
I once had an intern do an internship with me because she wanted to see how I approached 'startup things'. At the end of the summer, she was surprised that I didn't have a set of hypotheses to test.
"Doesn't this go against the data-driven approach you talked about?" she asked.
I didn't have the language for it then, but I think I do now.
When an initiative / product / project is too new, there is too much uncertainty to form useful hypotheses.
Instead, what you want to do is to just "throw shit at the wall and see what sticks."
This sounds woefully inefficient, but it's not, not really. A slightly more palatable frame for this is "take action to generate information."
But what kind of information?
Actually I was looking for answers to the following four questions:
A gentle reminder that if you want to speed up the development of expert intuition, you will do a lot better if you have an actual mental model of what expert intuition *is*.
The most useful model is the one below:
It gives you more handles on how to improve.
The name of the model is the 'recognition primed decision making' model, or RPD.
The basic idea is simple: when an expert looks at a situation, they generate four things automatically:
1. Cues
2. Expectancies
3. Possible goals
4. An action script
You can target each.
For instance, if you're a software engineer and you want to get better from the tacit knowledge of the senior programmers around you, ask:
- What cues did you notice?
- What were your expectancies?
- What goals did you consider?
- What was your action script?