Docs are ROCs: A simple fix for a methodologically indefensible practice in medical AI studies.

Widely used methods to compare doctors to #AI models systematically underestimate doctors, making the AI look better than it is! We propose a solution.

lukeoakdenrayner.wordpress.com/2020/12/08/doc…

1/7
The most common method to estimate average human performance in #medical AI is to average sensitivity and specificity as if they are independent. They aren't though - they are inversely correlated on a curve.

The average points will *always* be inside the curve.

2/7
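To see the geometry concretely, here is a minimal simulation of the problem (my illustration, not code from the paper; the binormal ROC model and all numbers are assumptions): readers who all sit on the same underlying ROC curve but use different thresholds, whose averaged sensitivity/specificity point lands strictly inside the curve.

```python
# Sketch: readers on one underlying (binormal) ROC curve, different thresholds.
# The naive average of their (sensitivity, specificity) points falls below the curve.
import numpy as np
from scipy.stats import norm

auc_target = 0.85                                # hypothetical underlying AUC
mu = norm.ppf(auc_target) * np.sqrt(2)           # separation of diseased vs healthy scores

rng = np.random.default_rng(0)
thresholds = rng.normal(mu / 2, 0.8, size=20)    # 20 readers with varied operating points

sens = 1 - norm.cdf(thresholds, loc=mu)          # true positive rate per reader
spec = norm.cdf(thresholds, loc=0.0)             # true negative rate per reader

avg_sens, avg_spec = sens.mean(), spec.mean()
curve_sens = 1 - norm.cdf(norm.ppf(avg_spec), loc=mu)   # what the curve achieves at that specificity

print(f"averaged reader point: sens={avg_sens:.3f}, spec={avg_spec:.3f}")
print(f"curve at same spec:    sens={curve_sens:.3f}  (>= the average for any concave ROC curve)")
```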
The only current alternative is to force doctors to rate images using confidence scores. While this works well in the few tasks where such scales are used in clinical practice, what does it mean to say you are 6/10 confident that there is a lung nodule?

3/7
Most clinical tasks have 2 (or 3) decision options.

Treat or don't. Biopsy or not.

Forcing doctors to do things that aren't part of their clinical practice is a terrible way to test their performance. We think if a task is binary, test the doctors that way.

4/7
So we suggest a simple off-the-shelf method: SROC analysis. Widely used in the meta-analysis of diagnostic accuracy, SROC is a well understood and validated way to summarise performance across diagnostic experiments.

For AI-human comparisons, each reader is an experiment.

5/7
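As a rough sketch of what this looks like, here is the classic Moses-Littenberg linear-regression form of SROC fitted to per-reader operating points (the reader numbers below are made up, and the paper may well favour a hierarchical SROC model; this only illustrates the idea).

```python
# Sketch of a Moses-Littenberg SROC fit, treating each human reader as one
# diagnostic-accuracy "study". Hierarchical (HSROC/bivariate) models are the
# modern meta-analysis standard; this simple version just shows the idea.
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical per-reader operating points (sensitivity, specificity).
sens = np.array([0.62, 0.70, 0.55, 0.81, 0.74, 0.66])
spec = np.array([0.90, 0.84, 0.93, 0.72, 0.80, 0.88])

tpr, fpr = sens, 1 - spec
D = logit(tpr) - logit(fpr)        # diagnostic log odds ratio per reader
S = logit(tpr) + logit(fpr)        # proxy for each reader's threshold

b, a = np.polyfit(S, D, deg=1)     # ordinary least squares fit of D = a + b * S

# The fitted SROC curve: sensitivity as a function of false positive rate.
fpr_grid = np.linspace(0.01, 0.99, 99)
sroc_sens = expit((a + (1 + b) * logit(fpr_grid)) / (1 - b))

print(f"fitted SROC: a={a:.2f}, b={b:.2f}")
print(f"summary sensitivity at 10% FPR: {sroc_sens[9]:.3f}")
```

The AI model's ROC curve (or its single operating point) can then be compared against this summary curve rather than against an averaged human point.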
We show how it works by re-evaluating several famous medical AI papers, for example Esteva et al on melanoma (below).

We think this is something everyone can do, and it will improve the quality of reporting for AI vs human medical studies.

Check out the blog for more details.

6/7
As a quick final note: this doesn't only apply to medical AI studies. We often use similar methods in the radiology literature when we try to determine the accuracy of a test. The SROC approach applies equally well in normal diagnostic research.

7/7
PS: better mention @PalmerLyle, who coauthored the paper with me, had the original idea, and inspired my favourite self-made gif ever.

More from @DrLukeOR

19 Aug
Alright, let's do this one last time. Predictions vs probabilities. What should we give doctors when we use #AI / #ML models for decision making or decision support?

#epitwitter

1/21
First, we need to ask: is there a difference?

This is a weird question, right? Of course there is! One is a categorical class prediction, the other is a continuous variable. Stats 101, amirite?

Well, no.

2/21
Let's set out the two ways that probabilities are supposed to differ from class predictions.

1) they are continuous, not categorical
2) they are probabilities, meaning the numbers reflect some truth about a patient group and are not arbitrary

Weeeeell...

3/21
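For point 2, the standard empirical check is calibration: among patients given similar scores, does the observed event rate match the score? A minimal sketch with synthetic data (calibration_curve is a standard scikit-learn helper; nothing here comes from the thread itself):

```python
# Calibration check: do the claimed "probabilities" match observed event rates?
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
y_prob = rng.uniform(0, 1, size=5000)       # the model's claimed probabilities
y_true = rng.binomial(1, y_prob ** 1.5)     # outcomes drawn from a different curve (miscalibrated on purpose)

obs_rate, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for pred, obs in zip(mean_pred, obs_rate):
    print(f"predicted {pred:.2f} -> observed {obs:.2f}")
# If the scores were probabilities in sense 2, the two columns would match.
```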
Read 23 tweets
28 Jul
This discussion was getting long, so I thought I'd lay out my thoughts on a common argument: should models produce probabilities or decisions? I.e. a 32% chance of cancer vs "do a biopsy".

I favour the latter because, IMO, it is both more useful and... more honest:

1/13
The argument against using a threshold to determine an action, at a basic level, seems to be:

1) you shouldn't discard information by turning a range of probabilities into a binary
2) probabilities are more useful at the clinical coalface

2/13
Re: 1.

No model discards information. The continuous output score always exists. It is how you make use of that information at point of care that "changes".

I use airquotes around "changes", because this is a ... false dichotomy 😆

3/13
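In code, the point is almost trivial: the decision is a thin policy layer sitting on top of the score, and the score is still there for anyone who wants it (the names and the 0.32 threshold below are mine, purely for illustration):

```python
# The continuous score is never discarded; a decision is just a policy applied to it.
from dataclasses import dataclass

@dataclass
class BiopsyRecommendation:
    score: float            # continuous model output, still available downstream
    threshold: float        # operating point chosen for this clinical context
    recommend_biopsy: bool  # the binary decision handed to the clinician

def decide(score: float, threshold: float = 0.32) -> BiopsyRecommendation:
    """Turn a continuous cancer score into a biopsy recommendation."""
    return BiopsyRecommendation(score, threshold, score >= threshold)

print(decide(0.45))   # the recommendation carries both the decision and the score
```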
Read 14 tweets
3 Mar
Great work showing that a good AI system doesn't always help doctors.

Echoes the decades of experience with radCAD: when the system is wrong, it biases the doctor and makes them *worse* (OR 0.33!) at diagnosis.

It is *never* as simple as AI+doctor is better than doctor alone.
I personally suspect the biggest problem is automation bias, which is where the human over-relies on the model output.

Similar to self-driving cars, where jumping to complete automation appears to be safer than partial automation.
But interestingly (and perhaps counter-intuitively) this could also mean that "blind" ensembling (where the human gets no AI input, and the human and AI opinions are combined algorithmically) might be better than showing the doctor what the AI thinks.
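As a purely hypothetical sketch of what "blind" ensembling could mean (not a validated scheme, and the weighting is an arbitrary placeholder): the doctor reads the case without seeing the model output, and the two opinions are pooled afterwards.

```python
# Toy "blind" ensemble: the doctor never sees the AI output; the two opinions
# are pooled afterwards on the log-odds scale. Weights are arbitrary placeholders.
import math

def combine(doctor_prob: float, ai_prob: float, w_doctor: float = 0.5) -> float:
    """Weighted average of the two independent opinions in log-odds space."""
    logit = lambda p: math.log(p / (1 - p))
    z = w_doctor * logit(doctor_prob) + (1 - w_doctor) * logit(ai_prob)
    return 1 / (1 + math.exp(-z))

print(round(combine(doctor_prob=0.20, ai_prob=0.70), 3))   # pooled probability
```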
Read 6 tweets
26 Nov 19
#Medical #AI researchers: badly performed/described cross-validation is the most common reason I recommend major revisions as a reviewer.

CV can be used to tune models and to estimate performance, but not on the same data. See this diagram for doing both.

h/t 4 pic @weina_jin
@weina_jin The weird thing about CV in AI is that you don't actually end up with a single model. You end up with k different models and sets of hyperparameters.

It allows an estimate of generalisation for a *group* of models, but that is still a step removed from a deployable system.
@weina_jin For a more detailed explanation, see the "Nested cross-validation for model assessment" section of: ncbi.nlm.nih.gov/pmc/articles/P…

and here is the blog post from @weina_jin that reminded me to tweet about this topic weina.me/nested-cross-v…
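For reference, a bare-bones nested cross-validation sketch in scikit-learn (the estimator, parameter grid and data are placeholders): the inner loop tunes hyperparameters, the outer loop estimates performance on folds the tuning never touched.

```python
# Nested CV: inner loop tunes hyperparameters, outer loop estimates generalisation
# on data the tuning never saw. Estimator, grid and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)

tuner = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=inner_cv,                 # tuning happens only inside each outer training fold
    scoring="roc_auc",
)

# Each outer fold re-runs the whole tuning procedure, so the outer score estimates
# the *procedure*, not any single deployable model - exactly the point above.
outer_scores = cross_val_score(tuner, X, y, cv=outer_cv, scoring="roc_auc")
print(outer_scores.mean(), outer_scores.std())
```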
Read 5 tweets
10 Sep 19
1/ While this will play well (and get cited a lot) among the anti-#deeplearning holdouts, I was left a bit underwhelmed. I wanted to find some interesting edge cases where DL is not working (so we can work out solutions), but instead got a set of pretty unreasonable comparisons
2/ The deep learning models are tiny (4 conv layers), with the justification that this works for MNIST. Everything works for MNIST! Linear regression works for MNIST!

xiaoliangbai.com/2017/02/01/ten…

We know that on complex images, deeper and more complex models are vastly better and overfit less!
3/ The linear and non-deep models are not "apples to apples" either though. This isn't deep learning vs simple models, it is deep learning vs incredibly complex feature engineering built up over decades of research.
Read 14 tweets
18 Dec 18
Well, here is the six-month follow-up on the @Annals_Oncology paper by Haenssle et al, "Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists."

The paper claims "Most dermatologists were outperformed by the CNN", a bold statement. The relevant part of the paper is pictured.
I raised several concerns in those tweets:

1) they compared two different metrics (ROC-AUC vs ROC area) as if they were the same
2) they used average human performance
3) they seemed to cheat when picking an operating point for the model

Each of these biases the comparison in favour of the model.
Read 18 tweets
