Your rapid test is positive. Does that mean you have covid?
Here's the math you need to figure it out. A thread. 🧡
Let's define some terms that you might have heard of.
A "true positive" means the test result was positive and the person tested does indeed have covid.
A "false positive" means the test was positive but the person doesn't have covid.
A "true negative" means the test was negative and the person tested doesn't have covid.
A "false negative" means the test was negative but the person does have covid.
Got it? Good!
The "sensitivity" of a test is the proportion of infected people who test positive.

The "specificity" of a test is the proportion of uninfected people who test negative.
Finally, the "prevalence" is the proportion of people who are infected.
None of those numbers are really what we want to know though.
What we actually care about is called the "positive predictive value" of the test. It's the proportion of people who *test positive* who actually have the disease.
When you test a lot of people, positive results come from two sources: false positives and true positives.

The number of false positives depends on how many people don't have covid. For instance, if everyone were infected, a false positive would be logically impossible.
Similarly, the number of true positives depends on how many people actually have covid. If nobody has covid, you logically can't have a true positive either.
It takes a bit of thinking to see this, but the proportion of true positives is just the sensitivity times the prevalence, and the proportion of false positives is (1 − specificity) times (1 − prevalence), where (1 − prevalence) is the proportion of uninfected people.
Putting everything together, the probability that you have covid given that you tested positive (the positive predictive value) is:

PPV = (sensitivity × prevalence) / [sensitivity × prevalence + (1 − specificity) × (1 − prevalence)]
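This calculation is easy to turn into code. Here's a minimal Python sketch of the PPV calculation (the example numbers are hypothetical, not from any specific test):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(infected | positive test)."""
    true_pos = sensitivity * prevalence                # infected people who test positive
    false_pos = (1 - specificity) * (1 - prevalence)   # uninfected people who test positive
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 98% specific, at 2% prevalence
print(round(ppv(0.90, 0.98, 0.02), 3))  # -> 0.479
```

Notice that even with a pretty accurate test, a positive result at 2% prevalence is more likely to be false than true.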
This idea is a little counterintuitive. Sensitivity and specificity are fixed properties of the test, but they aren't really what we want to know.
Instead, we want to know the probability that you have covid after getting a positive test result.
That number is related to the number of people who actually have covid in the first place.

The more likely people around you are to have covid, the more confident you should be that you have covid if you get a positive test.
Let me repeat that: a positive test result is a BETTER predictor when more people have covid. As covid surges due to omicron, positive rapid tests are actually getting *more* believable.

OK. Let's try using the PPV formula with some real numbers.
First, we need some actual test sensitivities and specificities. I got mine from a very informative NYT article by @helloellenlee and @tracyvence, which you can find here: nytimes.com/wirecutter/rev…
The Abbott BinaxNOW test is the most common one where I live, so I'll use it as an example. If you have a different test, feel free to plug your own sensitivity and specificity into the formula. Using the BinaxNOW numbers, I got...
Look at how the positive predictive value of the test goes up as covid becomes more common. It crosses 50% (more likely to be a true positive than not) at about 1.7% prevalence.
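If you want to reproduce a table like this yourself, here's a short Python sketch. The BinaxNOW figures below (~84.6% sensitivity, ~98.5% specificity) are commonly cited ones that are consistent with the ~1.7% crossover, but treat them as assumptions and check the linked article for exact values:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(infected | positive test)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# Assumed BinaxNOW-style figures -- verify against your own test's documentation
SENS, SPEC = 0.846, 0.985

for prev in [0.001, 0.005, 0.017, 0.056, 0.10]:
    print(f"prevalence {prev:6.1%} -> PPV {ppv(SENS, SPEC, prev):6.1%}")
```

Under these assumptions, a positive BinaxNOW at 5.6% prevalence is a true positive about 77% of the time.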
If you don't know prevalence, you can use test positivity for your area instead. In my state MA, it's around 5.6%. Here's a CDC link to test positivity rates by state and county for the US: covid.cdc.gov/covid-data-tra…
Test positivity probably overestimates prevalence for the overall population, but since you're concerned you might have covid, you're probably in a similar risk category to the people being tested at your local healthcare provider.
Hope this thread gives you some tools to help you think about any positive tests that may come up. Happy Holidays and may all your tests be negative (or at the very least false positives)!
Disclaimer: This thread is not official advice. I'm not an MD. I do have a graduate degree in biostatistics but I'm in no way attempting to give you medical advice or contradict the advice of your local public health authority.


