Slovakia (pop 5.5M) is attempting a mass COVID-19 screening campaign using rapid antigen tests. The public health community is going to learn a lot. Here's what I'm looking for... 1/
Slovakia, like the rest of Europe, is experiencing a rapid acceleration of infections & deaths, and is starting to use curfews & lockdowns.
A pilot phase tested 140K people with rapid antigen tests, found 5.5K positives (4%).
They'll test the nation over next 2 weekends! Good idea? 2/
First, there are reasonable critiques of rapid Ag tests related to their sensitivity—do they miss too many infections?—and their specificity—do they falsely tell uninfected people that they're positive?
Re sensitivity: every broken transmission chain is a victory, BUT...
3/
If a COVID+ person gets a false-negative test result, and then changes behavior—drops mask, goes to pub, etc—then the test result may have actually caused new infections!
The big Q: will screening break more transmission chains than it unintentionally creates? (I think so!)
4/
The question above could be addressed through modeling, but it could also be sidestepped via better communication: a negative rapid screening test should not be a passport to 2019. Screening is a way to filter out many (not all) asymptomatic or presymptomatic infections.
5/
In any case, if the Slovakian mass screening works, we should see a huge spike in cases from the screen, followed by a decline or slowing in cases thereafter due to cases averted.
@NoahHaber & ilk will have some ideas on how to do the causal epi properly. (Please, Noah?)
6/
What about false positives? If prevalence is low and specificity is imperfect, it's possible for a large fraction (even a majority) of positives to be false positives.
Unnecessary isolation days from screening? Sounds bad, but not when compared against a counterfactual lockdown.
7/
If false positive screening tests cause 1 in every 2 isolation days to be unnecessary, a lockdown at 1% prevalence causes 99 in 100 isolation days to be unnecessary.
False positive tests are a problem, to be sure, but the counterfactual matters.
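The comparison above can be made concrete with Bayes' rule. A minimal sketch (not from the thread; the sensitivity and specificity values are illustrative assumptions for a rapid antigen test):

```python
# Compare unnecessary isolation days under screening vs. a counterfactual
# lockdown. Prevalence, sensitivity, and specificity are assumed values.

def false_positive_fraction(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are false positives (1 - PPV)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

prev = 0.01              # assumed 1% prevalence
sens, spec = 0.90, 0.99  # illustrative rapid antigen test performance

fp_frac = false_positive_fraction(prev, sens, spec)
print(f"Screening: {fp_frac:.0%} of isolating people are uninfected")

# A lockdown isolates everyone, so the uninfected fraction is 1 - prevalence.
print(f"Lockdown:  {1 - prev:.0%} of isolating people are uninfected")
```

With these assumed numbers, roughly half of screening-triggered isolations are unnecessary, versus 99 in 100 under a lockdown at 1% prevalence, which is the thread's point about counterfactuals.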
8/
I'm looking forward to seeing the outcomes of Slovakia's mass screening using rapid antigen tests.
Some have argued rapid Ag tests could be a disaster, while others argue they could be a silver bullet. The proof will be in the epi curves. Let's see what the data say.
9/9
Most of what we know about viral dynamics during SARS-CoV-2 infections comes from samples taken *after* symptom onset. From symptoms onward, viral loads slowly fade away.
What do viral loads look like between exposure and symptoms? 2/n
In this study, researchers in the NBA bubble recruited players, coaches, vendors, and others to sign up for a longitudinal study with regular COVID testing.
In other words, the researchers ran a classic pick-enroll-screen study in the NBA bubble. 3/n
How does effective viral surveillance change when (1) some people refuse to participate, and (2) sample collection errors lead to lower sensitivity, indep. of a test's limit of detection? Questions raised by @jhuber, @awyllie13 & others after I posted this preprint last week.👇 1/
I love twitter+preprints precisely because of this community. In the updated preprint, we've corrected a couple typos, and created a new supplement, "Adjustments for false negatives and test refusal" which I'll quickly summarize below. 2/ medrxiv.org/content/10.110…
Previously, we estimated the impact of a policy on R by measuring the "infectiousness" removed by testing, relative to no testing. The formula's values correspond to the heights of bars in plots like this one. f0 is the leftmost hatched bar. ftest is the total height of a policy bar. 3/
Preprint: Viral surveillance testing is crucial, but not all surveillance strategies are equal. We modeled the impacts of test frequency, assay limit of detection, test turnaround time, measuring impact on individuals & epidemics. Here's what we found. 1/ medrxiv.org/content/10.110…
The first finding is that limit of detection matters less than we thought. There is only a short (~1/2 day) window during the exponential growth phase when qPCR is superior. We showed this in a simple viral load model, but any model with exp growth between Ct40 and Ct33 would confirm. 2/
So only a high-frequency testing scheme will take advantage of that short window. However, high-frequency testing schemes will have a high impact on the reproductive number, *regardless* of test LOD. ➡️ Ruling out higher LOD tests for surveillance purposes would be a mistake. 3/
My colleagues and I are formally seeking a retraction of the recently published “Identifying airborne transmission as the dominant route for the spread of COVID-19.” The full text of our letter to the PNAS editorial board can be found here. 1/ metrics.stanford.edu/PNAS%20retract…
It is important that science, especially now, be as rigorous and methodologically sound as possible. However, this paper suffers from numerous and fundamental errors that undermine the foundation of its conclusions. The paper is linked here. 2/ pnas.org/content/early/…
Masks help in the fight against COVID-19. Our call for one study to be retracted should not detract from that important message. Indeed, a recently published meta-analysis showed that mask use (N-95 esp), could result in large risk reduction. 3/ thelancet.com/journals/lance…
Sensitivity & specificity affect the inferences that we can draw from seroprevalence studies & inform the number of samples we need for statistical confidence. To help, we built two calculators. Calculator 1: survey data, se, sp → prevalence posterior. larremorelab.github.io/covid-calculat…
But let's also remember: sensitivity & specificity are *estimated from data*. That means that they, too, need statistical treatment. So for Calculator 2: survey data AND raw assay calibration data → posteriors for prevalence, sensitivity, and specificity. larremorelab.github.io/covid-calculat…
Calculator 2 is important because it shows that the way we calibrate our tests is as important as the survey data we collect. You can incorporate all sources of uncertainty together, learning about prevalence, but also about uncertainty in sensitivity & specificity. 🤔🤓😎
Earlier today, we put out a preprint that asked: how do we design and analyze SARS-CoV-2 seroprevalence surveys? @yhgrad wrote a lovely explainer thread, linked here.
First, the basics. This paper's 1st result is like a statistical inference midterm problem: If you observe n+ positive tests, n- negative tests, and you know the sensitivity/specificity of your test, what is the posterior distribution over the true prevalence? Solution: Bayes' rule. ✅
Now a practical problem: The posterior looks like a binomial posterior, but due to sensitivity/specificity, we end up with incomplete beta functions & small things raised to n+ and n- powers & can't invert the CDF. Solution: take logs and use an accept-reject algorithm to sample.✅ 3/