Very excited to share our updated preprint on pooled testing for SARS-CoV-2 surveillance. This has been a fantastic modeling and lab collaboration with @BrianCleary, @michaelmina_lab and Aviv Regev, and it’s all about our favorite topic: viral loads. 1/16

medrxiv.org/content/10.110…
Highlights:
-PCR sensitivity and efficiency are linked to epidemic dynamics and viral kinetics
-Prevalence estimation using only a few dozen tests, without testing any individual samples
-Simple (by hand) strategies optimized for resource-constrained settings

Full story below. 2/16
We (the world) still need more testing. The number of test kits is still limited in a lot of places, meaning that we are missing a lot of infections, not testing regularly, and are flying blind wrt population prevalence. Pooling has been discussed as part of the solution. 3/16
Pooling is simple – mix samples together and test the pool. If -ve, assume all constituent samples were negative and stop. If +ve, re-test the original samples to find the positives. When most pools test -ve, we use fewer tests than needed to test each individual sample. 4/16
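The two-stage scheme above (often called Dorfman pooling) is easy to sketch in code. This is an illustrative toy with perfect tests, not the paper's pipeline; the pool size, population size, and prevalence below are made-up values:

```python
import random

def dorfman_pool(samples, pool_size):
    """Two-stage (Dorfman) pooling: test each pool once, then retest
    individual samples only in pools that come back positive.
    `samples` is a list of bools (True = positive); returns
    (positives_found, tests_used). Illustrative helper, assumes a
    perfectly sensitive test."""
    tests = 0
    positives = 0
    for i in range(0, len(samples), pool_size):
        pool = samples[i:i + pool_size]
        tests += 1                      # one test for the whole pool
        if any(pool):                   # positive pool: retest each sample
            tests += len(pool)
            positives += sum(pool)
    return positives, tests

random.seed(1)
population = [random.random() < 0.01 for _ in range(960)]  # ~1% prevalence
found, used = dorfman_pool(population, pool_size=8)
print(found, used)  # every positive found, with far fewer than 960 tests
```

At low prevalence most pools are negative and stop at stage one, which is where the efficiency gain comes from.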
But there are a few key concerns with pooling: 1) we might miss low-viral load samples through dilution effects, 2) efficiency changes depending on prevalence, and 3) complicated pooling strategies are more logistical hassle than they’re worth, particularly without robots. 5/16
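Concern (1) is easy to quantify. Diluting one positive sample among (b − 1) negatives cuts its template b-fold, and each 2-fold dilution costs roughly one PCR cycle, so the pooled Ct rises by about log2(b). (The Ct-40 cutoff below is an assumed value for illustration.)

```python
import math

def ct_shift(pool_size):
    """Approximate Ct increase from b-fold dilution when pooling:
    each 2-fold dilution costs ~1 PCR cycle, so the shift is log2(b).
    Samples already near the limit of detection (e.g. an assumed Ct-40
    cutoff) can be pushed past it and missed."""
    return math.log2(pool_size)

for b in (4, 8, 16, 48):
    print(f"pool of {b:2d}: Ct shift ~ +{ct_shift(b):.1f} cycles")
```

Even a 48-sample pool only shifts Ct by ~5.6 cycles, which is why pooling mostly misses samples that were near the detection limit to begin with.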
A lot of papers/preprints have investigated these issues and proposed clever pooling algorithms. But there is a missing component: viral loads vary over the course of infection, between individuals, and over the epidemic*, described brilliantly here: nytimes.com/interactive/20…
6/16
* This is a really interesting point, but a story for a different thread.
To model this, we fit a random-effects viral kinetics model to time-series viral loads. We then used an SEIR model to simulate loads of infections and individual-level viral load curves. This gave us a synthetic population of viral loads to test pooling strategies. 7/16
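The structure of that simulation can be sketched with toy stand-ins: a deterministic SEIR model generates daily new infections, and each infection gets a rise-then-decay log10 viral load curve. All parameters and curve shapes below are illustrative placeholders, not the fitted values or the random-effects kinetics model from the preprint:

```python
def seir_incidence(beta=0.4, sigma=1/3, gamma=1/7, days=250, n=1_000_000):
    """Toy deterministic SEIR; returns daily new infections.
    Illustrative parameters only (R0 = beta/gamma = 2.8 here)."""
    s, e, i = n - 1.0, 1.0, 0.0
    incidence = []
    for _ in range(days):
        new_e = beta * s * i / n   # S -> E
        new_i = sigma * e          # E -> I
        new_r = gamma * i          # I -> R
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        incidence.append(new_i)
    return incidence

def log10_load(days_since_infection, peak_day=5, peak=7.0, decay=0.4):
    """Toy rise-then-decay log10 viral load curve for one infection;
    a crude stand-in for the fitted kinetics model."""
    t = days_since_infection
    if t < 0:
        return 0.0
    if t <= peak_day:
        return peak * t / peak_day          # linear rise to peak
    return max(0.0, peak - decay * (t - peak_day))  # slow decline

inc = seir_incidence()
print(round(max(inc)), round(log10_load(3), 2), round(log10_load(20), 2))
```

Sampling each simulated infection's curve at a test date is what produces a realistic population-level distribution of viral loads, which shifts as the epidemic grows and declines.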
For prevalence estimation, we used a statistical method developed for HIV. You write down a likelihood for the viral load observed in a pool *given* prevalence, and use this method to get a maximum likelihood prevalence estimate based on the viral loads of your tested pools. 8/16
With enough samples (at low prevalence, more samples are needed to capture at least one positive), the method gives accurate estimates of prevalence using <=48 tests. We checked this in the lab – we were able to estimate 1% prevalence amongst ~2000 samples using only 48 tests! 9/16
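To see the shape of the idea, here is the simplest version of that likelihood, using only presence/absence of each pool. This is a simplified stand-in: the paper's likelihood uses the pools' quantitative viral loads, which is what lets so few tests go so far. Pool counts below are made-up numbers:

```python
def mle_prevalence(pos_pools, n_pools, pool_size):
    """Closed-form MLE for prevalence from presence/absence pooled
    results. With P(pool +) = 1 - (1 - p)^b, maximizing the binomial
    likelihood over k positive pools out of m gives
    p_hat = 1 - (1 - k/m)^(1/b). Simplified stand-in for the
    viral-load-based likelihood in the paper."""
    frac_pos = pos_pools / n_pools
    return 1 - (1 - frac_pos) ** (1 / pool_size)

# e.g. 48 pools of 40 samples each (~1920 samples), 16 pools positive
print(round(mle_prevalence(16, 48, 40), 4))  # ~0.01, i.e. ~1% prevalence
```

Note the scale: ~1920 samples characterized with 48 tests, consistent with the lab validation described above.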
For individual testing, we identified pooling strategies that were a) simple to carry out, b) would remain efficient even if prevalence changed due to epidemic dynamics, and c) maximized the number of positive samples identified when testing capacity is limited. 10/16
We repeated this analysis comparing the growth and decline phases of the epidemic (sensitivity is generally lower during epidemic decline, because more samples have low viral loads), and again using simulations based on sputum samples. 11/16
Because we knew the infection status and true viral loads of our entire population, we could look at sensitivity depending on when during an infection someone is sampled. Unsurprisingly, most false negatives come from samples in the tail end of their infection. 12/16
Finally, we tested our theoretical predictions in the lab using discarded nasopharyngeal swabs. Pooling samples of varying viral loads gave results consistent with expected dilution effects, and we showed that simple pooling led to efficiency gains in line with expectation. 13/16
Take home: simple protocols that can stay unchanged for weeks drastically increase the number of +ves identified. Loss of sensitivity plays out as expected with dilution effects – most missed samples are from low-viral load individuals, mostly at the end of their infection. 14/16
Whether to pool or not depends on your question and setting. If test kits are limited, you will likely identify more +ves overall by pooling, and the small sensitivity loss may therefore be tolerable. Setting aside a few tests for prevalence estimation is also a good idea. 15/16
Lower sensitivity may not be tolerable in a clinical setting, so whether to pool or not depends on how the test result will change triaging. But for surveillance and sheer throughput, simple pooling is the way! 16/16
Huge thanks to all those involved in the study: Brendan Blumentstiel, Maegen Harden, Michelle Cipicchio, Jon Bezney, Brooke Simonton, David Hong, @m_senghore, @DocKarim221 and Stacey Gabriel.

Thread by James Hay (@jameshay218).