Based on S-Gene Target Failure (SGTF), we estimated #Omicron growth rate across RSA provinces. Gauteng has the clearest signal, and indicates growth of 0.21 (0.14, 0.27)/day, corresponding to a doubling time of 3.3 (2.5, 4.6) days.
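The doubling time quoted above follows directly from the exponential growth rate via T_d = ln(2)/r; a minimal check (the growth rates are the point estimate and CI bounds from the thread; small rounding differences vs the quoted CI are expected):

```python
import math

# Under exponential growth, cases(t) ∝ exp(r * t), so the
# doubling time is T_d = ln(2) / r.
def doubling_time(r):
    """Doubling time in days for growth rate r per day."""
    return math.log(2) / r

for r in (0.21, 0.27, 0.14):  # Gauteng point estimate & 95% CI bounds
    print(f"r = {r}/day -> doubling time ≈ {doubling_time(r):.1f} days")
```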
From that, we estimated relative effective reproductive #s for #Omicron & #Delta. In Gauteng, the ratio is ~2.4 (2.0, 2.7), assuming #Omicron has the same typical time between generations as #Delta.
[n.b. this corrects fig from pres.; no change to other results] #NotYetPeerReviewed
Using next generation matrix approx of Gauteng, incorporating accumulated immunity due to vaccination and infection, we can constrain the mix of transmissibility & immune escape for #Omicron, under various plausible scenarios.
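To see how a next-generation matrix constrains the transmissibility/escape mix, here is a toy two-group version (naive vs previously immune) with proportionate mixing; every parameter value is a hypothetical placeholder, not a calibrated estimate:

```python
import math

# Toy two-group next-generation matrix: K[i][j] = expected infections
# in group i caused by one case in group j. With proportionate mixing,
# K[i][j] = sigma * R0 * suscept[i] * frac[i]. All values hypothetical.
R0 = 2.5           # assumed wild-type reproduction number
FRAC_IMMUNE = 0.6  # assumed fraction with infection/vaccine immunity
ESCAPE = 0.4       # assumed fraction of that immunity the variant evades
SIGMA = 1.3        # assumed transmissibility multiplier of the variant

suscept = [1.0, ESCAPE]               # naive fully susceptible
frac = [1 - FRAC_IMMUNE, FRAC_IMMUNE]
K = [[SIGMA * R0 * suscept[i] * frac[i] for _ in range(2)]
     for i in range(2)]

# Dominant eigenvalue of the 2x2 matrix (quadratic formula) = effective R.
tr = K[0][0] + K[1][1]
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
Re = (tr + math.sqrt(tr * tr - 4 * det)) / 2
print(f"Effective R for the variant ≈ {Re:.2f}")
```

Holding the observed effective R fixed, larger SIGMA trades off against smaller ESCAPE, which is the constraint the tweet describes.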
Now peer reviewed at @IntJEpidemiol: challenges (& a solution) for the test-negative study design when data are gathered during a public health response in a population with clustered vaccine uptake (work w/ @TJHladish, W. John Edmunds, & @rozeggo)
Briefly, this design is biased when mixing different testing data streams, e.g. tests for sick people & tests for *contacts* of sick people. That bias is exacerbated when vaccine coverage is homophilous, i.e. if you're vaccinated, your contacts likely are too & vice versa.
This work focuses on Ebola outbreaks, where e.g. vaccine coverage might be logistically constrained to particular areas.
But, it's likely applicable to COVID-19: lots of testing, but limited documentation of why, and pro/anti-vax social grouping (& age-based, during rollout).
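A toy illustration of the pooling bias: in each testing stream below, the vaccinated-vs-unvaccinated odds ratio is exactly 0.5 (true VE = 50%), but because the streams differ in vaccine coverage and positivity, the pooled odds ratio drifts away from 0.5. The counts are invented for illustration:

```python
# Test-negative design: VE is estimated as 1 - OR, where OR compares
# vaccination odds in test-positives (cases) vs test-negatives (controls).
def odds_ratio(vacc_cases, vacc_ctrls, unvacc_cases, unvacc_ctrls):
    return (vacc_cases / vacc_ctrls) / (unvacc_cases / unvacc_ctrls)

# (cases, controls) per vaccination group; both streams have OR = 0.5.
passive = dict(vacc=(50, 100), unvacc=(100, 100))  # symptomatic testing
active = dict(vacc=(10, 200), unvacc=(10, 100))    # contact tracing

for name, s in [("passive", passive), ("active", active)]:
    print(name, odds_ratio(*s["vacc"], *s["unvacc"]))  # 0.5 each

pooled = odds_ratio(50 + 10, 100 + 200, 100 + 10, 100 + 100)
print("pooled OR:", round(pooled, 3))  # ≈ 0.364 -> apparent VE ≈ 64%, not 50%
```

This is a Simpson's-paradox-style construction, not the paper's actual data; homophilous coverage makes stream composition differ by vaccination status, which is what drives the divergence.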
We calibrated a model used in a lot of @cmmid_lshtm COVID-19 work to the South African outbreak & interventions. In this model, to explain the increasing epidemic, 501Y.V2 needs to be either more transmissible or evading cross-protection.
NB: NOT PEER REVIEWED
If prior infection fully cross-protects against 501Y.V2, it's roughly 50% more transmissible. If it has the same transmissibility, then prior infection confers only ~80% protection against 501Y.V2.
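The two scenarios are endpoints of a trade-off curve: any (relative transmissibility, cross-protection) pair producing the same effective R is consistent with the observed epidemic. A sketch of that curve; all numbers are placeholders, not the calibrated estimates from the report:

```python
# Re = sigma * R0 * ((1 - p) + p * (1 - x)), where p is the prior attack
# rate and x the cross-protection from prior infection. For a fixed
# observed Re, solve for the transmissibility multiplier sigma given x.
# R0, p, Re_obs are hypothetical placeholders.
R0, p, Re_obs = 2.5, 0.4, 1.8

def sigma_needed(x):
    """Relative transmissibility required for cross-protection x."""
    s_eff = (1 - p) + p * (1 - x)   # population-average susceptibility
    return Re_obs / (R0 * s_eff)

for x in (1.0, 0.8, 0.5, 0.0):
    print(f"cross-protection {x:.0%} -> "
          f"relative transmissibility {sigma_needed(x):.2f}")
```

Less cross-protection requires less extra transmissibility to explain the same growth, which is exactly the pairing of scenarios in the tweet.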
Submitted (finally) revisions to this analysis of the test-negative study design in the presence of public health measures. Some new thoughts, in light of the pandemic!
Work w/ @rozeggo, @TJHladish, & John Edmunds.
When criteria for testing vary systematically, the design can be biased. In outbreak & pandemic response, testing happens passively (e.g. you're sick => seek care => get tested) & actively (e.g. co-worker is test+ => contact-tracing => you get tested).
NB: NOT PEER REVIEWED
These routes represent different exposure risks, and thus different thresholds for testing: in the passive route, generic risk means testing is conditional on symptoms; in the active route, high exposure risk means unconditional testing (e.g. COVID-19 TTI protocols).
We forecast dates that African countries would *report* 1K and 10K cases. We used very sparse data (first 25 or fewer cases prior to 23 March), w/ deliberately low-detail method (branching processes), assuming global epi estimates & discounting any potential interventions.
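A minimal discrete-generation branching process in the spirit of those forecasts: each case produces Poisson(R) offspring, and we count generations until cumulative *reported* cases pass 1K. R, the reporting fraction, and the generation interval below are illustrative placeholders, not the estimates used in the report:

```python
import math
import random

random.seed(1)
R, GEN_DAYS, REPORT_FRAC = 2.0, 5.0, 0.5  # hypothetical placeholders

def poisson(lam):
    """Knuth's Poisson sampler; fine for small lam."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while prod > L:
        k += 1
        prod *= random.random()
    return k - 1

cases, cumulative, gen = 5, 5, 0   # seed the process with 5 cases
while cumulative * REPORT_FRAC < 1000 and cases > 0:
    cases = sum(poisson(R) for _ in range(cases))  # next generation
    cumulative += cases
    gen += 1

print(f"~1K reported cases reached after ≈ {gen * GEN_DAYS:.0f} days")
```

The real analysis ran many stochastic realisations with offspring distributions and generation intervals from global epi estimates; this single trajectory just shows the mechanism.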
That very simple approach worked well for countries that couldn't (or didn't) respond to early warnings like ours. My previous tweets about this report are from 27 March; since then 12 countries have reported >1K cases: who.int/docs/default-s…