The news about Pfizer's adolescent trial is excellent. While some debate whether we have enough data to reliably estimate vaccine efficacy in this subgroup, important context is that efficacy was not even the primary outcome of the trial. 1/4 statnews.com/2021/03/31/pfi…
The 12-15 subgroup was comparatively small (an extension of the existing trial), and the focus was to measure safety and immunogenicity, although efficacy data were also collected. Though we don't have full details, the trial was likely not powered for efficacy. 2/4
When we think about bridging a known efficacious vaccine to a new (here, younger) population, the bar for evidence is lower. Clearly we need high-quality safety data. On immune response, adolescents actually showed even higher antibody levels than adults. 3/4
But these data are coupled with our existing understanding that the vaccine works well in adults 16+. So the strong 18 vs. 0 case split serves to confirm/replicate an existing result, and does not have to stand alone as a new result. 4/4
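The arithmetic behind a case split like this is simple. Here is a minimal sketch, with hypothetical arm sizes (the thread doesn't give the subgroup enrollment numbers, so these counts are invented for illustration):

```python
# Sketch: vaccine efficacy as 1 minus the attack rate ratio.
# Arm sizes below are hypothetical, chosen only to illustrate the math.
def vaccine_efficacy(cases_vaccine, cases_placebo, n_vaccine, n_placebo):
    """VE = 1 - (attack rate in vaccine arm / attack rate in placebo arm)."""
    if cases_placebo == 0:
        raise ValueError("cannot estimate VE with zero placebo cases")
    attack_rate_vaccine = cases_vaccine / n_vaccine
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccine / attack_rate_placebo

# With an 18 vs. 0 case split and (hypothetically) equal arms,
# the point estimate is 100%:
print(vaccine_efficacy(0, 18, 1100, 1100))  # → 1.0
```

Of course, a point estimate of 100% from a small subgroup comes with a wide confidence interval, which is exactly why the result leans on the adult data rather than standing alone.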
Addendum: Here is the incredibly complicated clinical trials listing for the Phase 1/2/3 trial. Fingers crossed I am communicating this all accurately. clinicaltrials.gov/ct2/show/NCT04…
As an example that is easier to see, here is the clinical trials listing for Moderna's Teen Cove study. The primary outcomes are related to safety and immunogenicity. Efficacy is defined as a secondary outcome. clinicaltrials.gov/ct2/show/NCT04…
Early on in the pandemic, I tweeted about the need to triangulate results across diverse sources. Vaccine efficacy trials were an exception, being high quality and randomized. With vaccine effectiveness studies based on real-world data, we move back to needing that triangulation. 1/3
An advantage is that we can compare real-world insights against the trials themselves, although we may not have had enough trial data to answer certain questions. We also have a sense of biological plausibility, like that vaccines take time to start working. 2/3
But results from observational studies should not be taken at face value the way trial results are. Our understanding will build over time as results are replicated. We also assess the quality of the study design used (and not only the speed at which it is published). 3/3
There are different ways to measure test positivity that result in modest differences. (Ignore the blue line, which counts only people who have never been tested before; that measure is less useful at this stage of the pandemic.) Tracking trends is more useful than focusing on a raw number.
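As a rough sketch of why the definitions diverge, here are two common ways to compute positivity, with made-up counts (real jurisdictions differ in which denominator they report):

```python
# Illustrative only: two common definitions of test positivity.
# All counts below are hypothetical.
def positivity_per_test(positive_tests, total_tests):
    # Counts every test, including repeat tests of the same person.
    return positive_tests / total_tests

def positivity_per_person(people_positive, people_tested):
    # Counts each person once, however many times they were tested.
    return people_positive / people_tested

# Routine re-testing of (mostly negative) frequent testers can push the
# per-test number below the per-person number:
print(positivity_per_test(500, 10_000))   # → 0.05
print(positivity_per_person(480, 8_000))  # → 0.06
```

The absolute numbers differ, but both series tend to move together, which is why trends are the more reliable signal.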
Out today -- my piece in @nature on a topic I feel very strongly about: the need to coordinate and harmonize observational vaccine studies! If you think it is hard to compare vaccine trials, you haven't seen anything yet.
A thread. 1/6 nature.com/articles/d4158…
With randomized vaccine trials becoming increasingly challenging to conduct, we will rely on observational studies to guide important policy decisions. For example, these studies can help us understand the durability of vaccines or how well they work against new variants. 2/6
We can easily expect hundreds of separate observational vaccine studies, some conducted at only a single site, each using different endpoints, covariates, eligibility criteria, and so on. It will be extraordinarily hard to sort through these differences in meta-analyses. 3/6
THINK LIKE AN EPIDEMIOLOGIST: Lately I have been asked why we are seeing a dramatic turnaround in cases in the US. Is it vaccines? Herd immunity? An artifact due to a drop in testing? Behavior change? Weather?? A few tweets about how I step through this question. 1/6
To start, I look to see whether the drop is an artifact. While testing has dropped somewhat, it's not enough to explain the rapid drop in cases. A drop in testing would also not explain the drop in hospitalizations that is consistent across regions. 2/6 covidtracking.com/data/charts
I then consider the similarity of the drop across locations, looking by subregion and by state. The similar patterns are telling because different places have different vaccination coverage, levels of acquired immunity, and weather. Why a turnaround at a similar time? 3/6
How do we measure how well the flu vaccine works every year? We use an observational study called the "test negative design." A few tweets on how it works, and why it will be a big part of ongoing COVID-19 vaccine evaluation. 1/8
Figure source: Fukushima et al. (2017) Vaccine
In the test negative design, individuals with disease symptoms seek healthcare and testing. If they test positive, they are TEST POSITIVE CASES. If they test negative (and their symptoms are caused by something else), they are TEST NEGATIVE CONTROLS. 2/8
We can then look back and see how many of the test positive cases were vaccinated, and how many of the test negative controls were vaccinated. If the vaccine works well, vaccinated people will be underrepresented among the test positive cases. 3/8
Warning - some in-the-weeds tweets about vaccine efficacy trials, new strains, and decision making under uncertainty. I offer more questions than answers, but hopefully it can generate some discussion... 1/6
For COVID vaccine studies, we can imagine two goals: (1) We aim to measure efficacy precisely (minimize uncertainty, regardless of the true efficacy), or (2) We simplify our goal and try to measure if the vaccine is doing well enough - is efficacy above our success threshold? 2/6
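To make the two goals concrete, here is a rough sketch using a simple log-relative-risk Wald interval (an approximation) and hypothetical trial counts, not taken from any real trial:

```python
import math

# Sketch contrasting the two goals. All counts are hypothetical, and the
# Wald interval on the log relative risk is a textbook approximation.
def ve_with_ci(cases_vax, n_vax, cases_plc, n_plc, z=1.96):
    rr = (cases_vax / n_vax) / (cases_plc / n_plc)
    se = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_plc - 1 / n_plc)
    rr_lo = math.exp(math.log(rr) - z * se)
    rr_hi = math.exp(math.log(rr) + z * se)
    return 1 - rr, 1 - rr_hi, 1 - rr_lo  # VE point estimate, lower, upper

ve, lower, upper = ve_with_ci(8, 15_000, 80, 15_000)
# Goal (1): report the full interval -- how precisely is VE measured?
print(f"VE = {ve:.0%}, 95% CI ({lower:.0%}, {upper:.0%})")
# Goal (2): a simpler question -- does the lower bound clear a
# (hypothetical) 50% success threshold?
print("meets 50% threshold:", lower > 0.50)
```

Goal (2) can be answered with less data than goal (1), which is part of why trials powered for a success threshold may leave precise strain-specific efficacy uncertain.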
This is relevant for discussions about how well vaccines are performing against different variants. For those who are already vaccinated, we want to know how well the vaccine works against a new strain. What is the efficacy, even if it is lower? This is like the first goal. 3/6