A new preprint has been posted online assessing potential safety signals for pregnancy loss after mRNA vaccination during pregnancy, using Israeli electronic health record (EHR) data, with first author @joshg99 and senior author @RetsefL (a newly appointed member of CDC's ACIP).
Their primary conclusion was that pregnant women vaccinated for COVID in the second half of the first trimester (wk8-13) had higher observed pregnancy loss rates than expected from a regression model fit to pre-pandemic EHR data.
They also found that pregnant women vaccinated for COVID in the second trimester (wk14-27), and pregnant women vaccinated for flu in wk8-13 or wk14-27, had significantly lower-than-expected observed pregnancy loss rates during the pandemic,
and that women vaccinated for COVID or flu before pregnancy had slightly lower-than-expected pregnancy loss rates. They attribute these lower-than-expected results to residual confounding (i.e., the healthy vaccinee effect, HVE).
The paper is exceptionally well written and introduces a rigorous approach for identifying potential safety signals from EHR data, an active reporting approach that avoids the key limitations of passive reporting systems (like VAERS in the USA and AEFI in Israel): (1) reporting bias and (2) the lack of a control group (poorly understood limitations of these systems that I have harped upon ad nauseam on social media).
However, the paper has some key omissions that limit the ability to carefully interpret the results, including:
1. Failure to investigate and fully account for pandemic-related, calendar-time-varying confounders.
2. Lack of assessment of whether women remaining unvaccinated throughout pregnancy during the pandemic had higher- or lower-than-expected pregnancy loss.
3. No assessment of whether COVID-19 infections later in pregnancy were factors in post-vaccination pregnancy losses, especially for the primary cohorts.
4. Incomplete summary of results of vaccination before pregnancy.
5. Lack of assessment of whether women vaccinated before week 8 of their pregnancy had higher- or lower-than-expected pregnancy loss.
6. Lack of a summary of which type of pregnancy loss outcome (spontaneous abortion, induced abortion, stillbirth) dominated the events for each modeled cohort.
The authors’ observed-expected analysis approach could be readily applied to perform each of these suggested analyses.
Given that the primary vaccinated subgroups driving their conclusions are very small (e.g. the 1st dose wk8-13 cohort comprises 1.9% of pregnancies and 1.8% of pregnancy losses) and are concentrated in specific periods of the pandemic (e.g. ~90% of the 1st dose wk8-13 cohort had a last menstrual period (LMP) between 10/2020 and 1/2021), there is concern about remaining residual confounding in these cohorts from pandemic-related or medical factors. The suggested analyses could shed more light on whether this concern is significant.
The inclusion of these results would provide a more complete and transparent picture of potential COVID vaccine effects on pregnancy loss.
It is not valid to dismiss any results showing lower-than-expected pregnancy loss in vaccinated subgroups as driven by residual confounding (by claiming HVE) without acknowledging that the higher-than-expected pregnancy loss in one small vaccinated subgroup (covid vaccinated wk8-13) could similarly be driven by residual confounding.
This thread will walk through the key details of their study and results, and elaborate on these concerns.
This paper (see link below) sets out to investigate whether there is a potential safety signal for pregnancy loss when pregnant women were vaccinated for COVID-19 during the pandemic.
Their study is based on electronic health record (EHR) data from Maccabi, one of four HMOs in Israel, accounting for 26% of the Israeli population.
As mentioned above, this qualifies as an active reporting study of vaccine safety that is much more rigorous than passive reporting systems (e.g. VAERS in the USA or AEFI in Israel), since the EHR contains events for control groups and does not have the same highly variable reporting bias.
Their endpoint of interest was pregnancy loss, which includes 3 outcomes: 1. Spontaneous abortion 2. Induced abortion (elective or medically-indicated) 3. Stillbirth.
All of their analyses were done on the pooled endpoint, not split out by each outcome.
They mentioned that the EHR lacked evidence for whether induced abortions were elective or medically indicated.
Their primary analysis involved investigation of four vaccination cohorts based on two time periods (second half of first trimester, wk8-13; and second trimester, wk14-27) and 2 vaccination doses (1st dose, 3rd dose):
1. Given 1st dose COVID-19 vaccine during the pandemic while pregnant at gestational age wk8-13.
2. Given 1st dose COVID-19 vaccine during the pandemic while pregnant at gestational age wk14-27.
3. Given 3rd dose COVID-19 vaccine during the pandemic while pregnant at gestational age wk8-13.
4. Given 3rd dose COVID-19 vaccine during the pandemic while pregnant at gestational age wk14-27.
Their analysis considered all pregnancies documented in the EHR with last menstrual period (LMP) dates between March 1, 2020 and February 28, 2022.
Their primary analysis compared the observed event rates in these cohorts with expected event rates computed from a historical pre-pandemic cohort.
Specifically, they built a logistic regression model based on pregnancies in the EHR with LMPs between March 1, 2016 and February 28, 2018, to compute the risk of pregnancy loss from demographic and clinical factors available in the EHR, including age, socioeconomic status, past flu vaccination, comorbidities (0, 1-3, 4+), district of residency, ethno-religious sector, and calendar month of LMP.
They also included gestational week in their model to account for the fact that the probability of pregnancy loss is greatest at early gestational ages and decreases throughout the pregnancy; this allows them to assess the probability of pregnancy loss as a function of current gestational time.
This is a critical and unique aspect of their modeling that gives their approach significant advantages over most other papers studying pregnancy loss.
To compute the “expected” baseline number of pregnancy losses for a particular cohort of individuals, they simply compute the probability of pregnancy loss for each individual in the cohort based on their gestational age and covariates from the EHR, and sum over all individuals.
By comparing the observed number of pregnancy losses with the expected, one can assess whether the observed pregnancy losses are greater or lower than expected, with a significantly greater number indicating a potential safety signal.
They present absolute differences of observed-expected in the main paper and present relative observed/expected ratios in the supplement (Table S6).
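To make the observed-vs-expected mechanics concrete, here is a minimal sketch in Python, assuming hypothetical DataFrames and column names (prepandemic, cohort, 'loss', 'age', etc.); the authors' actual model, covariates, and uncertainty quantification are more involved.

```python
# Minimal sketch of the observed-vs-expected approach (not the authors' exact model).
# Assumes hypothetical DataFrames `prepandemic` and `cohort` with illustrative columns:
# 'loss' (0/1), 'age', 'n_comorbid_cat', 'ses', 'lmp_month', 'gest_week_at_entry'.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "n_comorbid_cat", "ses", "lmp_month", "gest_week_at_entry"]

def fit_baseline(prepandemic: pd.DataFrame) -> LogisticRegression:
    """Fit the historical (pre-pandemic) risk model for pregnancy loss."""
    X = pd.get_dummies(prepandemic[FEATURES], drop_first=True)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, prepandemic["loss"])
    model.feature_names_ = X.columns  # remember the dummy layout for scoring later
    return model

def observed_vs_expected(model: LogisticRegression, cohort: pd.DataFrame) -> dict:
    """Sum predicted risks over the cohort to get the 'expected' count,
    then compare with the observed number of losses."""
    X = pd.get_dummies(cohort[FEATURES], drop_first=True).reindex(
        columns=model.feature_names_, fill_value=0)
    expected = model.predict_proba(X)[:, 1].sum()
    observed = cohort["loss"].sum()
    return {"observed": int(observed),
            "expected": float(expected),
            "obs_minus_exp": float(observed - expected),
            "obs_over_exp": float(observed / expected)}
```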
Here are their primary results.
They found:
1. Women given 1st dose of COVID vaccine in the 2nd half of the 1st trimester (wk8-13) had greater-than-expected pregnancy loss
2. Women given 3rd dose of COVID vaccine in the 2nd half of the 1st trimester (wk8-13) had greater-than-expected pregnancy loss
3. Women given 1st dose of COVID vaccine in 2nd trimester (wk14-27) had lower-than-expected pregnancy loss
4. Women given 3rd dose of COVID vaccine in 2nd trimester (wk14-27) had lower-than-expected pregnancy loss.
They considered the wk8-13 results a potential safety signal, but dismissed the wk14-27 results as resulting from healthy vaccinee effect (HVE), a specific type of residual (unmeasured) confounding.
They also computed the observed and expected pregnancy losses by gestation week of vaccination between weeks 8 and 27.
Note that for the 1st dose COVID-vaccinated cohort, the observed is generally greater than expected for most weeks between wk8 and wk13, and at the lower end of expected or lower than expected for most weeks between wk14 and wk27.
Since no pre-specified protocol or analysis plan was shared, it is not clear whether they prespecified the wk8-13 and wk14-27 cohorts or whether they grouped the weeks with similar effects into the same cohorts (which would raise some concerns about hidden multiplicities).
For the 3rd dose COVID-vaccinated cohort, they observed higher-than-expected pregnancy losses for weeks 9, 12, and 13, and at the lower end of expected or lower than expected for weeks 19, 20, and 22-24.
To assess whether the greater-than-expected pregnancy losses in the wk8-13 vaccinated groups were driven by earlier or later pregnancy losses, they separately computed the observed and expected pregnancy losses 1. After week 14 2. After week 20 3. After week 25.
They found that both early and later-pregnancy losses were evident in the analysis.
The historical baseline model provides a confounder-adjusted control against which to compare the observed pregnancy losses for the vaccinated cohorts.
However, note that the “expected” baseline computed from pre-pandemic historical controls does not adjust for any pandemic-level confounders.
Thus, any pandemic factors potentially increasing risk of pregnancy loss, including COVID-19 infections, lockdowns, reduced activity levels, reduced healthcare access or utilization, or high stress, could also result in a greater-than-expected pregnancy loss, but are not accounted for in the modeling.
Additionally, the historical baseline risk model only incorporated limited medical variables on a coarse scale (e.g. 0, 1-3, 4+ comorbidities without distinguishing minor from major comorbidities), which further highlights another potential source of residual confounding.
As a result, the use of the historical control does not eliminate the risk of residual confounding, whether from pandemic-related or personal medical factors.
Residual confounding between vaccinated and unvaccinated groups is often referred to as “HVE”, but of course that is not the only source of residual confounding.
Thus, one should not automatically dismiss any lower-than-expected results in a vaccinated cohort as an artifact of HVE, while failing to acknowledge that any higher-than-expected result in a vaccinated cohort might also be driven by residual confounding.
For this reason, it is important to consider other control analyses to assess potential residual confounding, and carefully consider all of them when evaluating results.
One control group they included was flu-vaccinated women during the pandemic.
They constructed cohorts of women given influenza vaccine during the pandemic 1. Wk8-13 (2nd half of 1st trimester) 2. Wk14-27 (2nd trimester).
They computed the same observed-expected analysis as for the primary cohorts, and presented the results.
They found lower-than-expected pregnancy loss for both the Wk8-13 and Wk14-27 cohorts, so they did not see the higher-than-expected results that were obtained for the Wk8-13 COVID-19 cohorts.
They repeated the analysis for flu-vaccinated at gestational age wk8-13 and wk14-27 in a pre-pandemic cohort (March 1, 2018-February 29, 2019), and similarly found lower-than-expected pregnancy loss in the flu-vaccinated cohorts.
They considered these results artifacts of HVE.
Since these women were pregnant during the pandemic, these cohorts share some of the same pandemic-level factors as the COVID-vaccinated cohorts, and thus provide some evidence that the COVID-vaccinated results were not simply functions of broad pandemic-level confounding.
However, given the different timing of the covid and flu vaccinations during the pandemic, and the potentially different medical profiles of these cohorts, it is still possible that there are unmeasured confounders between the covid and flu vaccinated cohorts.
Consideration of other control groups to assess pandemic-related calendar-time confounders, including unvaccinated controls during the pandemic, could have been undertaken and would have provided a more complete picture of this concern.
1. Unmeasured pandemic-related calendar time confounders: Their use of pre-pandemic historical controls means that, in their observed-expected analysis, any pandemic-related factors potentially affecting pregnancy loss are confounded with vaccination status.
It is likely that these potential confounders, which could include lockdowns, inactivity, reduced health care access and utilization, stress, and COVID exposures and infections, vary by calendar time in the pandemic, and so could manifest as a calendar-time bias.
Their inclusion of flu-vaccinated controls during the pandemic partially addresses this concern, but given that the flu and COVID vaccinations were given at different times of the pandemic, there could still be substantial residual confounding.
Given that the primary cohorts driving the authors' conclusions, women COVID-vaccinated during gestational weeks 8-13, are so small, there is significant potential for unmeasured confounding to be a major factor.
In particular, the cohort given the 1st COVID dose in wk8-13 included only 1837/94,351 = 1.9% of pregnancies and 240/13,124 = 1.8% of pregnancy losses, and ~90% of these pregnancies had LMP between October 2020 and January 2021 (Figure S5), a specific time period with particular factors (lockdowns, Delta variant COVID exposure late in pregnancy).
Further investigation of these pandemic-related calendar-time effects is warranted, in the 1st dose COVID-vaccinated, 3rd dose COVID-vaccinated, and flu-vaccinated cohorts (which had substantially different LMP calendar-time distributions).
2. No analysis of pregnancy loss for unvaccinated women during the pandemic: An assessment of whether pregnant women remaining unvaccinated throughout their pregnancy during the pandemic have higher- or lower-than-expected pregnancy loss would provide valuable information about these potential pandemic-related residual confounders.
The authors explain why they did not use a formal target trial emulation design, matching pregnant women vaccinated in wk8-13 with women not vaccinated at that time and tracking the matched pairs over time for pregnancy loss events. While more rigorous for estimating potential causal effects of vaccination, they explained this would not be feasible given the required sample size, especially for their small, select wk8-13 vaccinated cohort.
However, they could easily repeat their observed-expected analysis on various unvaccinated cohorts to assess whether they have higher- or lower-than-expected pregnancy loss during the pandemic.
Repeating these analyses separately for cohorts of unvaccinated pregnant women, with LMPs matching their various vaccination cohorts determined by vaccination time (8-13wk, 14-27wk, pre-pregnancy) and dose (COVID 1st dose, COVID 3rd dose, flu) would provide a valuable control group with the same pandemic-related calendar time confounding as the various vaccination cohorts.
If the various unvaccinated cohorts had pregnancy loss similar to expected, this would provide assurance that pandemic-related calendar-time confounding is not a major issue, whereas if they deviated from expected, this would provide important context for interpreting the primary analysis results.
3. No assessment of whether COVID-19 infections later in pregnancy were factors in post-vaccination pregnancy losses: The authors performed a useful analysis showing women with COVID-19 infections in wk8-13 or wk14-27, either vaccinated or unvaccinated, did not have higher-than-expected pregnancy loss.
However, they did not investigate the effect of COVID-19 infections later in pregnancy, nor did they summarize whether pregnancy losses in their primary analysis cohorts occurred more often in women who eventually became infected.
They could easily summarize the number of COVID-19 infections in the subsets of their vaccination cohorts with and without pregnancy losses (as sketched below), and if no association is evident this would provide more convincing evidence that COVID-19 infection is not an unmeasured confounder affecting the results.
Note that the cohort of women receiving the 1st dose at gestational age 8-13wk, the vast majority of whom had LMP 10/20-1/21, would have experienced the Delta COVID-19 wave hitting Israel between 4/21 and 7/21 late in their pregnancies.
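A minimal sketch of the kind of infection-vs-loss summary suggested above, assuming a hypothetical cohort table with 0/1 columns 'covid_infection_after_vax' and 'loss'; this is not an analysis from the paper.

```python
# Minimal sketch of the suggested infection-vs-loss summary (hypothetical column names).
import pandas as pd
from scipy.stats import fisher_exact

def infection_loss_table(cohort: pd.DataFrame):
    """Cross-tabulate pregnancy loss by later COVID-19 infection within a
    vaccination cohort, and test the 2x2 table for association."""
    table = pd.crosstab(cohort["covid_infection_after_vax"], cohort["loss"])
    res = fisher_exact(table.values)  # odds ratio and p-value for the 2x2 table
    return table, res
```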
4. Incomplete summary of results of vaccination before pregnancy: Their paper provided a cursory look at observed and expected pregnancy loss rates for women vaccinated for COVID-19 or flu before pregnancy as a function of gestational week.
This suggested slightly lower-than-expected pregnancy loss rates in these subgroups, but the huge y-axis scale makes it difficult to see the magnitude of the difference or its potential significance.
They did not present tables aggregating observed and expected pregnancy loss after weeks 8, 14, 20, or 25, as was done for the vaccination cohorts.
They could easily provide such tables, which would add important contextual information to the paper.
5. Assess wk1-7: They did not assess whether pregnancy loss in those vaccinated in the first half of the first trimester (before wk8) was lower or greater than expected, which would provide a more complete assessment of potential risks of early-pregnancy COVID vaccination.
They mention that pregnancies in wk1-7 have more inconsistent identification and follow-up times, but this is true for all cohorts: vaccinated and unvaccinated cohorts during the pandemic, as well as the pre-pandemic cohorts used to compute the expected rates.
And although pregnancy losses from miscarriage in the first half of the first trimester cannot be accurately measured, there is no reason they could not perform valid observed-expected analyses for the various vaccination cohorts at the time points used in their primary and secondary analyses (after 8, 14, 20, or 25 weeks).
This is important to investigate, given that the title of the paper says “during early pregnancy”, so the results will be interpreted by many as finding greater-than-expected risk for women vaccinated anytime early in pregnancy, including <8wk.
6. Summarize pregnancy loss types: Their results are all based on the aggregated “pregnancy loss” outcome comprised of spontaneous abortions, induced abortions, and stillbirths.
While there is rationale for combining these events for formal analysis, they could easily include a table listing the number of pregnancy losses at the various time points (after week 8, 14, 20, 25) for the various cohorts split out by outcome, so one could see how many of the pregnancy losses were stillbirth, induced abortion, or spontaneous abortion events.
Conclusion:
This paper provides a model for how to perform rigorous active reporting vaccine safety studies to identify potential safety signals from EHR data: the study is exceptionally well done and the paper clearly written.
Their primary conclusion is that pregnant women COVID-vaccinated in the second half of the first trimester (wk8-13) had observed pregnancy loss rates higher than the expected rates computed from a pre-pandemic regression model based on factors available in the EHR, which they considered a potential safety signal for these vaccines.
They found no such increased risk in flu vaccinated during or prior to the pandemic, and found the greater-than-baseline risk in this cohort included both early and late pregnancy loss, which gave them greater confidence in their results.
Their analyses also found significantly lower-than-expected risk of pregnancy loss for women COVID-vaccinated in the second trimester (wk14-27) and slightly lower-than-expected risk in women COVID-vaccinated before pregnancy, but the authors dismissed these results as artifacts of residual confounding (HVE).
It is not valid to dismiss any results showing lower-than-expected pregnancy loss in vaccinated subgroups as driven by residual confounding (HVE), without acknowledging that the higher-than-expected pregnancy loss in one small vaccinated subgroup (covid vaccinated wk8-13) could similarly be driven by residual confounding.
The strength of their conclusions is limited by their failure to consider other types of controls that would more carefully investigate residual confounding from pandemic-related and other factors, including unvaccinated controls during the pandemic, as well as their failure to provide more details on the pregnancy loss outcomes for women vaccinated before pregnancy and those vaccinated in the first half of the first trimester (before week 8).
Inclusion of these results would provide a more complete and transparent picture of potential COVID vaccine effects on pregnancy loss.
Btw, I was looking at the wrong plot when I discussed the 3rd dose wk8-13 cohort.
It appears to be almost wholly driven by the result at week 11, with the other weeks between 8 and 13 having similar-to-baseline pregnancy loss risk.
So the evidence for safety signal in early pregnancy for 3rd dose is weak.
Thus, their key results depend on the single 1st dose wk8-13 cohort, for which >90% had LMP between October 2020 and January 2021, just before the vaccine rollout began.
Cleveland Clinic researchers have posted a preprint on medRxiv assessing the flu infection rate among employees who received the flu vaccine relative to those who did not.
Using a Cox regression model adjusting for age, sex, clinical nursing job, and location (but not propensity to test for flu), they found the vaccinated had a hazard ratio of 1.27, and concluded the vaccinated had a 27% higher risk of infection, i.e. a negative vaccine effectiveness of -27%.
As in their previous studies on COVID-19 vaccines (cited by many as evidence of negative vaccine effectiveness), this study suffers the same fatal flaw of testing bias, with vaccinated significantly more likely to test for flu than unvaccinated, 27% more likely in fact.
They try to dismiss this concern by claiming the test positivity is equivalent between vaccinated and unvaccinated based on a strange and poorly justified linear regression analysis.
But when looking at the data in their figures 1-2 and summarizing on an appropriate scale, it is clear that test positivity is substantially lower in the vaccinated than the unvaccinated (20% lower), so the testing bias cannot be dismissed; this suggests the increased testing among the vaccinated is a bias that can explain their HR=1.27 estimate, not an indication of higher infection rates.
Thus, their conclusion of -27% negative vaccine effectiveness is not supported by their data.
They computed the daily testing rate in the vaccinated and unvaccinated and presented the ratio of vaccinated/unvaccinated in Table 1a, from which they acknowledge the vaccinated test at a significantly higher rate than the unvaccinated.
It is to their credit they acknowledge this.
The increased testing could represent a bias that compromises their conclusion of vaccine effectiveness, or it could be an indication of higher infections, since a 27% higher testing rate could simply be an indicator of 27% higher infection rates.
The test positivity (% of flu tests positive) can be used to assess this possibility.
If the higher confirmed infection rate were simply a function of testing bias, then the testing positivity would be lower in vaccinated than unvaccinated, indicating overtesting, and thus a ratio of testing positivity in vaccinated/unvaccinated would be less than 1.
If the higher testing rate were simply from higher infection rates and not testing bias, we'd expect the test positivity to be equivalent between vaccinated and unvaccinated, with a ratio near 1.
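A small illustrative calculation of this decomposition, using round numbers of roughly the magnitudes discussed above (not the paper's exact figures):

```python
# Illustrative decomposition: confirmed-infection rate = testing rate x test positivity.
# Numbers are illustrative, chosen to roughly match the ratios discussed above.
testing_ratio = 1.27      # vaccinated test ~27% more often than unvaccinated
positivity_ratio = 0.80   # positivity ~20% lower among the vaccinated

confirmed_infection_ratio = testing_ratio * positivity_ratio
print(f"Implied confirmed-infection rate ratio: {confirmed_infection_ratio:.2f}")
# ~1.02: once the lower positivity is accounted for, the excess in confirmed
# infections largely reflects extra testing rather than extra infections.
```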
David Geier has apparently been commissioned by HHS secretary RFK, Jr. to study potential links between vaccination and autism.
He has past papers using Vaccine Safety Datalink (VSD) data to perform case-control studies assessing potential links between exposure to Thimerosal-containing hepatitis B vaccines as infants and later diagnosis of atypical autism, or of obesity.
He found highly significant associations in both studies -- linked below -- and concluded this proves that Thimerosal in hepatitis B vaccines significantly increases risk of atypical autism, and of obesity.
However, MAJOR PROBLEM:
In both studies, his cases and controls were drawn from different time periods.
In the autism study, the dates were:
Autism Cases: 1991-1998
Controls: 1991-1992.
In the obesity study, the dates were:
Obesity Cases: 1991-2000
Controls: 1991-1994.
Besides the arbitrary nature of these date choices raising suspicion, the use of different dates for cases and controls raises major concerns of time-confounding, especially if hepatitis B vaccination rates differed substantially over the 1991-2000 time period.
See link below to the Hepatitis-B vaccination rates over time -- indeed the vaccination rate increased from <10% in 1991 to ~30% in 1994 to 85-90% in 1998-2000.
Thus, the case-control status is almost COMPLETELY CONFOUNDED with vaccination status, since very few would be vaccinated in 1991-1994 and vast majority vaccinated by 1996-2000.
Thus, the vaccinated would be SEVERELY OVERREPRESENTED in the cases and SEVERELY UNDERREPRESENTED in the controls, producing a spurious association between the exposure and case/control status no matter what outcome is being studied.
In other words, the methodology used in the studies was completely invalid, and the studies fatally flawed.
I hope they do better in the upcoming studies -- ideally they should include experienced biostatisticians and epidemiologists to help use a valid study design and analysis, and ensure that their interpretation of the study is supported by the empirical evidence in the data.
Let's do an exercise to show how their study flaw can generate false associations and invalid conclusions.
Here is a plot of live births and cell phone usage in Sri Lanka from 2001-2009.
Note that cell phone usage increased from <10% in 2001 to 80% by 2009.
We will ask, "Is a woman's cell phone use associated with her likelihood of having a live birth in Sri Lanka?"
We will make the same case/control design mistake made by Geier to introduce the exact same type of time bias and show how it leads to completely spurious results and invalid conclusions.
Consider the following case/control study, defining:
case = woman had a live birth that year
control = woman did not have a live birth that year
As with Geier's study, let's consider cases and controls from different time frames,
cases from 2001-2009 and
controls from 2001-2004
In this illustration, the exposure is "cell phone usage" and we want to assess whether cell phone usage appears related to the likelihood of a woman having a live birth in a given year.
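Here is a small simulation of this flawed design, with illustrative numbers (exposure prevalence rising from ~5% to 80%, and a constant 10% chance of a live birth that is unrelated to phone use); it shows how the mismatched case and control windows manufacture a spurious odds ratio.

```python
# Simulation of the time-window flaw: the exposure rises over time but has NO effect on
# the outcome, yet sampling cases and controls from different calendar windows
# manufactures an association. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2001, 2010)
phone_use = np.linspace(0.05, 0.80, len(years))  # exposure prevalence rises over time
p_birth = 0.10                                   # constant; independent of phone use
n = 100_000                                      # women sampled per year

year_col, phone_col, birth_col = [], [], []
for yr, p in zip(years, phone_use):
    year_col.append(np.full(n, yr))
    phone_col.append(rng.random(n) < p)
    birth_col.append(rng.random(n) < p_birth)
year = np.concatenate(year_col)
phone = np.concatenate(phone_col)
birth = np.concatenate(birth_col)

# Flawed design (mirroring the critique): cases from 2001-2009, controls from 2001-2004.
cases = birth
controls = (~birth) & (year <= 2004)

a = (cases & phone).sum();    b = (cases & ~phone).sum()     # exposed / unexposed cases
c = (controls & phone).sum(); d = (controls & ~phone).sum()  # exposed / unexposed controls
odds_ratio = (a * d) / (b * c)
print(f"Spurious odds ratio: {odds_ratio:.2f}")  # ~3, well above 1, despite no true effect
```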
There is talk from HHS that fluoride in water may lead to reduced cognitive levels in children.
There is a new published paper on a Bangladeshi study assessing links between urinary fluoride levels and cognitive performance of children at 5yrs and 10yrs that is being touted as evidence of this link.
The authors conclude there are links between prenatal maternal urinary fluoride levels and children's cognitive levels at 5yrs and 10yrs, and between children's urinary fluoride levels at 10yrs and cognitive levels at 10yrs, and also report a negative association between urinary fluoride levels at 5yrs and cognitive levels at 5yrs and 10yrs that is not statistically significant.
However, when looking at the data and analyses in the paper in detail, the evidence for these links is at best very weak, and the paper really shows very little evidence.
In this thread, I will explain some of the data and statistical details that lead me to that conclusion.
The primary question of this study is whether fluoride exposure from water is linked with lower cognition, which in this study involves analyzing potential associations between urinary fluoride levels at 5yrs and cognitive scores at 5yrs and 10yrs, urinary fluoride levels at 10yrs and cognitive scores at 10yrs, and prenatal urinary fluoride levels at 8wks gestation and cognitive scores at 5yrs and 10yrs.
They propose that, given fluoride's short half-life, urinary fluoride levels should be a reasonable surrogate for fluoride levels in the water, and they measured water fluoride levels at the 10yr time point to show evidence of some correlation with the urinary levels.
While the authors confidently conclude evidence for a link, when one looks at the actual data there is little to no evidence of any link, and they have to take a series of unusual and potentially questionable statistical steps to arrive at these conclusions.
I will describe what I mean.
Given that the primary hypothesis is whether fluoride in the water leads to decreased cognitive abilities in children, I will start with the analysis of children's urinary fluoride levels and their cognitive test results at 5yrs and 10yrs.
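For orientation, here is a minimal sketch of the kind of adjusted association analysis being discussed, with hypothetical column and covariate names; the published models use different covariates, transformations, and cognitive measures.

```python
# Minimal sketch of the association analysis discussed above (hypothetical column names;
# the published models use different covariates and transformations).
import pandas as pd
import statsmodels.formula.api as smf

def fluoride_cognition_fit(df: pd.DataFrame):
    """Linear regression of cognitive score on urinary fluoride, adjusting for a few
    illustrative covariates; the fluoride coefficient and its CI are of interest."""
    model = smf.ols(
        "cognitive_score ~ urinary_fluoride + child_age + maternal_education + hh_income",
        data=df,
    ).fit()
    return model.params["urinary_fluoride"], model.conf_int().loc["urinary_fluoride"]
```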
Many people misunderstand "indirect costs" (i.e. F&A) from NIH grants, thinking these are superfluous or unnecessary costs or a "gift" to universities.
Indirect costs are actually "Facility and Administrative" (F&A) expenses that are not allowed to be included in the direct cost budget of an NIH grant, but are essential for research.
Below is a video that explains how the system works, what F&A expenses cover, how they are determined, why they are essential, and how the proposed severe reduction of 65-80% of these expenses endangers the entire USA research enterprise, which leads the world and provides great societal benefit technologically, medically, and economically.
The indirect costs, or Facilities and Administrative (F&A) expenses, are not a "tip" or "fluff" for the universities, but cover essential research costs that are not allowed to be included in the direct cost budget for an NIH grant, including:
* Laboratory and other research-specific facilities
* Utilities including electricity, gas, water, HVAC
* Shared research instruments
* Maintenance and security for research facilities
* IT and cybersecurity infrastructure
* Administrative staff needed to meet federal requirements including safety, compliance, and research operations.
Research could not be done without these expenses, and without F&A it is not clear how they can be covered.
Typically, the indirect costs from grants only partially cover these expenses, with universities and research centers having to cover 30-50% of these required expenses through other means.
As a result, universities subsidize federal research, and typically lose a substantial amount of $$ for their research enterprise.
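A small illustrative calculation of how a negotiated F&A rate translates into grant dollars; the rate and amounts below are hypothetical, and F&A is applied to modified total direct costs (MTDC), which exclude items such as equipment and tuition.

```python
# Illustrative only: how a negotiated F&A rate translates into grant dollars.
# Rates and amounts are hypothetical.
direct_costs = 500_000          # annual direct costs on a hypothetical grant
excluded_from_mtdc = 80_000     # e.g., equipment and tuition, excluded from the F&A base
negotiated_fa_rate = 0.55       # hypothetical negotiated F&A rate

mtdc = direct_costs - excluded_from_mtdc
indirect_costs = negotiated_fa_rate * mtdc
total_award = direct_costs + indirect_costs
print(f"Indirect (F&A) costs: ${indirect_costs:,.0f} on a ${total_award:,.0f} total award")
# If the university's actual facilities/administration costs for this research exceed
# the recovered F&A, the institution covers the shortfall from other sources.
```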
The Journal of Infection and Public Health just published a paper summarizing the case fatality rate (CFR) and infection fatality rate (IFR) for all of Austria from February 2020 through May 2023.
Critically, they also split out results over time, by variant, age group, vaccination and previous infection status, sex, and nursing home residency status.
Key results were:
1. CFR was much higher in older age groups and nursing home residents.
2. CFR decreased greatly with later variants, coincident with increases in population immunity levels from vaccination and previous infection.
3. Vaccinated individuals had substantially lower CFR for each variant/age group.
4. Those surviving previous infection had substantially lower CFR for each variant/age group.
5. The immune protection evident in reduced CFR was similar in the vaccinated and those surviving previous infection for each variant/age group.
In this thread, I will summarize and try to make sense of its key results.
There are other papers looking at CFR/IFR, but none as expansive or rigorous as this one.
Austria had very restrictive early mitigation measures and mass vaccination like most higher income countries, which makes its results relevant to many richer Western countries.
Also, Austria had the highest SARS-CoV-2 testing rate among non-island nations in the world, making it ideal for this study since they had fewer undocumented infections than other countries.
Most results presented in the paper are of the case fatality rate (CFR), computed as the number of confirmed COVID-19 deaths divided by the number of documented (confirmed) SARS-CoV-2 infections.
This study also estimated the infection fatality rate (IFR), imputing the estimated number of undocumented infections using a model based on the test positivity rate (TPR = % of tests that are positive), with the reasoning that time periods with high TPR should have higher numbers of undocumented infections.
CFR was significantly higher than IFR during 2020 when testing was more sparse, but starting 2021 less so given Austria’s extraordinarily high testing rate.
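Schematically, the CFR/IFR distinction looks like this; the ascertainment adjustment below is a toy stand-in for the paper's TPR-based imputation model, for illustration only.

```python
# Schematic of the CFR vs IFR distinction. The ascertainment scaling is a toy stand-in
# for the paper's TPR-based imputation model, for illustration only.
def cfr(deaths: int, confirmed_cases: int) -> float:
    """Case fatality rate: deaths among confirmed cases / confirmed cases."""
    return deaths / confirmed_cases

def ifr_with_imputed_infections(deaths: int, confirmed_cases: int,
                                ascertainment: float) -> float:
    """Infection fatality rate using an estimated ascertainment fraction (share of all
    infections that were documented); high test positivity suggests lower ascertainment."""
    total_infections = confirmed_cases / ascertainment
    return deaths / total_infections

# Illustrative numbers: 1,000 deaths, 100,000 confirmed cases, 60% of infections documented.
print(cfr(1_000, 100_000))                                 # 0.010 (1.0% CFR)
print(ifr_with_imputed_infections(1_000, 100_000, 0.60))   # 0.006 (0.6% IFR)
```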
My colleagues and I just published a paper in eClinicalMedicine evaluating effects of vaccination on long COVID risks in children and adolescents during the Delta and early Omicron periods.
These data were from the RECOVER network including 21 pediatric hospital networks from all over the USA, including 112,590 adolescents during the Delta period, and 84,735 adolescents and 188,894 children during the early Omicron period.
Long COVID-19 (post-acute sequelae of SARS-CoV-2, PASC, or multisystem inflammatory syndrome, MIS) was defined using a symptom-based computable phenotype definition based on five body systems.
Our analyses utilized propensity score weighting to adjust for confounding from age, demographics, medical co-morbidities as well as healthcare utilization including past COVID-19 testing practices, and we used proximal analyses with negative control exposures and outcomes to investigate and adjust for potential residual bias from unmeasured confounders.
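A minimal sketch of inverse-probability-of-treatment (propensity score) weighting of the general kind described, with hypothetical column names; the published analysis includes many more covariates, trimming rules, balance diagnostics, and the negative-control adjustments.

```python
# Minimal sketch of inverse-probability-of-treatment weighting (IPTW); hypothetical
# column names, and far simpler than the published analysis.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_weights(df: pd.DataFrame, covariates: list[str]) -> pd.Series:
    """Estimate the propensity of vaccination from covariates and return
    stabilized inverse-probability weights."""
    X = pd.get_dummies(df[covariates], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df["vaccinated"]).predict_proba(X)[:, 1]
    p_treated = df["vaccinated"].mean()
    weights = (df["vaccinated"] * p_treated / ps
               + (1 - df["vaccinated"]) * (1 - p_treated) / (1 - ps))
    return pd.Series(weights, index=df.index, name="iptw")

# A weighted outcome model (e.g. weighted regression of the PASC phenotype on
# vaccination) would then estimate the confounder-adjusted effect.
```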
In adolescents 12-20yrs, we found vaccination resulted in a 95.4% reduced risk of long COVID-19 during the Delta period and a 75.1% reduced risk during the Omicron period.
In children 5-11yrs, we found vaccination resulted in a 60.2% reduced risk of long COVID-19 during the Omicron period.
To evaluate how much of this vaccine protection was from reduced risk of infection and how much was reduced risk of long COVID-19 independent of any effect in reducing infection, we performed a causal mediation analysis to split the total vaccine effect into indirect effects, mediated through reducing risk of infection, and direct effects, independent of any reduced risk of infection.
Again, propensity score weighting was used to carefully adjust for potential confounders.
We found that the protective effect of vaccines on long COVID-19 was almost wholly mediated through its reduced risk of infection.
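A toy numerical illustration of what "almost wholly mediated through reduced infection risk" means, using made-up probabilities (not our estimates); the paper uses a formal causal mediation framework with weighting.

```python
# Toy numbers to convey the direct/indirect decomposition (not the paper's estimates).
# Long COVID can only follow infection, so risk = P(infection) * P(long COVID | infection).
p_inf_unvax, p_inf_vax = 0.30, 0.06   # vaccination lowers infection risk (indirect path)
p_pasc_given_inf_unvax = 0.05         # long COVID risk once infected, unvaccinated
p_pasc_given_inf_vax = 0.05           # set equal => no direct effect in this toy example

risk_unvax = p_inf_unvax * p_pasc_given_inf_unvax
risk_vax = p_inf_vax * p_pasc_given_inf_vax
total_ve = 1 - risk_vax / risk_unvax                              # 80% here

# "Direct" component: hold infection risk fixed, vary only the post-infection risk.
direct_ve = 1 - p_pasc_given_inf_vax / p_pasc_given_inf_unvax     # 0% => wholly mediated
print(f"Total VE vs long COVID: {total_ve:.0%}; direct (infection-independent) VE: {direct_ve:.0%}")
```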
Various sensitivity analyses were done and included in the online supplement along with a detailed description and explanation of all methods and modeling decisions.