Paper measured immune markers (antibodies, T-cells, B-cells) from 61 individuals vaccinated with Pfizer/Moderna at 6 time points, from pre-vax to 6m post-vax.
16 had previously been infected with SARS-CoV-2 and 45 were SARS-CoV-2 naïve, and the analysis was stratified by prior infection status.
The key results were:
1. Neutralizing antibodies (NAbs) decreased over time
2. Memory B cells increased over time and did not wane
3. Helper T cell (T4) and killer T cell (T8) dynamics were characterized
Antibody levels (against spike/receptor-binding domain) spiked after vaccination, then declined ~10x over 6 months, but remained above the baseline levels of the previously infected.
This reduction in circulating antibodies might explain the waning efficacy against asymptomatic and mildly symptomatic disease.
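As a back-of-envelope check of my own (not a calculation from the paper): a ~10x decline over ~6 months, if the decay is roughly exponential, implies an antibody half-life of about 54 days.

```python
import math

# If antibody levels fall ~10-fold over ~180 days and the decay is
# roughly exponential, the implied half-life is
#   t_half = 180 * ln(2) / ln(10)
days = 180
fold_decline = 10
half_life = days * math.log(2) / math.log(fold_decline)
print(f"Implied antibody half-life: {half_life:.0f} days")  # ~54 days
```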
Here are the levels for all individuals. We see heterogeneity, with some having much higher levels than others.
When we say "immunity is waning" we must remember it is waning for SOME, not all.
But antibodies always wane. Long term protection comes from memory B & T cells.
Memory B-cells rapidly generate new Abs when later exposed to the virus.
Memory B cells continued to increase, rather than decline, over the 6 months after vaccination (blue), and reached similar levels in the previously infected at 6 months (red).
Here are the individual levels. Again note the heterogeneity but most maintain high B-cell levels.
People worry about immune escape in Alpha (B.1.1.7), Beta (B.1.351), and Delta (B.1.617.2+)
They measured B-cell binding as % of wild type (D614G).
B cells bound all variants well, with the least immune escape for Alpha, the most for Beta, and intermediate escape for Delta.
Looking at 6m values for individuals, we see most had strong binding (~90% for Alpha, 75% Delta, 60% Beta) but a subset clearly had less protection. Heterogeneity.
Also note that at 6m, vaccinated and previously infected (vaccinated or not) had similar levels of B-cell binding
They have other interesting results on T cells and deeper aspects of the immune response – this paper is well worth reading.
In summary, mRNA vaccines induce a strong, multi-modal immune response, including high NAbs as well as memory B and T cells.
By 6 months, antibodies and T cells had declined the most, which might help explain increased breakthrough infections, while memory B cells remained strong.
It is possible that the “waning immunity” is not a total loss of immune protection, but rather a delay in the immune response: the reduction in circulating NAbs means new ones must be generated by memory B cells upon exposure (and this may only affect a subset of people)
I appreciate that @joshg99 has unblocked me, allowing me to respond to his thoughtful comments on my previous post discussing his recent preprint. The paper reports a potential safety signal for pregnancy loss among women vaccinated between 8–13 weeks of gestation.
Before responding to specific points, I want to better summarize and explain the points I made in my analysis, and then will refer back to these in my responses.
First, I want to emphasize that there is much to appreciate about this paper:
1. It uses a large, high-quality healthcare dataset that is far superior to passive surveillance systems like VAERS, which lack control groups and suffer from heterogeneous reporting biases.
2. The modeling is careful and the writing is clear.
3. The authors responsibly acknowledge that their findings are not causal and require further validation.
That said, the paper’s central conclusion, a potential safety signal, is driven almost entirely by a small, highly selected subgroup: women receiving their first COVID vaccine dose between gestational weeks 8 and 13. There is no indication this window was pre-specified in a protocol or analysis plan.
The authors also suggest a safety signal for third doses in the 8–13 week window. However, their own observed-minus-expected plot shows this appears to be concentrated in a single gestational week (likely week 10, see below), raising questions about the robustness of that result.
To fully evaluate the strength and validity of the conclusions, several additional analyses are essential.
These fall into four main areas, which I will next summarize.
1. Calendar Time Confounding
The pivotal 1st dose 8–13 wk cohort includes just 1.9% of pregnancies and 1.8% of pregnancy losses, and >90% of this group had last menstrual periods (LMPs) between Oct 2020 and Jan 2021: i.e., immediately after vaccine rollout.
To assess potential confounding by calendar time:
1. The LMP distribution is only shown for the pivotal COVID 1st dose 8–13 wk group. It should also be shown for all of the other vaccination cohorts, especially the flu 8–13 wk group used as a control.
2. The authors should present observed-minus-expected pregnancy loss results for unvaccinated women, both overall (for completeness and transparency) and for subcohorts with LMP distributions matched to each vaccinated group (e.g., 1st dose 8–13 wk, 3rd dose 8–13 wk, 14–27 wk groups, flu 8–13 wk, etc.)
3. It would also be informative to present observed-minus-expected pregnancy loss for unvaccinated women as a function of LMP date (monthly?) throughout the pandemic, which would be a nice descriptive analysis of potential calendar time confounding.
Differences in timing could influence results through factors like lockdowns, pandemic wave exposures, and healthcare access and utilization.
Showing raw and observed-minus-expected outcomes for matched unvaccinated cohorts would help clarify whether calendar time is a source of bias. Differences would warrant caution in interpretation; consistency would increase confidence.
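The matched-cohort comparison suggested above can be sketched as follows. This is my own illustration, not the authors' code; all field names (`lmp_month`, `loss`, `expected_loss`) are hypothetical, and the expected losses are assumed to come from their pre-pandemic regression model.

```python
import random
from collections import Counter

def matched_obs_minus_exp(vaccinated, unvaccinated, seed=0):
    """Sample an unvaccinated comparison cohort whose LMP-month distribution
    matches a vaccinated cohort, then return observed-minus-expected losses."""
    rng = random.Random(seed)
    # Target LMP-month distribution taken from the vaccinated cohort
    target = Counter(p["lmp_month"] for p in vaccinated)
    by_month = {}
    for p in unvaccinated:
        by_month.setdefault(p["lmp_month"], []).append(p)
    matched = []
    for month, n in target.items():
        pool = by_month.get(month, [])
        if pool:
            # Resample with replacement to hit the target month counts
            matched.extend(rng.choices(pool, k=n))
    observed = sum(p["loss"] for p in matched)
    expected = sum(p["expected_loss"] for p in matched)  # pre-pandemic model
    return observed - expected
```

A result near zero for the matched unvaccinated group would argue against calendar-time confounding; a clearly nonzero result would warrant caution.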
2. Impact of COVID Infections on Pregnancy Loss
The authors presented observed-minus-expected pregnancy loss results for women with COVID infections during weeks 8-13 or 14-27 of pregnancy, but not for women infected during late pregnancy (week 28+).
This matters because the authors note that many of the pregnancy losses in the pivotal 8-13wk 1st dose group occurred late, and that women vaccinated early in pregnancy soon after rollout would specifically have been exposed to the Delta wave during late pregnancy.
Recommended additions:
1. Present pregnancy loss rates after 28 weeks for women with COVID infections during pregnancy.
2. Include 2×2 tables (pregnancy loss: yes/no × COVID infection during pregnancy: yes/no) for each cohort, particularly the 8–13 wk COVID-vaccinated groups.
If there is no association, it would suggest Delta infections were not a primary factor explaining excess late losses.
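The suggested 2×2 check is straightforward; here is a minimal sketch with illustrative counts (not the paper's data).

```python
# 2x2 table check: pregnancy loss vs. COVID infection during pregnancy.
def odds_ratio(table):
    # table = [[a, b], [c, d]]; rows: infected yes/no, cols: loss yes/no
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Hypothetical counts for one cohort (for illustration only)
table = [[12, 388],   # infected:   12 losses, 388 no loss
         [30, 970]]   # uninfected: 30 losses, 970 no loss
print(f"Odds ratio: {odds_ratio(table):.2f}")  # prints "Odds ratio: 1.00"
```

An odds ratio near 1 in such a table would suggest COVID infection was not driving the excess losses in that cohort.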
A new preprint has been posted online assessing potential safety signals for pregnancy loss after mRNA vaccination during pregnancy using Israeli electronic health records (EHR data), with first author @joshg99 and senior author @RetsefL (newly appointed member of CDC's ACIP).
Their primary conclusion was that pregnant women vaccinated for COVID in the second half of the first trimester (wk8-13) had greater observed pregnancy loss rates than expected from a regression model fit to pre-pandemic EHR data.
They also found that pregnant women vaccinated for COVID in the second trimester (wk14-27) and pregnant women vaccinated for flu from wk8-13 or wk14-27 had significantly lower-than-expected observed pregnancy loss rates during the pandemic,
and that women vaccinated for COVID or flu before pregnancy had slightly lower-than-expected pregnancy loss rates. They attribute these results to residual confounding (i.e., a healthy vaccinee effect, HVE).
The paper is exceptionally well written and introduces a rigorous approach for identifying potential safety signals from EHR data: an active surveillance approach that avoids the key limitations of passive reporting systems (like VAERS in the USA, AEFI in Israel), namely (1) reporting bias and (2) the lack of a control group (poorly understood limitations of these systems that I have harped on ad nauseam on social media).
However, the paper has some key omissions that limit the ability to carefully interpret the results, including:
1. Failure to investigate and fully account for pandemic-related, calendar-time-varying confounders.
2. Lack of assessment of whether women remaining unvaccinated throughout pregnancy during the pandemic had higher- or lower-than-expected pregnancy loss.
3. No assessment of whether COVID-19 infections later in pregnancy were factors in post-vaccination pregnancy losses, especially for the primary cohorts.
4. Incomplete summary of results of vaccination before pregnancy.
5. Lack of assessment of whether women vaccinated before week 8 of pregnancy had higher- or lower-than-expected pregnancy loss.
6. Lack of summary of which pregnancy loss outcomes (spontaneous abortion, induced abortion, stillbirth) dominated the events in each modeled cohort.
The authors’ observed-expected analysis approach could be readily applied to perform each of these suggested analyses.
The primary vaccinated subgroups driving their conclusions are very small (e.g., the 1st dose wk8-13 cohort comprises 1.9% of pregnancies and 1.8% of pregnancy losses) and concentrated at specific times during the pandemic (e.g., ~90% of the 1st dose wk8-13 group had a last menstrual period (LMP) between 10/2020 and 1/2021). This raises concern about residual confounding in these cohorts, whether pandemic-related or medically related, and the suggested analyses could shed light on whether this concern is significant.
The inclusion of these results would provide a more complete and transparent picture of potential COVID vaccine effects on pregnancy loss.
It is not valid to dismiss any results showing lower-than-expected pregnancy loss in vaccinated subgroups as driven by residual confounding (by claiming HVE) without acknowledging that the higher-than-expected pregnancy loss in one small vaccinated subgroup (covid vaccinated wk8-13) could similarly be driven by residual confounding.
This thread will walk through the key details of their study and results, and elaborate some on these concerns.
This paper (see link below) sets out to investigate whether there is a potential safety signal for pregnancy loss when pregnant women are vaccinated for COVID-19 during the pandemic.
Their study is based on electronic health record (EHR) data from Maccabi, one of four HMOs in Israel, covering 26% of the Israeli population.
As mentioned above, this qualifies as an active surveillance study of vaccine safety that is much more rigorous than passive reporting systems (e.g., VAERS in the USA or AEFI in Israel), since the EHR contains events for control groups and does not have the same highly variable reporting bias.
Their endpoint of interest was pregnancy loss, which includes 3 outcomes: 1. Spontaneous abortion 2. Induced abortion (elective or medically-indicated) 3. Stillbirth.
All of their analyses were done on the pooled endpoint, not split out by individual outcome.
They mentioned that the EHR lacked evidence for whether induced abortions were elective or medically indicated.
Cleveland Clinic researchers have posted a preprint on medRxiv assessing the flu infection rate among employees who received the flu vaccine relative to those who did not receive the flu vaccine.
Using a Cox regression model adjusting for age, sex, clinical nursing job, and location (but not propensity to test for flu), they found the vaccinated had a hazard ratio of 1.27, concluding the vaccinated had a 27% higher risk of infection, i.e., a vaccine effectiveness of -27%.
As in their previous studies on COVID-19 vaccines (cited by many as evidence of negative vaccine effectiveness), this study suffers the same fatal flaw of testing bias, with the vaccinated significantly more likely to test for flu than the unvaccinated (27% more likely, in fact).
They try to dismiss this concern by claiming the test positivity is equivalent between vaccinated and unvaccinated based on a strange and poorly justified linear regression analysis.
But when looking at the data in their Figures 1-2 and summarizing on an appropriate scale, it is clear that test positivity is substantially (20%) lower in the vaccinated than the unvaccinated. The testing bias cannot be dismissed: the increased testing among the vaccinated is a bias that can explain their HR = 1.27, not an indication of higher infection rates.
Thus, their conclusion of -27% negative vaccine effectiveness is not supported by their data.
They computed the daily testing rate in vaccinated and unvaccinated and plotted the vaccinated/unvaccinated ratio in Table 1a, from which they acknowledge the vaccinated test at a significantly higher rate than the unvaccinated.
It is to their credit they acknowledge this.
The increased testing could represent a bias that compromises their conclusion of vaccine effectiveness, or it could be an indication of higher infections, since a 27% higher testing rate could simply be an indicator of 27% higher infection rates.
The test positivity (% of flu tests positive) can be used to assess this possibility.
If the higher confirmed infection rate were simply a function of testing bias, then test positivity would be lower in the vaccinated than the unvaccinated, indicating overtesting, and the vaccinated/unvaccinated positivity ratio would be less than 1.
If the higher testing rate simply reflected higher infection rates and not testing bias, we'd expect test positivity to be equivalent between vaccinated and unvaccinated, with a ratio near 1.
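To make the arithmetic explicit (using the approximate figures from this critique, not exact values from the paper): the confirmed-infection rate ratio is roughly the product of the testing-rate ratio and the positivity ratio.

```python
# Vaccinated tested ~27% more often, but their test positivity was ~20%
# lower; the implied ratio of confirmed-infection rates is the product.
testing_rate_ratio = 1.27   # vaccinated / unvaccinated testing rate
positivity_ratio = 0.80     # vaccinated / unvaccinated test positivity
infection_rate_ratio = testing_rate_ratio * positivity_ratio
print(f"Implied confirmed-infection rate ratio: {infection_rate_ratio:.2f}")
# prints "Implied confirmed-infection rate ratio: 1.02"
```

In other words, once the testing bias is accounted for, the apparent HR of 1.27 shrinks to roughly 1, consistent with no effect rather than negative effectiveness.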
David Geier has apparently been commissioned by HHS secretary RFK, Jr. to study potential links between vaccination and autism.
He has past papers using Vaccine Safety Datalink (VSD) data to perform case-control studies assessing potential links between infant exposure to Thimerosal-containing hepatitis B vaccines and later diagnosis of atypical autism, or of obesity.
He found highly significant associations in both studies -- linked below -- and concluded this proves that Thimerosal in hepatitis B vaccines significantly increases the risk of atypical autism, and of obesity.
However, MAJOR PROBLEM:
In both studies, his cases and controls were drawn from different time periods.
In the autism study, the dates were:
Autism Cases: 1991-1998
Controls: 1991-1992.
In the obesity study, the dates were:
Obesity Cases: 1991-2000
Controls: 1991-1994.
Besides the arbitrary nature of these date choices raising suspicion, the use of different dates for cases and controls raises major concerns of time confounding, especially if hepatitis B vaccination rates differed substantially over the 1991-2000 period.
See link below to the Hepatitis-B vaccination rates over time -- indeed the vaccination rate increased from <10% in 1991 to ~30% in 1994 to 85-90% in 1998-2000.
Thus, the case-control status is almost COMPLETELY CONFOUNDED with vaccination status, since very few would be vaccinated in 1991-1994 and vast majority vaccinated by 1996-2000.
Thus, the vaccinated would be SEVERELY OVERREPRESENTED in the cases and SEVERELY UNDERREPRESENTED in the controls, producing a spurious association between the exposure and case/control status no matter what outcome is being studied.
In other words, the methodology used in the studies was completely invalid, and the studies fatally flawed.
I hope they do better in the upcoming studies -- ideally they should include experienced biostatisticians and epidemiologists to help use a valid study design and analysis, and ensure that their interpretation of the study is supported by the empirical evidence in the data.
Let's do an exercise to show how their study flaw can generate false associations and invalid conclusions.
Here is a plot of live births and cell phone usage in Sri Lanka from 2001-2009.
Note that cell phone usage increased from <10% in 2001 to 80% by 2009.
We will ask, "Is a woman's cell phone use associated with her likelihood of having a live birth in Sri Lanka?"
We will make the same case/control design mistake made by Geier to introduce the exact same type of time bias and show how it leads to completely spurious results and invalid conclusions.
Consider the following case/control study, defining:
case = woman had a live birth that year
control = woman did not have a live birth that year
As with Geier's study, let's consider cases and controls from different time frames:
cases from 2001-2009 and
controls from 2001-2004
In this illustration, the exposure is "cell phone usage" and we want to assess whether cell phone usage appears related to the likelihood of a woman having a live birth in a given year.
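The exercise can be run as a quick simulation (my own sketch, with made-up but plausible usage rates): live births are generated completely independently of phone use, yet drawing cases from 2001-2009 and controls only from 2001-2004 manufactures a strong spurious association.

```python
import random

rng = random.Random(42)
years = range(2001, 2010)
# Phone usage rising roughly linearly from ~10% (2001) to ~80% (2009)
phone_rate = {y: 0.10 + 0.70 * (y - 2001) / 8 for y in years}

cases, controls = [], []
for year in years:
    for _ in range(20_000):
        phone = rng.random() < phone_rate[year]
        birth = rng.random() < 0.10        # independent of phone use
        if birth:
            cases.append(phone)            # cases drawn from 2001-2009
        elif year <= 2004:
            controls.append(phone)         # controls from 2001-2004 only

a, b = sum(cases), len(cases) - sum(cases)
c, d = sum(controls), len(controls) - sum(controls)
print(f"Spurious odds ratio: {(a * d) / (b * c):.2f}")
```

Even though the true association is exactly null, the mismatched time windows alone push the odds ratio well above 1 (roughly 2.5-3 with these settings), because cases oversample the later, high-usage years.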
There is talk from HHS that fluoride in water may lead to reduced cognitive levels in children.
There is a new published paper on a Bangladeshi study assessing links between urinary fluoride levels and cognitive performance of children at 5yrs and 10yrs that is being touted as evidence of this link.
The authors conclude there are links between prenatal maternal urinary fluoride levels and children's cognitive levels at 5 and 10 years, and between children's urinary fluoride levels at 10 years and cognitive levels at 10 years; they also report a negative but not statistically significant association between urinary fluoride levels at 5 years and cognitive levels at 5 and 10 years.
However, when looking at the data and analyses in the paper in detail, the evidence for these links is at best very weak.
In this thread, I will explain some of the data and statistical details that lead me to that conclusion.
The primary question of this study is whether fluoride exposure from water is linked with lower cognition, which in this study involves analyzing potential associations between urinary fluoride levels at 5 years and cognitive scores at 5 and 10 years, urinary fluoride levels at 10 years and cognitive scores at 10 years, and prenatal urinary fluoride levels at 8 weeks gestation and cognitive scores at 5 and 10 years.
They propose that, given fluoride's short half-life, urinary fluoride levels should be a reasonable surrogate for fluoride levels in the water, and they measured water fluoride levels at the 10-year time point to show some correlation with the urinary levels.
While the authors confidently conclude evidence for a link, when one looks at the actual data there is little to no evidence of any link, and they have to take a series of unusual and potentially questionable statistical steps to arrive at these conclusions.
I will describe what I mean.
Given that the primary hypothesis is whether fluoride in the water leads to decreased cognitive abilities in children, I will start with the analysis of children's urinary fluoride levels and their cognitive test results at 5yrs and 10yrs.
Many people misunderstand "indirect costs" (i.e. F&A) from NIH grants, thinking these are superfluous or unnecessary costs or a "gift" to universities.
Indirect costs are actually "Facility and Administrative" (F&A) expenses that are not allowed to be included in the direct cost budget of an NIH grant, but are essential for research.
Below is a video that explains how the system works: what F&A expenses cover, how they are determined, why they are essential, and how the proposed severe reduction of 65-80% of these expenses endangers the entire US research enterprise, which leads the world and provides great societal benefit technologically, medically, and economically.
The indirect costs, or Facilities and Administrative (F&A) expenses, are not a "tip" or "fluff" for the universities, but cover essential research costs that are not allowed to be included in the direct cost budget for an NIH grant, including:
* Laboratory and other research-specific facilities
* Utilities including electricity, gas, water, HVAC
* Shared research instruments
* Maintenance and security for research facilities
* IT and cybersecurity infrastructure
* Administrative staff needed to meet federal requirements including safety, compliance, and research operations.
Research could not be done without these expenses, and without F&A it is not clear how they can be covered.
Typically, the indirect costs from grants only partially cover these expenses, with universities and research centers having to cover 30-50% of these required expenses through other means.
As a result, universities subsidize federal research, and typically lose a substantial amount of $$ for their research enterprise.
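To make the arithmetic concrete (illustrative numbers, not any specific university's negotiated rate): F&A is recovered as a negotiated percentage applied to a grant's direct-cost base, so capping the rate cuts recovery proportionally.

```python
# Illustrative F&A arithmetic with hypothetical figures.
direct_costs = 500_000          # annual direct costs on a grant ($)
negotiated_rate = 0.60          # e.g., a negotiated F&A rate of 60%
capped_rate = 0.15              # a proposed flat 15% cap

current_fa = direct_costs * negotiated_rate   # $300,000 recovered
capped_fa = direct_costs * capped_rate        # $75,000 recovered
reduction = 1 - capped_fa / current_fa
print(f"Reduction in recovered F&A: {reduction:.0%}")  # prints "75%"
```

A cut from a ~60% rate to a 15% cap is a 75% reduction in recovered indirect costs, which is where the 65-80% figure above comes from, and the shortfall would come on top of the 30-50% of facility and administrative expenses universities already cover themselves.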