@Jabaluck acknowledges the concern about whether randomization was effective: the survey teams that enrolled participants were more motivated to enroll patients in villages randomized to masks. [~14,000 more patients were in the mask intervention arm than in control.]
This imbalance creates a potentially fundamental problem, because the primary endpoint (symptomatic seropositive patients) differed by only ~20 cases among the ~10,000 of ~300,000 patients who were convinced to give blood samples.
@Jabaluck argues that the positivity rate was the preregistered endpoint, not the # of positives. There is nothing wrong with this, but it does mean you have to be very careful about getting the denominators right.
What if survey members with different instructions had recruited 5,000 fewer people in the treatment villages? Could this have resulted in 5 fewer symptomatic seropositives? When the total case difference is 20, 5 is a big number.
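To make the denominator point concrete, here's a back-of-envelope sketch. The counts are hypothetical round numbers picked only to match the scale above (~5k blood samples per arm, a ~20-case gap), not the study's actual figures:

```python
def prevalence_ratio(cases_tx, n_tx, cases_ctrl, n_ctrl):
    """Crude prevalence ratio: treatment-arm rate divided by control-arm rate."""
    return (cases_tx / n_tx) / (cases_ctrl / n_ctrl)

# Illustrative baseline: a 20-case gap among ~5k blood-sampled people per arm
print(prevalence_ratio(180, 5000, 200, 5000))   # ~0.90, i.e. ~10% relative reduction

# Shift just 5 cases and the apparent effect shrinks
print(prevalence_ratio(185, 5000, 200, 5000))   # ~0.93

# Change only who got enrolled/sampled (the denominator) and the ratio moves too
print(prevalence_ratio(180, 4800, 200, 5000))   # ~0.94
```

Small shifts in either the numerator or the denominator move the headline ratio a lot when the total case gap is this small.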
Additionally, the lack of blinding of the study staff assigned to observe things like the rate of masking is relevant. Could they have been biased to report more masking than there actually was?
What if 20k rather than 10k agreed to give blood samples?
@Jabaluck argues that none of these biases are significant enough to affect the bottom line. He cites secondary analyses that trend in the same 'mask-protective' direction .. perhaps .. (there are medical correlates, which are easier for me to connect, that I mentioned on the pod).
The other issue relates to whether the difference found between the 2 groups is random chance or not. This is where statistical significance and Dr. Wang come in..
Using the same data but a different statistical model, a non-significant result emerges.
The differences between statistical methods here are not small. For risk ratios, a confidence interval crossing 1 means what you've found is probably not significant; here we go from 0.78-0.997 to 0.118-6.79 for surgical masks with baseline controls.
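For intuition on why one interval excludes 1 and the other doesn't: a risk-ratio CI is usually built on the log scale, so its width is driven by the standard error of log(RR). A minimal sketch follows; the 0.89 point estimate is the preprint's quoted ~11% reduction, and the two standard errors are back-solved to roughly reproduce the intervals above, not values taken from either analysis:

```python
import math

def rr_ci(rr, se_log_rr, z=1.96):
    """95% CI for a risk ratio, built on the log scale: exp(log(RR) +/- z*SE)."""
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return lo, hi

rr = 0.89  # ~11% relative reduction, as quoted in the preprint

print(rr_ci(rr, se_log_rr=0.058))  # ~ (0.79, 1.00): barely excludes 1 -> "significant"
print(rr_ci(rr, se_log_rr=1.03))   # ~ (0.12, 6.7): a model implying much larger
                                   # variance blows the interval out well past 1
```

Same point estimate, very different uncertainty; much of the disagreement between the two analyses seems to come down to how that uncertainty is estimated.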
I don’t know what to make of that difference; perhaps the two econometricians @excel_wang & @Jabaluck can figure it out?
I think, empirically, there’s enough here to say the study was a tremendous effort, but one that requires faith in a lot of behind-the-scenes assumptions to reach the conclusion that community masking is effective.
I know @Jabaluck disagrees .. but just as DANMASK doesn’t exclude the possibility that masks work, this study can’t possibly exclude the possibility that masks don’t work.
This is a bit important as some policymakers consider mask-forever mandates, and increasingly entertain punitive measures to ensure compliance because the data is ‘unequivocal’.
In the last 30 minutes we posed the question of exactly how much we should be slaves to empiricism, even if there were a clear empirical signal everyone could agree on.
While I suspect @Jabaluck would like to quantify emotions that defy empiricism at the moment (how do we weigh grandma’s joy at attending her grandson’s 5th birthday party vs. the risk to her, and the risk to those in her community?), I hope he won’t succeed anytime soon :)
Again, thanks to @Jabaluck and team for allowing these conversations .. I hope this doesn’t dissuade others from putting out raw data as well.
Btw, anyone know if the 600-village Chinese cluster RCT on salt substitution put out their raw data?
Again kudos to @beenwrekt for taking the trouble to find out what the raw numbers actually were in the Bangladesh mask RCT that’s been used in court to support school mask mandates.
The difference between the raw data and what was presented in the preprint is striking. 1/
Here is the verbiage from the study: an 11% relative risk reduction in symptomatic seroprevalence in the treatment group that was given surgical masks.
The tables to support these words are here ..
The authors could have chosen to give us the actual raw numbers of symptomatic seropositives in treatment vs. control, but instead we get interventional prevalence ratios and interventional coefficients ..
Brief summary for those interested: the Bangladesh mask study was a cluster RCT (cluster because the unit of randomization was a village). The treatment group got a public-policy intervention to increase mask use; the control group was basically a poorly enforced govt. mask mandate.
Per the preprint, 342,126 individuals were in the study. The endpoint was COVID-19 symptoms AND positive antibodies.
The key table shows that of ~150k patients in each arm, blood samples could only be collected from ~5k patients in each arm.
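Because the village, not the person, is the unit of randomization, any analysis has to account for people in the same village being correlated. A minimal simulated sketch of a village-level comparison (entirely made-up numbers, nothing from the study):

```python
import random, statistics, math

random.seed(0)

def simulate_arm(n_villages=300, people_per_village=17, base_rate=0.008):
    """Per-village symptomatic-seropositive rates, with shared village-level noise."""
    rates = []
    for _ in range(n_villages):
        village_rate = max(base_rate + random.gauss(0, 0.004), 0.0)
        cases = sum(random.random() < village_rate for _ in range(people_per_village))
        rates.append(cases / people_per_village)
    return rates

control = simulate_arm(base_rate=0.0080)
treated = simulate_arm(base_rate=0.0072)   # ~10% lower, for illustration only

# Compare mean village rates; the uncertainty comes from village-to-village spread,
# not from pretending ~5k individuals per arm are independent observations.
diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))
print(f"diff in village means = {diff:.5f}, "
      f"~95% CI = ({diff - 1.96 * se:.5f}, {diff + 1.96 * se:.5f})")
```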
I did appreciate the conversation, but it’s telling that one of the main data points @drsanjaygupta, chief medical correspondent at @CNN, chose to educate @joerogan on probably isn’t correct.
To be a vaccine provider in Philly: a months-long application process, hours of webinars (mid-day), uploading vaccines administered, wasted, and in stock to 2 different websites every 24 hours, unpredictable allocation from the local DOH, and a 30-day expiration in a -4 fridge. 1/
After we vaxxed everyone in the practice who wanted a vax, and walked to almost every business within 1 mile, street-cleaning crews, random passersby, etc. .. declining demand and the reg. requirements made it too hard to maintain vax stock.
Federal allocation schemes that have wide popular support generally favor big players that can navigate the regulatory thicket and grease the right wheels to get early disbursements of product (and make a small killing doing it).
In this preprint, the VAERS database was interrogated for anyone given a diagnosis of myocarditis/pericarditis/myopericarditis/chest pain, AND the case definition appears to require an abnormal level of a very sensitive blood marker of cardiac damage (troponin).
A few bits about the VAERS database: it was legislated into existence via the National Childhood Vaccine Injury Act (NCVIA) of 1986, which was a mechanism to shield vaccine manufacturers from litigation related to potential adverse events after vaccination.