@K_Sheldrick BTW, I checked your p-values for Fisher's exact tests and they appear correct.
Indeed, it is unusual that there are not more small p-values, given the uniform distribution expected under the null — assuming they reported all factors and not just a subset that appeared similar ...
@K_Sheldrick Now, it is possible that they compared the 2 groups for more than these 22 factors, and only presented these 22 (perhaps because they appeared to demonstrate balance in randomization) and omitted the others.
@K_Sheldrick If we assume that did not occur and these are the only 22 they looked at, then we can use permutation tests to assess the rarity of observing p-value distributions like this under the null hypothesis of no association between randomization and any of these factors.
@K_Sheldrick We can do this by conditioning on the marginals for each of the 22 2x2 contingency tables, but randomly reallocating the counts within the contingency table, and then computing the Fisher's exact test for each.
@K_Sheldrick If we repeat this process for many (e.g. 10,000) random permutations, we get a set of 10,000 Fisher's exact test statistics (and p-values) for each of the 22 factors.
@K_Sheldrick To see if the distribution across the 22 factors is unusual, we can look at the distribution of Fisher p-values across the 22 factors across the 10,000 permutations.
For example, below is the permutation distribution of minimum p-values from among the 22 factors.
@K_Sheldrick This shows the expected distribution of minimum Fisher p-values across the 22 factors, conditioning on their marginals, assuming no association, and that these 22 factors were not selected from a larger subset to be more similar across randomization groups.
@K_Sheldrick The actual minimum Fisher p-value across these 22 factors is 0.409, which is unusually large -- only 3/10,000=0.0003 of the permutation distributions had a minimum p-value that large.
So, this is strong enough evidence to reject the specified null hypothesis.
@K_Sheldrick If we look at the 2nd smallest p-value, 0.486, we can compare with the corresponding permutation distribution, and find only 32/10,000=0.0032 are at least this large, again providing strong evidence that the null hypothesis is not true:
@K_Sheldrick This is also true if we look at many of the other ranks.
Of course, the small sample sizes in some of the cells lead to an artificially high number of p-values of exactly 1.0, but this permutation distribution takes these artifacts into account.
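The Monte Carlo procedure described above can be sketched in a few lines. The tables below are hypothetical stand-ins (the actual analysis used the 22 contingency tables from the paper's baseline table, which are not reproduced here), and the sketch uses 2,000 simulations rather than the 10,000 in the thread to keep it quick:

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

# Hypothetical 2x2 tables ([[a, b], [c, d]]); the real analysis used the
# 22 baseline-comparison tables from the paper.
tables = [
    np.array([[12, 10], [88, 90]]),
    np.array([[5, 6], [95, 94]]),
    np.array([[30, 28], [70, 72]]),
]

observed_min = min(fisher_exact(t)[1] for t in tables)

n_sims = 2_000  # the thread used 10,000
sim_mins = np.empty(n_sims)
for s in range(n_sims):
    ps = []
    for t in tables:
        row1, col1, n = t[0].sum(), t[:, 0].sum(), t.sum()
        # Redraw the table conditional on its margins:
        # the (1,1) cell is hypergeometric(col1 good, n-col1 bad, row1 drawn).
        a = rng.hypergeometric(col1, n - col1, row1)
        sim = np.array([[a, row1 - a], [col1 - a, n - row1 - col1 + a]])
        ps.append(fisher_exact(sim)[1])
    sim_mins[s] = min(ps)

# Upper-tail probability: how often the simulated minimum p-value is
# at least as large as the observed minimum.
tail = (sim_mins >= observed_min).mean()
```

The same `sim_mins`-style machinery gives the permutation distribution of the second-smallest p-value (or any rank) by sorting `ps` instead of taking its minimum.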
@K_Sheldrick As Sheldrick notes, this does suggest something unusual here: we would not expect randomization to produce groups this well matched by random chance alone.
@K_Sheldrick However, I would not be confident making claims of fraud on the basis of these results -- there are other possible explanations:
@K_Sheldrick 1. Stratified randomization: if randomization were stratified by some factors, even factors not in this table, the association of those factors with the ones in the table could produce groups more alike than expected under unconstrained randomization.
@K_Sheldrick 2. Table does not present all factors considered: If the 22 factors represent not all factors for which they tested for comparability, but rather a selected subset, this changes the context & could lead to a table like this in which the groups are more alike than expected by chance.
@K_Sheldrick For example, suppose they looked at 40 factors, only presented these 22, with the other 18 not presented being less balanced. This type of selection would change the expected distribution of Fisher p-values across factors, as well, making them overrepresented for large p-values.
@K_Sheldrick This would not be good practice either. I am in no way accusing the authors of this, but hypothetically this would be another explanation for these unusual results.
@K_Sheldrick So, in conclusion, I agree with @K_Sheldrick that this table suggests something unusual - that the groups are unusually well-balanced across these 22 factors for unconstrained randomization - but I am not willing to infer the study is fraudulent on the basis of this result alone
@K_Sheldrick BTW, to be more precise, I did these calculations via Monte Carlo simulation conditioning on the estimated marginals as the truth when simulating the contingency tables under the specified null distribution.
@K_Sheldrick These results are even more unusual when we consider that, since these were observational cohorts, one would expect the groups to be even more different than under randomization — again, unless they did matching or presented only a subset of closely matching factors.
Some may ask:
"If myocarditis rates are higher after infection than vaccination, why did we not hear about myocarditis in 2020, before vaccination?"
Good question. Let's check the data.
CDC study estimated for 12-17yr males the rate of myocarditis within 21d of:
1. COVID infection: 64.9 per 100k
2. 1st dose of mRNA vaccine: 3.3 per 100k
3. 2nd dose of mRNA vaccine: 35.9 per 100k
Let's consider how many cases of myocarditis we'd expect in 2020 and since.
From the report from the American Academy of Pediatrics (AAP), we can see that through 12/31/20, the number of children with confirmed COVID-19 infections was 2,128,587, which was 12.4% of the 17,137,295 total cases in the USA.
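As a rough illustration of the arithmetic this sets up — and only an illustration, since it applies the CDC's 12-17yr-male rate to all confirmed pediatric cases, not just adolescent males, and ignores undercounting of infections:

```python
# Assumption: applying the 12-17yr male rate to all pediatric cases.
rate_per_100k = 64.9              # myocarditis within 21d of infection (CDC, 12-17yr males)
confirmed_cases_2020 = 2_128_587  # pediatric COVID-19 cases through 12/31/20 (AAP)

expected_cases = confirmed_cases_2020 * rate_per_100k / 100_000
print(round(expected_cases))  # → 1381
```

The true number of infections in 2020 was likely far higher than confirmed cases, which would push this figure up further.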
Yesterday CDC released a report tracking rates of cardiac issues (myo/pericarditis) after infection or mRNA vaccination combining information across 40 health systems in the USA, stratified by age/sex. cdc.gov/mmwr/volumes/7…
The study included data from 40 health systems in the USA from 1/1/21-1/31/22, and counted cases of myocarditis/pericarditis within 7 and 21 days after COVID-19 infection or vaccination, with vaccination split out by dose 1, dose 2, unspecified dose, or any dose.
Results were split out by age groups 5-11yr, 12-17yr, 18-29yr, and 30+yr, and sex. The sample sizes were ~800k infected, 2.5m 1st dose, 2.5m 2nd dose, 1.7m unspecified dose, and 6.7m any vaccine dose. Here are the demographics of the different groups.
On Monday, Nature Medicine published this paper online showing results on Moderna vaccine effectiveness against infection or hospitalization, split by 2 vs. 3 doses as well as time since vaccination, using a test-negative matched case/control design.
Transmission is studied by looking at the Secondary Attack Rate (SAR) using contact tracing, here tracking 6,397 secondary infections within 11,937 Danish households.
The headline result is that for the unvaccinated, Omicron has transmissibility similar to Delta (1.17x), while the vaccinated and boosted are much more likely to transmit Omicron than Delta (2.61x and 3.66x, respectively).
Many in the public consider science a set of incontrovertible facts — secret knowledge dispensed to the public the way a medieval priest dispensed doctrine to the congregation.
This perception of science is far off from what it actually is ...
Is watching the 1984 Ghostbusters movie killing people?
Recent data show the death rate of 10-59yr olds who have watched the 1984 Ghostbusters movie is 2x higher than that of those who have watched the 2021 Ghostbusters movie.
I don't know how to explain this other than movie-caused mortality
I was told by everyone that the 1984 Ghostbusters movie was safe and amusing, and never anticipated it could be so dangerous to young people!
It appears the 2021 film is MUCH safer, and strongly preferred, but this fact has been hidden by conspiracy
Digging further, I noticed the probability of seeing the 1984 film is higher for those at the older end of the age spectrum, while the demographic watching the 2021 film is much younger.
I guess that makes sense, since for GenX'ers the 1984 movie came out in their childhood and teen years.
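The point of the satire — confounding by age — can be made concrete with a toy calculation. All numbers below are made up for illustration: age-specific death rates are identical for both audiences, yet the crude death rate for the 1984 audience comes out far higher simply because that audience skews older.

```python
# Made-up age-specific annual death rates, identical for both films.
death_rate = {"young": 0.001, "old": 0.02}

# Made-up audience compositions: the 1984 audience skews old, the 2021 young.
audiences = {
    "1984": {"young": 1_000, "old": 9_000},
    "2021": {"young": 9_000, "old": 1_000},
}

crude = {}
for film, counts in audiences.items():
    deaths = sum(counts[age] * death_rate[age] for age in counts)
    crude[film] = deaths / sum(counts.values())

# The crude rate is much higher for the 1984 audience even though the
# film has no effect at all — age, the confounder, drives the entire gap.
print(crude["1984"] / crude["2021"])  # → ~6.2x
```

The same mechanism is why comparing crude death rates between groups with different age structures (watchers of old vs. new films, or any two self-selected populations) says nothing about causation without age adjustment.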