It's helpful to make claims about likely effectiveness quantitative. Instead of "boosters likely work", how about "boosters likely reduce hospitalizations/etc by at least X%"?
Instead of "masks should work in schools", "I think masks reduce cases in schools by at least X%".
1/5
A claim that something works, without quantifying how well you think it works, is barely even scientific, since it fails to be falsifiable; one can always dismiss negative findings as underpowered to detect smaller and smaller effects.
2/
Being quantitative about beliefs is a prerequisite to studying them well. If you believe an intervention may have a 90% benefit for a measurable target, observational studies may be able to lend strong support, even though we would look to large RCTs to support an expected 10% effect.
3/
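To make that concrete, here is a rough sketch (my own illustration, not from the thread) of the standard two-proportion sample-size arithmetic. The 10% baseline event rate, 5% two-sided alpha, and 80% power are assumptions chosen only for illustration: a ~90% relative reduction can be detected with roughly a hundred participants per arm, while a ~10% reduction needs on the order of ten thousand per arm.

```python
# Illustrative only: approximate sample size per arm for a two-arm trial,
# using the normal-approximation formula for comparing two proportions.
# The baseline rate, alpha, and power below are assumed for illustration.
from statistics import NormalDist

def n_per_arm(p_control: float, p_treated: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per arm to detect p_control vs p_treated."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired power
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return int((z_a + z_b) ** 2 * variance / (p_control - p_treated) ** 2) + 1

baseline = 0.10  # assumed 10% event rate without the intervention
print(n_per_arm(baseline, baseline * 0.10))  # 90% relative reduction: ~100 per arm
print(n_per_arm(baseline, baseline * 0.90))  # 10% relative reduction: ~13,500 per arm
```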
Just as importantly, being quantitative about what we believe can clarify the real sources of disagreement.
Do persons A and B who disagree about an intervention disagree irreconcilably about the range of possible benefit? Or about the range of possible harm? (Or neither?)
4/
By being more quantitative, we can better understand our disagreements, be better equipped to study them scientifically, and better understand our own beliefs.
5/5
Everyone should look at the remarkable work done in this cluster randomized trial.
They found that an intervention which increased surgical mask uptake in community settings significantly reduced SARS-CoV-2 infection among older adults.
Most people's first instinct is to claim that studies like this support what they already knew. In reality, the study had specific and not necessarily intuitive findings.
The study even collected predictions from experts, and found that they failed to predict the study outcomes!
2/6
For example, the study found that increasing mask usage had a statistically significant effect on SARS-CoV-2 infection.
But these results were driven by surgical mask use, and by reductions in infections in people over 50.
A brief reminder that CDC mask guidelines start at age 2.
At the time of this writing, the MMWR report with data had 74 retweets, while the MMWR report on that one time this one crazy thing happened had 1.1K retweets, many from serious people claiming that this report fundamentally changed our understanding of COVID-19 risk in schools.
2/
Data is boring and stories seem compelling. But scientists and public health agencies should be actively working against the natural tendency to give greater weight to outlier incidents than to a data-driven understanding of risks.
3/
One question that has not been discussed much is whether regulators have a special role to play in deciding when coercive measures can be used to increase vaccine uptake.
In practice the hurdle for this has just been EUA, not even full approval.
1/
I think it is worth thinking about what principles should guide the decision of when coercive measures are ethically appropriate and whether regulators should play a role in adjudicating when that bar is crossed.
2/
The current situation is that we allow mandates even in cases where no clinical trial has weighed the direct individual risk/benefit (e.g., mandates for individuals with confirmed previous infection; this may soon also be the case for boosters).
Needless to say, politicians who have already made the decision to push forward with early boosters have an incentive to sell that decision to others as a wise and prudent one. Indeed, once the decision has been implemented, trials no longer serve a helpful political purpose.
2/
But for the actual people who will receive boosters (let alone those who might have received a first dose of vaccine had the dose not been used as a booster in the U.S.), questions about whether boosters actually have any real clinical benefit (versus small risks) are crucial.
3/
We should always be cautious not to inappropriately infer causality from correlations via confirmation bias.
But this survey-based paper, which did not measure in-school cases (let alone transmission), did not even find significant *correlations* with student mask policies.
1/
The authors do not even discuss the effects of student masking in the body of the paper.
Note that if we were really determined to infer direct causality from these results, one of the strongest findings would be "desk shields cause infection in household contacts".
2/
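As a side note on why cherry-picking one association out of many is risky, here is a small simulation of my own (illustrative assumptions only, nothing from the paper): screen a couple dozen made-up school measures that have no real effect on infection, and some will still cross the conventional significance threshold by chance.

```python
# Hedged illustration (not from the paper): when many candidate measures are
# screened against an outcome, some "significant" associations can appear by
# chance alone, even when none of the measures has any real effect. The 600
# schools, 20 measures, and 10% baseline risk are made-up numbers.
import random

random.seed(0)
n_schools, n_measures = 600, 20

false_positives = 0
for _ in range(n_measures):
    # Each school adopts the measure (True) or not, independent of the outcome.
    adopted = [random.random() < 0.5 for _ in range(n_schools)]
    outcome = [random.random() < 0.10 for _ in range(n_schools)]  # 10% baseline risk
    with_m = [o for o, m in zip(outcome, adopted) if m]
    without = [o for o, m in zip(outcome, adopted) if not m]
    # Crude two-proportion z-test between adopters and non-adopters.
    p1, p2 = sum(with_m) / len(with_m), sum(without) / len(without)
    pooled = (sum(with_m) + sum(without)) / n_schools
    se = (pooled * (1 - pooled) * (1 / len(with_m) + 1 / len(without))) ** 0.5
    if abs(p1 - p2) / se > 1.96:  # "significant" at the 5% level, no real effect
        false_positives += 1

print(f"{false_positives} of {n_measures} null measures look 'significant'")
```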
Masks in schools are controversial and I can understand why some would look at uncertain evidence and think that plausible benefits make it something worth trying.
But it is not scientific to overstate what we know about this intervention.
3/
This decision seems to reflect a common fallacy: reasoning about whether we "need X" based only on what things are like without X, rather than on how much X would actually help.
It's surprising to see it at play in a decision of such consequence.