In advance of the Danish mask study's expected publication, Sarah Wieten (@SarahWieten), Emily Smith (@DrEmilyRSmith), and I have written our concerns about its framing and design.
Our comments have been sent to DMJ editors, and are available on PubPeer.
We have reached out to the DMJ twice via e-mail since September 8 with regard to our letter to the editor, and have not yet received a response.
An update: We have gotten in touch with the DMJ editors, and expect our letter to be published in the near future.
Strangely, the rumors of the trial results being published imminently appear to be inaccurate; we have seen no such publication.
The DMJ editors have informed us that they are waiting for Bundgaard et al. to draft a response letter, expected at the end of October, at which time they will publish our letter and their response simultaneously.
Bumping this thread, as it appears as though public interest in this study is increasing rapidly.
I want to emphasize: We have no connection to this study except through the publicly released registration and design description, which we sincerely thank the authors for providing.
We have not seen results; have not had contact with any journal editors with regard to this study except our letter (below); have no inside or private information about its results or design details; have no financial, institutional, or social links to this study, etc.
We strongly recommend reading our short (500 word) critique in full, as we address several distinct issues with the study design.
It will hopefully appear in DMJ soon alongside a reply from the study authors, which we look forward to reading.
I should note that - while being underpowered is a serious concern - the most important issues are with the intervention as implemented (messaging + free masks), outcome (not source control), framing, and mechanisms which bias results toward the null, as outlined in the letter.
Minor update: We found a small error (we wrote "odds" instead of "risk," and yes, those who have worked with or followed me might note the irony in that mistake).
It's now corrected.
Medium-sized update: The editors at DMJ have informed us that our letter and the response letter from the DANMASK-19 author team is expected to be published next week.
We look forward to reading the response and thank the authors for their engagement in this complicated issue.
Our letter and the response are now published in DMJ. We sincerely thank the authors and the DMJ editors for engaging with this complicated topic.
The results line up neatly with our predictions, and the limitations sections clearly outline the major issues which we documented in September.
I strongly recommend reading the editorials, which provide substantial and important context.
Note that our only opinion regarding its publication is/was:
"Ideally, this study should be published in such a way that helps ensure that everyone understands the flaws and limitations of its design before they make conclusions on its results." (sent to @SergioEfe on Nov 12)
That appears to have been achieved as well as is possible within the context of our publication system, to the credit of the editors at @AnnalsofIM, the peer reviewers, the authors of the trial, and the authors of the two editorials.
For those who are entering this thread from the middle: we initially wrote and submitted our concerns about its design in early September in response to the published design, months before the results were available.
Folks often say that DAGs make our causal inference assumptions explicit. But that's only kinda true
The biggest assumptions in a DAG aren't actually IN the DAG; they're in what we assume ISN'T in the DAG. It's all the stuff that's hidden in the white space.
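As a minimal sketch of what that means (the variables and edges here are hypothetical, purely for illustration): the arrows we draw are explicit, but every pair of nodes left unconnected, and every variable left out entirely, is an assumption hiding in the white space.

```python
# Minimal sketch: a DAG written as a set of directed edges.
# Variables and edges are hypothetical, purely for illustration.
from itertools import combinations

nodes = ["Exposure", "Outcome", "Age", "Smoking"]
edges = {("Age", "Exposure"), ("Age", "Outcome"),
         ("Smoking", "Outcome"), ("Exposure", "Outcome")}

# Every unordered pair with no arrow in either direction is an implicit
# "no direct causal effect" claim; every variable omitted from `nodes`
# altogether is an implicit "no unmeasured common cause" claim.
implicit_no_effect = [
    (a, b) for a, b in combinations(nodes, 2)
    if (a, b) not in edges and (b, a) not in edges
]
print(implicit_no_effect)  # [('Exposure', 'Smoking'), ('Age', 'Smoking')]
```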
Time to make it official: short of some unbelievably unlikely circumstances, my academic career is over.
I have officially quit/failed/torpedoed/given up hope on/been failed by the academic system and a career within it.
To be honest, I am angry about it, and have been for years. Enough so that I took a moonshot a few years ago to do something different that might change things or fail trying, publicly.
I could afford to fail since I have unusually awesome outside options.
And here we are.
Who knows what combination of things did me in: incredibly unlucky timing, not fitting in boxes, less "productivity," lack of talent, etc.
In the end, I was rejected from 100% of my TT job and major grant applications.
Always had support from people, but not institutions.
Ever wondered what words are commonly used to link exposures and outcomes in health/med/epi studies? How strongly language implies causality? How strongly studies hint at causality in other ways?
READ ON!
Health/med/epi studies commonly avoid using "causal" language for non-RCTs to link exposures and outcomes, under the assumption that "non-causal" language is more "careful."
But this gets murky, particularly if we want to inform causal questions but use "non-causal" language.
To find answers, we did a kinda bonkers thing:
GIANT MEGA INTERDISCIPLINARY COLLABORATION LANGUAGE REVIEW
As if that wasn't enough, we also tried to push the boundaries on open science, in hyper transparency and public engagement mode.
Granted, we only see the ones that get caught, so "better" frauds are harder to see.
But I think people don't appreciate just how hard it is to make simulated data that don't have an obvious tell, usually because something is "too clean" (e.g. the uniform distribution here).
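As a rough illustration of one such tell (all numbers here are invented): counts fabricated to be almost perfectly uniform fit the expected distribution *better* than honest sampling noise ever would.

```python
# Sketch of a "too clean" tell: fabricated counts that are suspiciously
# uniform versus counts with real sampling noise. Numbers are made up.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
n, k = 1000, 10                                  # 1000 "responses", 10 categories

real = rng.multinomial(n, [1 / k] * k)           # honest multinomial noise
fabricated = np.full(k, n // k)                  # perfectly even counts

for label, counts in [("real      ", real), ("fabricated", fabricated)]:
    stat, p = chisquare(counts)                  # goodness of fit vs. uniform
    print(label, "chi2 =", round(stat, 2), " p =", round(p, 3))

# The fabricated row gives chi2 = 0, p = 1.0: it matches the expected uniform
# distribution more exactly than chance alone would plausibly allow.
```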
At some point, it's just easier to actually collect the data for real.
BUT.
The ones that I think are going to be particularly hard to catch are the ones that are *mostly* real but fudged a little haphazardly.
Perpetual reminder: cases going up when there are NPIs (e.g. stay-at-home orders) in place generally does not tell us much about the impact of the NPIs.
Lots of folks out there making claims based on reading tea leaves from this kind of data and shallow analysis; be careful.
What we want to know is what would have happened if the NPIs were not there. That's EXTREMELY tricky.
How tricky? Well, we would usually expect cases/hospitalizations/deaths to have an upward trajectory *even when the NPIs are extremely effective at preventing those outcomes.*
The interplay of timing, infectious disease dynamics, social changes, data, etc. make it really really difficult to isolate what the NPIs are doing alongside the myriad of other stuff that is happening.
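To see why with a toy model (all parameter values below are illustrative, not estimates of anything real): in a simple SIR simulation, daily cases keep climbing for weeks even when an NPI cuts transmission substantially, so an upward curve under the NPI says little without the unobserved no-NPI counterfactual.

```python
# Toy SIR sketch: cases rise for weeks under an effective NPI.
# All parameters are illustrative, not estimates of anything real.
import numpy as np

def sir_daily_cases(beta, gamma=0.2, days=60, n=10_000_000, i0=1_000):
    s, i = n - i0, i0
    daily = []
    for _ in range(days):
        new = beta * s * i / n      # new infections today
        s -= new
        i += new - gamma * i
        daily.append(new)
    return np.array(daily)

no_npi   = sir_daily_cases(beta=0.40)  # no intervention
with_npi = sir_daily_cases(beta=0.25)  # NPI cuts transmission, but R stays > 1

print("with NPI, daily cases day 10 vs day 30:", int(with_npi[9]), int(with_npi[29]))
print("without NPI, daily cases day 30:       ", int(no_npi[29]))
# Cases still climb under the NPI; its effect lives in the *difference*
# between the two curves, i.e. the counterfactual we never observe directly.
```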
The resistance to teaching regression discontinuity as a standard method in epi continues to be baffling.
I can't think of a field for which RDD is a more obviously good fit than epi/medicine.
It's honestly a MUCH better fit for epi and medicine than econ, since healthcare and medicine are just absolutely crawling with arbitrary threshold-based decision metrics.
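For a flavor of how naturally those thresholds map onto the method, here's a bare-bones sketch on simulated data (the biomarker, cutoff, and effect size are all made up): treatment assigned sharply at a cutoff, with the effect read off from the jump in outcomes at that cutoff.

```python
# Bare-bones sharp RDD sketch on simulated data; the biomarker, cutoff,
# and treatment effect are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, cutoff, bandwidth = 5_000, 140.0, 10.0        # e.g. a blood-pressure-style threshold

biomarker = rng.normal(cutoff, 20, n)            # running variable
treated = (biomarker >= cutoff).astype(float)    # sharp assignment rule
outcome = 0.05 * biomarker - 2.0 * treated + rng.normal(0, 1, n)  # true effect: -2

# Local linear fit on each side of the cutoff; the gap between the two fits
# at the cutoff estimates the effect for people right at the threshold.
window = np.abs(biomarker - cutoff) <= bandwidth
left = window & (biomarker < cutoff)
right = window & (biomarker >= cutoff)

fit_left = np.polyfit(biomarker[left], outcome[left], 1)
fit_right = np.polyfit(biomarker[right], outcome[right], 1)
effect = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print("estimated effect at the cutoff:", round(effect, 2))  # close to -2
```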
(psssssst to epi departments: if you want this capability natively for your students and postdocs - and you absolutely do - you should probably hire people with cross-disciplinary training to support it)