A few things he doesn't mention are that the Phase 3 trials took only around two months to reach statistical significance, and that the vaccine could have been approved in early October instead of mid-December, especially without the 5-6 week data review by the FDA.
Second, one could have run the same trials but overlapped them so that we get through them all much faster: e.g., start Phase 2 before Phase 1 is fully complete, roll a large control group from Phase 1 into the Phase 3 control arm, and so on, so that we reach statistical significance faster. Also, once you have given 500 Phase 2 injections and seen no harmful side effects, you could commence immediately with the first (small) batch of Phase 3 injections, and then continue only if no complications arise.
As I understand it, the Phase 2 participants from June did not carry over into the Phase 3 trial, which only began at the end of July for Moderna.
Since, in Phase 2, you are still mainly worried about safety, you could enroll a large control group to reach statistical significance sooner without putting any more people at risk.
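To make the control-group arithmetic concrete, here is a minimal back-of-the-envelope sketch; it is my illustration, not anything Moderna or the FDA actually used. The efficacy, attack rate, arm size, significance level, and the simple one-sided binomial test on the split of confirmed cases between arms are all assumptions (the real trials used more involved success criteria). It only shows how a larger placebo-to-vaccine allocation changes the number of cases needed and how quickly those cases accrue.

```python
# A minimal sketch (not Moderna's or the FDA's actual criterion) of how a
# larger control arm speeds up a vaccine trial. Every number below
# (efficacy, attack rate, arm size, alpha, power) is an illustrative assumption.
from scipy.stats import binom

def cases_needed(ratio, true_ve=0.75, alpha=0.025, power=0.90):
    """Smallest total number of confirmed cases at which a one-sided exact
    binomial test of H0 "efficacy = 0" reaches the target power, given a
    placebo:vaccine allocation ratio `ratio` and true efficacy `true_ve`."""
    p_null = 1.0 / (1.0 + ratio)                         # share of cases in the vaccine arm if it did nothing
    p_alt = (1.0 - true_ve) / ((1.0 - true_ve) + ratio)  # share if efficacy is true_ve
    for k in range(5, 1000):
        # largest vaccine-arm case count that still rejects H0 at level alpha
        crit = int(binom.ppf(alpha, k, p_null))
        while crit >= 0 and binom.cdf(crit, k, p_null) > alpha:
            crit -= 1
        if crit >= 0 and binom.cdf(crit, k, p_alt) >= power:
            return k
    return None

# Hypothetical trial: 15,000 vaccinated participants and a 0.1% weekly attack
# rate among the unvaccinated. A larger placebo arm accrues events faster.
n_vax, weekly_attack = 15_000, 0.001
for ratio in (1, 2, 3):
    k = cases_needed(ratio)
    events_per_week = weekly_attack * (0.25 * n_vax + ratio * n_vax)  # vaccinated arm at 25% of the unvaccinated rate
    print(f"placebo:vaccine = {ratio}:1, cases needed = {k}, "
          f"weeks to accrue them = {k / events_per_week:.1f}")
```

The particular numbers are throwaway; what matters is the comparison across allocation ratios.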
And, of course we should do human challenge trials as well.
Re: the Pfizer vaccine. It's clear something big happens around day 11 after the first shot. Nothing of import happens around day 32. #firstdosesfirst
Pfizer uses a ridiculous time split when comparing one shot vs. two. First, there is no statistically significant difference in the first 7 days after the 2nd shot. Had they used an 11-day cut-off instead, one shot would clearly have won.
And one shot would also likely have won had they looked within a 7-day window on either side of shot #2.
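To show the kind of window comparison I have in mind, here is a hedged sketch on entirely made-up daily case counts for the vaccine arm; these are not Pfizer's numbers, the second dose is placed at day 21 purely for illustration, and the conditional-binomial test for a break in the daily case rate is my choice (scipy 1.7+ is assumed for binomtest). The arithmetic is the point: a break around day 11 jumps out, while a cut placed 7 days after dose 2 shows nothing once the post-day-11 period is examined on its own.

```python
# A hedged sketch on made-up daily case counts for the vaccine arm; these are
# NOT Pfizer's numbers, only an illustration of the window arithmetic.
from scipy.stats import binomtest

# Hypothetical confirmed cases per day in the vaccine arm, days 0..41 after
# dose 1: protection appears around day 11 and, by construction, nothing
# further changes after the second dose at roughly day 21.
daily = [3] * 11 + [1] * 31

def rate_break(day, lo, hi):
    """Exact test of "same daily case rate on days [lo, day) vs [day, hi)",
    via the conditional-binomial form of a two-rate Poisson comparison."""
    before, after = sum(daily[lo:day]), sum(daily[day:hi])
    frac_after = (hi - day) / (hi - lo)  # expected share of cases after `day` under H0
    pval = binomtest(after, before + after, frac_after,
                     alternative="less").pvalue
    return before / (day - lo), after / (hi - day), pval

tests = [
    ("break at day 11, whole follow-up", 11, 0, 42),
    ("break 7 days after dose 2, post-day-11 window only", 28, 11, 42),
]
for label, day, lo, hi in tests:
    r_before, r_after, p = rate_break(day, lo, hi)
    print(f"{label}: {r_before:.2f}/day -> {r_after:.2f}/day, one-sided p = {p:.3f}")
```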
Good news! I just got my first "Top 5" journal publication, in the Review of Economic Studies. Top 5 journals in econ are fetishized beyond belief. This publication helps show why we might want a more balanced attitude. #EconTwitter ideas.repec.org/p/abo/neswpt/w…
In our paper, we find several coding errors that overturn a seminal paper in the "Chinese competition caused a huge increase in innovation" literature by Bloom et al. (BDvR). From the beginning, I knew the BDvR paper had problems. Why? nbloom.people.stanford.edu/sites/g/files/…
Well, first I should say that I am actually somewhat sympathetic about the coding errors, which are common in published papers (a reason we need replication). Any empirical researcher will make mistakes on occasion. But there is a lot to chew on in this case besides the coding errors.
A bit of good news: after 19 submissions spanning six years, two months, two weeks, and four days, my job market paper on the collapse in US manufacturing has been accepted for publication in the European Economic Review. Here's a thread on the topic. 1/
The original title was "Relative Prices, Hysteresis, and the Decline of American Manufacturing", which later got shortened, with the most recent draft up on Ideas/Repec here: ideas.repec.org/p/cfr/cefirw/w… 2/
The tale of US manufacturing in the 2000s is an old bedtime story some economists tell when they want to scare their children. So, listen up. 3/
It's official: the Review of Economic Studies does not accept comment papers (unless they point out actual factual errors), according to my correspondence with an editor. In important ways, it's not really an academic journal. @RevEconStud #EconTwitter #EconReplicationCrisis
This isn't hard. If the RESTud doesn't accept comment papers, then people who publish there have an incentive to sneak obviously flawed papers past the referees. And nobody has an incentive to replicate its papers, since the replications cannot be published there. #EconTwitter
It's no mystery why the editorial board refuses to accept comment papers. The payoff from being an editor is not the pride one takes in leaving one's intellectual stamp on the field, but rather the ability to do favors for powerful people in the field. #EconTwitter