After being alerted to possible misconduct, I4R is reproducing published papers that use data from a specific NGO (GDRI). This thread releases the first 2 reports and provides more information about the work, along with responses and statements from authors and journals. 🧵
This effort was only possible because these journals require data and code repositories. This allowed us to compare data across papers and identify undisclosed connections between 15 studies (e.g., overlapping samples, overlapping interventions, reused treatment assignment).
The replication effort has also identified several other potential issues in these papers, which are detailed in the individual reports. Author responses are mixed: some have asked to withdraw their names, while others are working on responses.
We are working with the editors at multiple journals - some of which have opened their own independent investigations. So far, we only have positive things to report about how journals have handled our reports. Well done across the board!
1st report reproduces “Improving Women's Mental Health During a Pandemic” by Vlassopoulos et al. @AEAjournals. Our full report is available here: osf.io/pvkhy/
Authors received this report on Jan 15. They did not agree on a joint response: some released statements, while others provided a short response. Full report and responses/statements are here: osf.io/pvkhy/
We have not received a full response yet. As requested, we note that “the other authors are currently working on a detailed response to each of the two reports.”
2nd report reproduces “Raising Health Awareness in Rural Communities: A Randomized Experiment in Bangladesh and India” by Siddique et al. @restatjournal. Authors received our report Feb 6. The same statements from some of the authors, plus the full report, are here: osf.io/c3k6f/
Let’s take a moment to thank all our amazing replicators: Jörg Ankel-Peters, Juan Pablo Aparicio, Gunther Bensch, Carl Bonander, Nikolai Cook, Lenka Fiala, Jack Fitzgerald, Olle Hammar, Felix Holzmeister, Niklas Jakobsson, Anders Kjelsrud, Andreas Kotsadam, Essi Kujansuu, ...
Derek Mikola, Florian Neubauer, Ole Rogeberg, Julian Rose, David Valenta, Matt Webb, and Michael Wiebe, and Bangladeshi colleagues who wish to remain anonymous.
We have 2 additional completed reports, which have been shared with some of the same authors. We will make those publicly available shortly and keep you posted:
(1) “Parent-Teacher Meetings and Student Outcomes: Evidence from a Developing Country” by Islam (2019), European Economic Review
(2) “Partisan Effects of Information Campaigns in Competitive Authoritarian Elections: Evidence from Bangladesh” by Ahmed et al. (2024), Economic Journal @EJ_RES
We are currently reproducing: “Delivering Remote Learning Using a Low-Tech Solution: Evidence from a Randomized Controlled Trial in Bangladesh” by Wang et al., JPE: Micro @JPolEcon.
Our report should be completed by Wednesday. It will then be shared with the authors/editors.
The editors at JPE Micro have requested additional materials, which have been shared with us. Again, only positive things to report on how editors have handled our reports.
The following paper was accepted at JEEA (@JEEA_News) on Feb 5th: “Centrality-Based Spillover Effects: Evidence from a Randomized Experiment in Primary Schools in Bangladesh”. Two of its authors are also coauthors of the paper covered in our AEJ report (and one of the paper in the ReStat report).
The editor in chief at JEEA told us that the lead author contacted the journal on February 17th to formally withdraw their paper.
We are now trying to reproduce all GDRI studies, starting with those that have a replication package. This explains why we released reports for studies in AEJ AE, ReStat, EJ, and EER first. These are amazing journals which enforce their data and code availability policies.
Again - we stress that our reports were only possible because these journals required the authors to make underlying study data available. These policies are essential to ensure that empirical results are reproducible.
One study often reused and discussed in our AEJ AE report is “Early childhood education, parental social networks, and child development” by Guo et al. This paper is not published yet. The authors reported to us that they withdrew their paper from publication consideration.
Another study discussed in our AEJ AE report is “Food insecurity and mental health of women during COVID-19: Evidence from a developing country” at PLOS One @PLOSONE. We shared our AEJ AE and Restat reports with the authors this Friday.
The lead author (TR) and Md GH reached out to PLOS One on Saturday to withdraw their authorship from the article. We will provide a new report to PLOS One and the authors tomorrow.
We will provide regular updates about our work and editors’ decisions. We are pretty certain we will have an update for one article very shortly as the editor in charge is moving fast and already confirmed all our findings.
HUGE thanks to our 15+ replicators, who have been working on this project for 3 months now, all pro bono. Their work should get published as comments! It has been a pleasure working with you all. And somehow this feels like just the beginning!
More soon!
#I4R_R4Science
#EconTwitter #EconSky
Abel: I am writing in my capacity as Chair of I4R. Last year, I was contacted by a researcher who alerted me to potential scientific misconduct in a set of papers, all using data from a specific NGO.🧵
The concerns raised were sufficiently credible, so we identified all papers in question with available data repositories and assigned each one to a team of 3-5 researchers with prior experience conducting reproductions for I4R. These were papers published in high-ranking journals.
I4R follows a standard protocol: we first send draft reports to the original authors to give them a chance to clear up misunderstandings and respond before we make the comments public. Four draft reports from this project have now been emailed to the authors.
New research alert! Our study investigates the effectiveness of human-only, AI-assisted, and AI-led teams in assessing the reproducibility of quantitative social science research. We've got some surprising findings!
288 researchers (profs and graduate students) were randomly assigned into 103 teams across three groups: human-only, AI-assisted, and AI-led. The task? To reproduce results from published articles in the social sciences. How did each fare?
Surprisingly, human-only teams matched the reproducibility success rates of AI-assisted teams. Both groups significantly outperformed AI-led approaches, with human-only teams achieving a success rate 59 percentage points higher than AI-led teams.
We held games in Munich earlier this week. 60+ participants reproducing 15 studies (3 econ, 4 poli sci, and 8 psych articles)!
Some papers did not reproduce: missing data/code, two cases where participants' identities were revealed, coding errors, etc.
A couple of thoughts from our chair (AB). 🧵
1-Let's start with the obvious. Psychologists' views on reproducibility are VERY different from those of economists and political scientists. The latter quickly run the code, check for coding errors, and spend hours thinking about robustness checks...
Psychologists instead spend most of the day figuring out whether all data and code are provided, run the code, and then compare the pre-registration against the analyses conducted in the article.
Basically, one group cares about robustness checks, while the other cares about pre-registration.
🧵 In our latest DP "Job Market Stars," our chair AB, @Lamiis_k, and Marco Musumeci dive deep into the determinants of academic success in the economics job market, with a focus on p-hacking. Here’s what they found.
👇
2/ Data: They analyzed 604 Job Market Papers (JMPs) from economics PhD candidates across 12 top-ranked universities from 2018 to 2021. The goal? To uncover the factors that influence who lands those coveted academic positions.
3/ Placement: The most common placement is Assistant Professor (AP), secured by 37% of the candidates. Another 25% remain in academia in a postdoc position. Around 39% of the candidates leave academia, most of them obtaining a job in the private sector.
Our first meta paper is out!! This paper combines our first 110 completed reproductions/replications. This is joint work with 350+ amazing coauthors.
We summarize our findings below. econpapers.repec.org/paper/zbwi4rdp…
Our focus is on articles published in leading economics and political science journals (2022-onwards). These journals all have a data and code availability policy and most have a data editor. Keep this in mind when reading this thread.
All our replicators are coauthors of this study. Replicators are PhD students, postdocs, faculty, or researchers with a PhD. The typical article has 2.6 authors. Teams of replicators reproduce/replicate one article each, with 3.25 replicators per team on average.
Another new DP, this time from @RyanMcWay, Karim Nchare, and Pu Sun, who looked at Bold et al. (2022, AER), “Market Access and Quality Upgrading: Evidence from Four Field Experiments.”
Bold et al. (2022b) investigate the effect of providing access to a larger, centralized market, where quality is rewarded with a premium, on the farm productivity and farming incomes of smallholder maize farmers in western Uganda, using a series of RCTs and DID.
The replicators successfully computationally reproduce the results using the publicly provided replication package.
They then test the robustness of these results by re-defining treatment and outcome variables, testing for model misspecification, and assessing the leverage of outliers.