I4R
The Institute for Replication (I4R) works to improve the credibility of science by promoting and conducting reproductions and replications.
Feb 24
After being alerted to possible misconduct, the I4R is reproducing published papers that use data from a specific NGO (GDRI). This thread releases the first 2 reports and provides more information about the work and responses/statements from authors and journals. 🧵
This effort was only possible because these journals require data and code repositories. This allowed us to compare data across papers and identify undisclosed connections between 15 studies (e.g., overlapping samples, overlapping interventions, reused treatment assignment).
Feb 19
Abel: I am writing in my capacity as Chair of I4R. Last year, I was contacted by a researcher who alerted me to potential scientific misconduct in a set of papers, all using data from a specific NGO. 🧵

The concerns raised were sufficiently credible, so we identified all papers in question with available data repositories and assigned each one to a team of 3-5 researchers with prior experience conducting reproductions for I4R. These were papers published in high-ranking journals.
Jan 22
New research alert! Our study investigates the effectiveness of human-only, AI-assisted, and AI-led teams in assessing the reproducibility of quantitative social science research. We've got some surprising findings!

288 researchers (professors and graduate students) were randomly assigned to 103 teams across three groups: human-only, AI-assisted, and AI-led. The task? To reproduce results from published articles in the social sciences. How did each fare?
Oct 30, 2024
We had games in Munich earlier this week. 60+ participants reproducing 15 studies (3 econ, 4 poli sci and 8 psych articles)!

Some papers did not reproduce: issues included missing data or code, two cases that revealed the identity of participants, errors, etc.

A couple of thoughts from our chair (AB). 🧵

1. Let's start with the obvious. Psychologists' views on reproducibility are VERY different from those of economists/political scientists. The latter quickly run the code, check for coding errors and spend hours thinking about robustness checks...
Sep 2, 2024
🧵 In our latest DP "Job Market Stars," our chair AB, @Lamiis_k, and Marco Musumeci dive deep into the determinants of academic success in the economics job market, with a focus on p-hacking. Here’s what they found.
👇

2/ Data: They analyzed 604 Job Market Papers (JMPs) from economics PhD candidates at 12 top-ranked universities from 2018 to 2021. The goal? To uncover the factors that influence who lands those coveted academic positions.
Apr 8, 2024
Our first meta paper is out!! This paper combines our first 110 completed reproductions/replications. This is joint work with 350+ amazing coauthors.
We summarize our findings below.
econpapers.repec.org/paper/zbwi4rdp…
Our focus is on articles published in leading economics and political science journals (2022 onwards). These journals all have a data and code availability policy, and most have a data editor. Keep this in mind when reading this thread.
Oct 12, 2023
Another new DP, this time from @RyanMcWay, Karim Nchare and Pu Sun who looked at Bold et al. (2022, AER) Market Access and Quality Upgrading: Evidence from Four Field Experiments.

econpapers.repec.org/paper/zbwi4rdp…

Bold et al. (2022b) investigate the effect of providing access to a larger, centralized market, where quality is rewarded with a premium, on the farm productivity and farming incomes of smallholder maize farmers in western Uganda, using a series of RCTs and DID.