Folks have been curious about what I found replicating papers

While I believe in showing grace, respect, and due scientific process before naming names, here is a (slightly snarky) thread on 10 things I’ve learned through #replication

Remember, we are all prone to error. (1/12)
1. Coding errors do affect published results.

Those that lower p-values seem to be harder for authors to debug. (2/12)
2. Extra significance stars sometimes show up on key estimates.

Maybe that’s where falling stars go. 💫 (3/12)
3. Regressing Y = A + B on B when B has a ton of measurement error can produce a very robust coefficient of 1 on B.

It’s good to tell readers that Y includes B. (4/12)
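
Here is a toy simulation of the mechanics (a hedged sketch in Python; the variables, noise scale, and sample size are all invented, not taken from any paper):

```python
# Hypothetical illustration of point 3: Y is built as A + B, and B is noisy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
A = rng.normal(size=n)                      # the other component of Y
B_true = rng.normal(size=n)                 # the "real" B
B = B_true + rng.normal(scale=10, size=n)   # B observed with a ton of measurement error
Y = A + B                                   # Y is constructed to include the noisy B

# OLS slope of Y on B is cov(Y, B) / var(B); the shared noise pushes it to 1
slope = np.cov(Y, B)[0, 1] / np.var(B, ddof=1)
print(round(slope, 3))                      # ~1.0, tightly estimated, and entirely mechanical
```

The coefficient sits near 1 and looks very precise, but it reflects how Y was constructed, not any relationship in the world.
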
4. If the RHS of a regression includes an interaction A*B, it should usually include main effects for both A and B alone.

This is always good practice, even if the author doesn’t fully understand why. (5/12)
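
One hypothetical way to see why (a simulated sketch; the binary indicators and effect sizes are assumptions for illustration, and the true interaction is zero):

```python
# Hypothetical illustration of point 4: an interaction term without its main effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
A = rng.integers(0, 2, size=n)                  # binary indicator
B = rng.integers(0, 2, size=n)                  # another binary indicator
Y = 1.0 * A + 1.0 * B + rng.normal(size=n)      # true model: main effects only, NO interaction
AB = A * B

# Misspecified: interaction with no main effects, so A*B soaks up the omitted main effects
bad = sm.OLS(Y, sm.add_constant(AB)).fit()

# Main effects included: the interaction coefficient collapses toward zero
good = sm.OLS(Y, sm.add_constant(np.column_stack([A, B, AB]))).fit()

print(bad.params[1])    # spuriously large and "significant" (around 1.3 here)
print(good.params[3])   # roughly 0, matching the true model
```
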
5. If you add a bunch of weak instruments to a strong instrument, you don’t get a meaningful under-identification test.

It’s nice to show the key first-stage regression coefficients somewhere in the paper. (6/12)
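
A hypothetical first-stage sketch of point 5 (everything simulated and invented; see also my correction at the end of the thread: I meant "over-identification"):

```python
# Hypothetical illustration of point 5: one strong instrument plus several junk ones.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2_000
z_strong = rng.normal(size=n)               # one genuinely relevant instrument
z_weak = rng.normal(size=(n, 5))            # five irrelevant "instruments"
x = 0.8 * z_strong + rng.normal(size=n)     # endogenous regressor: only z_strong matters

Z = sm.add_constant(np.column_stack([z_strong, z_weak]))
first_stage = sm.OLS(x, Z).fit()

# The joint first-stage F looks healthy only because of z_strong; the five extras add
# nothing, so an over-identification test leaning on them tells you very little.
print(first_stage.fvalue)
print(first_stage.params[1:])   # reporting these lets readers see which instruments do the work
```
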
6. Being hyper-vigilant about confirmation bias is a good idea when hand-selecting data points from various sources.

Observations that conflict with your preferred hypothesis are just as real as ones that don’t. (7/12)
7. The main conclusions of an article rich in mathematics can be dramatically altered by simple conceptual errors made when wrapping things up.

Try not to fall in the last mile of the marathon. (8/12)
8. We have known the pitfalls of splitting samples and multiple hypothesis testing for a while, yet many editors don't do much to check on these.

HARKing ("Hypothesizing After the Results are Known") seems to crop up often in these situations. (9/12)
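
A back-of-the-envelope simulation of the subgroup problem (a hedged sketch; the 20 splits and 5% level are illustrative assumptions, not from any particular study):

```python
# Hypothetical illustration of point 8: test 20 subgroups of a dataset with no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n_groups, group_size = 2_000, 20, 100
hits = 0

for _ in range(n_sims):
    y = rng.normal(size=(n_groups, group_size))        # outcome with NO true effect anywhere
    pvals = stats.ttest_1samp(y, 0.0, axis=1).pvalue   # test each subgroup's mean at the 5% level
    hits += (pvals < 0.05).any()

# Expect roughly 1 - 0.95**20, about 0.64: most null datasets yield at least one "finding"
print(hits / n_sims)
```
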
9. The more we want to believe in a clever-sounding or politically expedient hypothesis, the lower the bar for the evidence.

Be most skeptical of ideas you want to believe, as you will otherwise entertain worse evidence in their favor. (10/12)
10. When authors are told about mistakes, they can often be gracious, and sometimes in a hurry to defend themselves privately.

They rarely seem to be in a hurry to correct even agreed-upon errors publicly. (11/12)
10+. Many replication issues are more gray than black and white.

Science is supposed to be collaborative and a public good.

Unfortunately, the comment-reply system is overly turf-ridden and adversarial.

I hope we can all share credit and goodwill with each other. (12/12)
I meant "over-identification," not "under." I knew there would be an error somewhere! ;)

Twitter, just like published articles, would benefit from a streamlined errata system!
