Andrew Althouse @ADAlthousePhD
I've been a mere spectator to the Wansink scandal, but I think the cautionary tale is worth amplifying across statistics, medicine & the entire research community. Thus far, the discussion *seems* mostly confined to psychology researchers & some statisticians.
I think it’s important to spread this story across all research fields for those who may not be aware of i) what happened and ii) why it’s a big deal.
I’m going to link several tweets, threads, and articles at the end of the thread. In the first 11 Tweets, here is the “short” version for those unaware of the Wansink scandal:
1) Borderline-famous Cornell professor puts up a blog post where he throws one of his post-docs under the bus for not working hard enough while praising an unpaid visiting scholar for doing as he asked (note: this is not really about the visiting scholar)
2) In doing so, he kind of outed himself for encouraging some shoddy research practices (if you must have one term, “severe p-hacking” is probably the best description, although a number of other troubling themes would emerge later).
3) Some clever folks (@Research_Tim, @jamesheathers, @sTeamTraen, @OmnesResNetwork) started digging into Wansink’s papers and found an ever-growing list of problems
4) They contacted him directly with their concerns. Some dialogue ensued. Eventually he stopped replying to their emails.
5) They contacted the journals that had published some of the papers. Not much happened (initially). But the Human Scum Society (will explain later…) knew they were onto something.
6) Having approached both Wansink and the journals with little to show for their efforts, they went public with the first batch of findings indicating that Wansink’s lab was producing shoddy work.
7) BuzzFeed reporter @stephaniemlee dug more deeply into the lab’s inner workings and, via emails & discussions with students and alumni, found a basic pattern of torturing data until it confessed, as well as headline-chasing.
8) As a statistician, one of the most offensive findings was this statement about a p-value of 0.06:
“It seems to me it should be lower,” he wrote, attaching a draft. “Do you want to take a look at it and see what you think. If you can get the data, and it needs some tweeking, it would be good to get that one value below .05.”
Some of the other p-hacking might be explained as simple sloppy HARKing. This quote, for some reason, strikes a bigger nerve with me because it’s an explicit admission that they would go ahead and twist the dials however possible to turn a p=0.06 into a p≤0.05.
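(A quick aside to make “twisting the dials” concrete. Below is a minimal simulation of my own, not anything from Wansink’s lab, showing what multiple undisclosed analysis choices do to the false-positive rate: two groups with no true difference, several outcomes, and an analyst who reports only the smallest p-value. All numbers are illustrative assumptions.)

```python
# Minimal p-hacking simulation (illustrative only): two groups with NO
# true difference, measured on several outcomes. A dial-twisting analyst
# tests every outcome and reports only the smallest p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)

def smallest_p(n_per_group=60, n_outcomes=10):
    """One null 'study': both groups drawn from the SAME distribution,
    measured on several independent outcomes. Return the smallest
    p-value across outcomes, i.e. the one a p-hacker would report."""
    a = rng.normal(size=(n_per_group, n_outcomes))
    b = rng.normal(size=(n_per_group, n_outcomes))
    return min(stats.ttest_ind(a[:, j], b[:, j]).pvalue
               for j in range(n_outcomes))

n_studies = 2000
hits = sum(smallest_p() < 0.05 for _ in range(n_studies))
print(f"At least one p < .05 in {hits / n_studies:.0%} of null studies "
      "(nominal rate: 5%)")
# With 10 independent looks, theory says about 1 - 0.95**10, i.e. ~40%.
```

With ten dials to twist, a “significant” finding on pure noise is closer to a coin flip than to the 1-in-20 event the p<.05 label advertises.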
9) In a vacuum, one or two or even a few of the things named in Lee’s first article could be chalked up to carelessness. People make mistakes. An accidental n=47 instead of n=46 can happen. Stuff like that. But this was an awful lot of smoke to be just “mistakes”
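(Another aside, for readers wondering how outsiders catch numbers that don’t add up in a published table: one tool associated with these critics is the GRIM test of Brown & Heathers, which checks whether a mean reported to a given precision is arithmetically possible for integer data with the reported sample size. The sketch below is my own minimal illustration of the idea, not their code.)

```python
# GRIM-style consistency check (my own sketch): for integer-valued data,
# the sum of n responses must be an integer, so only certain means are
# possible. A reported mean no integer total can produce is a red flag.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """True if some integer total of n integer responses rounds to the
    reported mean at the reported precision; False flags an impossible mean."""
    target = reported_mean * n
    # Check integer totals adjacent to the implied sum (covers floating-point
    # error and rounding in either direction).
    candidates = range(max(int(target) - 1, 0), int(target) + 3)
    return any(round(s / n, decimals) == round(reported_mean, decimals)
               for s in candidates)

# With n = 25 integer responses, means can only move in steps of 1/25 = 0.04:
print(grim_consistent(3.48, 25))  # True: 87 / 25 = 3.48 exactly
print(grim_consistent(3.45, 25))  # False: no integer total / 25 rounds to 3.45
```

Checks like this need nothing beyond the published summary statistics, which is part of why outside readers were able to compile such a long list of inconsistencies.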
10) Necessary disclaimer: this has nothing to do with Wansink’s personal demeanor or life outside academia. He may well be a very nice person. I thought in most of the media coverage of this scandal, he came off more naïve than nefarious, although others may feel differently.
11) I can even forgive the publicity-chasing (“we want this to go viral” and similar quotes). I’d prefer the focus to be on doing good science rather than on getting headlines, but I get it:
People want their work to get attention (and that can lead to more funding, which in turn lets you do more work). It would be nice if that attention was earned honestly, though.
Eventually this all became part of the broader ongoing discussion about the state of research methods (although, interestingly, it seems to get much more attention in psychology than in medicine and other fields, which is why I feel it’s important to share this story).
Many traditional academics are perturbed by the behavior of the so-called “data thugs,” calling them names like “human scum” and “vindictive little bastards” (curious that this is the side of the aisle calling these guys out for *their* tone).
Personally…I think fretting about people’s tone rather than the content of their message is a weak effort to undermine their credibility and distract from the harsh truth that the “data thugs” were/are right about much of what they’re saying
To people who don’t like that they went public, it must be repeated endlessly:

“Why didn’t you write to Wansink directly?” – they did.

“Why didn’t you just write letters to the journal?” – they did.
In this whole affair, there was no going public until any reasonable hope that things would be rectified through “traditional academic niceties” like “email the PI” and “write a letter to the journal” had long been exhausted.
A few folks have made a fine point that shouldn’t be lost in all of this: just because Wansink’s papers are retracted doesn’t necessarily mean his ideas are wrong; the papers just don’t provide good evidence for those ideas.
If someone wants to retest them using well-designed methods and pre-registration of analysis plans, that would be a great idea, assuming one can obtain funding to do so.
A few takeaway thoughts of my own from the Wansink scandal and what it might mean for the entire academic research enterprise moving forward:
1) We’ve been talking about the reproducibility crisis, statistical issues, and p-hacking (and whether we should even use p-values at all), yet precious little has changed in the majority of the literature.
2) These guys found a problem and did something about it. They tried the traditional academic path (email the PI, write letters to the journals) and nothing happened. They didn’t stop; they went public; and they have been proven right.
They’ve probably made some enemies in the process, but (IMO) most of their critics are living in truth-hurts mode. If they get blackballed from academia for this, that’s a serious “shame on us” as a field.
They also did more to get people talking about research methods than I could have with 100 polite, professionally written letters to the editor commenting on questionably applied statistical approaches.
3) This reinforces a growing belief of mine that Twitter has a meaningful role to play in the post-publication peer review process. Letters to the editor usually take a while; they are subject to filtering and/or censorship; and they are usually met with a polite non-response:
“We thank you for taking the time to read our article. Despite your calling out a perfectly accurate flaw in our work, we are choosing not to change it. Have a nice day, and don’t forget to buy an ice cream cone on your way out.”
4) Also related: if someone publishes an article, we are allowed to talk about it in public places, regardless of whether the author is present or not.
5) “The author isn’t on Twitter” doesn’t mean we aren’t allowed to talk about the article on Twitter.
6) Extending the objectors’ logic to its natural conclusion, we also aren’t allowed to talk about an article when the author isn’t in the room, right?
7) One other astonishing quote, from the same article in which they were called “human scum,” merits attention and thought:
“Psychologists used to talk about their next clever study; now they fret about whether their findings can withstand withering scrutiny.”
8) I don’t know about you, but I think it would be a good thing if people worried *from the jump* about whether their work could stand up to withering scrutiny.
9) Sorry that I’m not sorry that you might have to use more rigorous and appropriate methods now?
10) I understand the desire to avoid turning the field into backstabbing and “gotcha” discussions, but seriously, maybe that means we should all slow down and do a better job with methods and design on the front end?
Also worth mentioning, from that same article: the thugs “lump people who falsify data and plagiarize someone else’s work with someone who makes small mistakes,” one critic said. “They act like they’re all the spawn of the devil.”
11) I’ve not interacted with the Data Thugs outside of Twitter, but from what I have observed, I do not believe this is an accurate characterization. YMMV, of course.
12) But regardless, they have started a very important conversation, and it is important to keep that discussion going.
13) If we want better science, sometimes things have to change. “That’s the way we’ve always done it” is never a good reason to keep doing things the same way. Progress hurts sometimes.
Anyway, here’s more of the promised history and perspective on all of this:
Here’s the “Wansink Dossier” from @Research_Tim:
timvanderzee.com/the-wansink-do…
Here’s the first article from @stephaniemlee on the subject:
buzzfeednews.com/article/stepha…