Maybe it's just low blood sugar that made them do it?
Or maybe it was to help their friends
I wonder if Dan's been losing any sleep? Maybe that's why he hasn't come forward with an explanation
Maybe he just thinks we're all hypocrites for calling him out
You have to wonder what all this fraud is doing to him on the inside
Though maybe he's just convinced himself that he hasn't been lying to begin with
Fear not, though, for we still might be able to REVISE his behavior
These are all studies that Dan co-authored with disgraced HBS prof. Francesca Gino. All I did to make this thread was go to Google Scholar & search "Gino Ariely". These are from page 1, which also includes this famous study, which both Gino and Ariely are confirmed to have faked.
Another shoe just dropped in the ongoing Dan Ariely scandal: JMR, a top 4 marketing journal, has issued a formal “Expression of Concern” for Mazar, Amir, & Ariely (2008), the infamous Ten Commandments study
🧵
This is Ariely’s most cited paper (4000 cites!), and I’ve long suspected that it would be the next domino to tumble.
The EoC highlights several major issues with the paper: first, a massive replication study of the MAA experiment run in 2018 failed spectacularly.
Second, a forensic data investigation using the data provided by the authors themselves found that “conditions were dropped from experiments 1 and 2 without disclosure,” which is vague but likely constitutes significant evidence of p-hacking.
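To see concretely why undisclosed condition-dropping raises a p-hacking concern, here's a toy simulation of my own (nothing to do with the actual MAA data or design, and every parameter is made up): run several treatment conditions against a control where the true effect is zero everywhere, report only the condition with the smallest p-value, and the false-positive rate climbs well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy setup: k conditions vs. one shared control, true effect = 0 in all of them.
# "Dropping" the weaker conditions amounts to reporting only the best p-value.
k, n, n_sims = 4, 50, 5000
false_positives = 0

for _ in range(n_sims):
    control = rng.normal(size=n)
    pvals = [stats.ttest_ind(rng.normal(size=n), control).pvalue for _ in range(k)]
    false_positives += min(pvals) < 0.05

print(false_positives / n_sims)  # well above 0.05, despite no real effect anywhere
```

With four null conditions and selective reporting, the chance of at least one “significant” result is closer to one in five than one in twenty.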
This is a cool application of Miller and Sanjurjo’s 2018 ECMA paper overturning the Hot Hand Bias. The basic intuition is that HH subsequences can overlap with themselves, while HT cannot. So across the full set of finite coin toss sequences of a given length, there is an equal number of HH and HT… 1/
…subsequences, but HT subsequences appear in more of the sequences because they cannot be “packed in” as efficiently. Thus, in a *finite* set of coin tosses, sequences w/HT are more likely to appear, even though the *expected* number of HH and HT subsequences is the same. 2/
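A quick way to convince yourself of both claims is to enumerate every sequence of a fixed length and tally the patterns. A minimal sketch of my own (hypothetical length n = 4; nothing here comes from the M&S paper’s code):

```python
from itertools import product

n = 4  # hypothetical sequence length; the pattern holds for any finite n >= 2

total_hh = total_ht = 0          # total subsequence counts across all sequences
seqs_with_hh = seqs_with_ht = 0  # how many sequences contain each pattern at all

for seq in product("HT", repeat=n):
    pairs = list(zip(seq, seq[1:]))
    hh = pairs.count(("H", "H"))
    ht = pairs.count(("H", "T"))
    total_hh += hh
    total_ht += ht
    seqs_with_hh += hh > 0
    seqs_with_ht += ht > 0

print(total_hh, total_ht)          # equal totals: 12 and 12 for n = 4
print(seqs_with_hh, seqs_with_ht)  # but HT appears in more sequences: 8 vs. 11
```

The overlap point shows up in the extremes: HHHH packs three HH subsequences into a single sequence, while no length-4 sequence can hold more than two HTs.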
The M&S paper is a banger, and one of my favorites to teach each year in Behavioral. Super fun and counterintuitive finite sample result, with significant implications for the measurement of conditional probabilities. Also, one of my favorite scientific takedowns of all time:
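On the conditional-probability point: the same finite-sample logic means that if you compute, within each sequence, the share of flips immediately following an H that are themselves H, and then average that share across sequences, you land below 50% even for a fair coin. Another small sketch of mine (hypothetical n = 4 again):

```python
from itertools import product

n = 4  # hypothetical length; the downward bias shows up for any finite n >= 3

props = []
for seq in product("HT", repeat=n):
    # within this sequence: of the flips that immediately follow an H, how many are H?
    after_h = [b for a, b in zip(seq, seq[1:]) if a == "H"]
    if after_h:  # sequences with no H before the last flip contribute nothing
        props.append(after_h.count("H") / len(after_h))

print(sum(props) / len(props))  # noticeably below 0.5, even though the coin is fair
```

That shortfall is the finite-sample bias M&S identify, and correcting for it is what flips the interpretation of the earlier hot hand evidence.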
The real issue is not about the original data being missing, though that’s also bad, especially for a study this prominent. As @AaronCharlton has rightly emphasized, it’s about the fact that none of the original authors “remember” key details of how the data was gathered… 1/
…and that their ostensible “attempts to recall” just happen to involve trying to establish that another academic, who is *not even a coauthor (!!)* supposedly collected the data for them. 2/
The academic they’re *totally not trying to pin this on* strenuously denies she collected the data, & provided verifiable details of the contemporaneous sampling environment that, if confirmed, would prove that the original data described could not have been collected at UCLA. 3/
No English subtitles unfortunately, but some *very* interesting info in this Israeli Channel 13 reporting from last year about the ongoing Dan Ariely scandal.
Ariely apparently left MIT in ‘08 after conducting an experiment that administered electric shocks to undergrads, *w/o IRB approval (!!)*. When confronted, he tried to throw his RAs under the bus. He received a one-year suspension from running experiments, then moved to Duke. 2/
In the 2010s Ariely had a contract to deliver ✌️behavioral insights✌️ to the Israeli budget office that netted his consulting firm about $5 million over 4 years. Their reports, which were mostly not made public, included recs like “make the government website mobile accessible”. 3/
I've been using Mendeley Desktop daily for years, & have an annotated library of thousands of PDFs. Today, I switched to Zotero. Here's a 🧵 of tips on how to do this, if you're also tired of Elsevier's BS and want to make the switch w/minimal effort while maintaining your workflow.
First off, why switch? In short, Mendeley has been in a severe decline for years. Elsevier already discarded the mobile app, & there hasn't been a feature update in living memory. The straw that broke the camel's back for me is that they're discontinuing the desktop app on 9/1.
Instead of the desktop app we're apparently supposed to use some featureless online BS tool that's completely reliant on storing your library and all its metadata on Elsevier's servers. The game here is clear: create long-run lock-in by ensuring that Elsevier owns all your data.
I guess I have to do a response to the response now, because that's how arguing on the internet works. I'll keep it brief, given that I made most of my points in the original thread.
Let me be blunt in response: this argument is just plain dumb. Just because the infections variable is measured with noise doesn't mean an analysis employing it is useless. Furthermore, as I discussed before, *deaths are also measured with noise*.
For the noise on infections to really be a concern, you would have to argue that it biases the results somehow. I don't see any argument here for how that could occur. Such arguments exist, but not in this thread.
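To make the point concrete, here's a minimal simulation of my own, assuming (hypothetically) that infections enter the analysis as the outcome variable and that the measurement noise is classical, i.e., mean-zero and independent of the regressor. Under those assumptions the noise inflates standard errors but does not bias the estimate; a bias argument would need the noise to be correlated with the variables of interest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a regressor of interest ("policy"), true infections respond
# with slope beta = 2.0, and we only observe infections with classical noise.
beta, n_obs, n_sims = 2.0, 500, 2000
slopes = []

for _ in range(n_sims):
    policy = rng.normal(size=n_obs)
    true_infections = 10 + beta * policy + rng.normal(size=n_obs)
    observed = true_infections + rng.normal(scale=3.0, size=n_obs)  # noisy measurement

    slopes.append(np.polyfit(policy, observed, 1)[0])  # OLS slope on the noisy outcome

print(np.mean(slopes))  # ~2.0: the noise widens the spread of estimates, not their center
```

(If infections were instead a noisily measured *regressor*, classical noise would attenuate the estimate toward zero rather than invalidate the analysis outright, which is still a far cry from "useless.")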