Just finished a short talk on transparency horror stories. It was a fun event, but some of the things I've seen over the years have been quite disturbing.
I'll share 2 egregious examples here about editorial decisions & reviews. 🧵
A paper by a student of mine was reviewed at a prominent journal. The anonymous reviews were ok but not great; the editor rejected the paper for confidential reasons.
Really odd that there can be confidential scientific reasons not to publish a paper that authors cannot address.
I followed up with the editor, who said they couldn't help me.
I tweeted about it, & decided that maybe this is the one-in-a-million scenario: maybe there's a good reason, I just can't think of it.
2 days later someone wrote me a DM: they had had exactly the same experience with that journal & editor.
A similar situation: a paper was rejected in a different journal despite 2 positive reviews. I appealed (first time in >75 papers; constructive, to the point), and the editor responded that their hands were tied; after all, both reviewers had recommended rejection.
One of the reviewers, an early career researcher, had signed their review, so I briefly followed up with them, asking what concerns they had about our work. Their review was positive, so why the rejection?
The reviewer sent me a screenshot of their decision logged in the system: "minor revision"
These are examples about editorial decisions, and not the end of the world. But they destroyed my trust in specific individuals (I won't submit to these 2 journals anymore until the editors-in-chief are gone).
I survived—it's just papers. But if these are systemic issues, the lack of transparency & accountability can be weaponized against e.g. early career folks, or based on sexism, racism, or other biases. This is not the optimal way for science to function.
I concluded in my talk that transparency is obviously not a panacea (cf discussions around signing reviews which may offer levers for retaliation esp against early career folks and those with fewer privileges). And transparency is not the only topic that matters in #openscience.
But generally, lack of transparency (data, code, measures) is often harmful, and we should work towards increasing transparency.
End 🧵
A professor at Yale Med School just called me a "frightening wacko" for reaching out to the ECR who signed their review. I don't think I've interacted with Prof Nitters before, so I'm surprised by the response. I see no point debating folks who use that language, but I'll happily clarify for everyone else. 1/3
- This was 4 years past my PhD (one ECR reaching out to another)
- I've had mutually friendly interactions with this ECR for several months, & reached out in a friendly & constructive way
- The ECR was happy they could demonstrate that the editor had misrepresented their review 2/3
FWIW, I (now 7 years past PhD; some part of that qualified as ECR) have signed around 75 reviews so far. Around 15 authors reached out to me, for various reasons, including clarification. Most interactions were positive, & I didn't think of people as frightening wackos. 3/3
2/7 In our own work, we've written about this in detail, too. Cf. our 2017 challenges paper with @angecramer, which has a dedicated section on the topic; our 2017 review paper, which lists this challenge; and my 2021 Psych Inquiry paper, which features the inference gap as a core topic.
3/7 I also briefly went back to the very first workshop I taught in 2016, which (like all later ones) had a dedicated section on this problem. So from where I'm standing, the field isn't "finally recognizing" this issue; it's well known, & folks have struggled & grappled with it.
Just finished my keynote at @conference_2021 on "Mental health: studying systems instead of syndromes". You can find slides & new preprint here: osf.io/bm6r5/. Really enjoyed making a completely new presentation from scratch.
🧵
The first barrier to progress I talk about is diagnostic literalism and its consequences: while many of us don't believe in MDD or schizophrenia as "natural disease units" in the world, case-control research in our field is often carried out in that way.
I discuss some historical evidence on how arbitrary many of the categories and thresholds in today's DSM-5 are, and how DSM-5 might look quite different today if minor things had gone differently.
This means diagnostic categories are not natural kinds.
Dutch universities are making a move to abandon the impact factor in recognition and reward considerations. A group of 170 Dutch academics posted a critical response to this initiative. I summarize why these responses fail to convince me. 🧵
First, for context, here is the initiative by @UniUtrecht we are talking about: changing rewards and recognition. Other universities have similar initiatives.
Here is the rebuttal by 171 academics in the Netherlands, most of whom appear to be full professors. It's in Dutch, but Google Translate works well for Dutch websites.
1/ The National Institute for Health & Care Excellence does not recommend #esketamine to treat #depression because effectiveness is unclear (low quality trials) and the economic model is problematic (short-term treatment, while depression lasts long). The cost/benefit is not sufficient to recommend the treatment.
3/ Agreed that the published literature is low quality. Samples are generally too small to draw inferences from the samples to the population; there are recent studies without placebo groups (how does that even get funded in 2020?); when placebo groups exist, they are often not >>
"Hans-Ulrich Wittchen .. is under fire after an investigation into one of his studies found evidence of manipulation—and elaborate efforts to cover up the misdeed. The investigation report .. also shows Wittchen intimidated whistleblowers"