Another author removes themselves from a paper associated w #Pruittdata. This is a difficult situation but important convo to have - what are authors to do when they feel like they can't trust the data in their own papers anymore, but a retraction is not (yet) possible/allowed?
I've been thinking about this issue all night and I have some thoughts. What are the guidelines we use to retract/correct papers? Are these the right ones? What do we value in our papers? How do we move forward as a field? A 🧵 1/
(I want to be clear here and say that I am offering my opinion as a professional scientist on a topic of general importance to our field. While my personal experiences may have helped form my opinions, these thoughts are not directly about any particular instance 2/)
From the author's perspective, I think it's safe to say that we live and die by our professional reputations, so the feeling that your name continues to be on something you no longer believe in would absolutely make my skin crawl. I get why these authors are doing it 3/
Generally speaking, any decision about whether to retract/correct should be made very carefully and only after intensive examination of any alleged problems. I don't know if this always happens, but in my experience, hundreds of person-hours were invested in every decision 4/
From the editor's perspective, they have the very unenviable job of adjudicating between folks who, as you can imagine, might have some major differences of opinion. In my experience, by and large, they've done the best they can *given the guidelines they must work within* 5/
It’s these guidelines that I think deserve more attention and discussion. COPE guidelines state that papers should be retracted if there is “clear evidence that the findings are unreliable, either as a result of major error or as a result of fabrication/falsification.” 6/
While that may seem clear-cut, there are, as I have found out personally, surprisingly varied ways to interpret this. I feel like I have seen interpretations that either follow the *letter* of the law, so to speak, or the *spirit* 7/
All journals say that they uphold the ‘highest principles of scientific integrity’, so I am sure they would all say they adhere to the spirit of these guidelines, which is to pull any paper where credible concerns about its validity haven't been sufficiently addressed 8/
But the problem is that these guidelines can be interpreted to the letter – in that the findings, the STATISTICAL RESULTS, are what need to be shown to be unreliable. This means that whether the biological conclusions of a paper ‘hold’ becomes the determinant of retraction 9/
This narrow interpretation of COPE guidelines is likely useful if the concerns about a paper are about whether the authors used the right analytical method, or someone found a mistake in their code, or something methodological 10/
But if there are concerns about problems in the raw data, then this focus on the results seems premature. Results are only as good as the methods and the data that they stand on. You can’t have good results without good data. You can’t have good data without good methods. 11/
I think it's this dichotomy that has folks frustrated – if the guidelines state that the findings are what matter, but the concerns are about the data, then any focus on the results feels like the “sexiness/impact” of the paper is being placed ahead of its scientific integrity 12/
(Aside: this is also why I hate when glam mags put the results of the paper before the methods – the results of any paper are only as good as the data/methods they stand on, so it’s ridiculous that they get relegated to essentially a bystander position) 13/
It doesn’t matter where problems in the raw data come from – if credible concerns can be demonstrated, then they need to be explained. If they can’t be sufficiently explained then the results should be completely irrelevant. 14/
In the *spirit* of COPE guidelines, I would think that this would also constitute ‘unreliable findings’. The problem is that when there are differences of opinion on the interpretation of things, the stricter definition is often more easily defensible and so often wins. 15/
Guidelines are meant to be written vaguely enough to capture the spirit of the law, but also specifically enough to ensure the letter of it holds with as few loopholes as possible. And it’s hard to find the loopholes until someone tries to use them. 16/
So what to do? The short answer is, I don’t know. But it seems worthwhile to have a discussion about what these guidelines are and whether they fully capture both the spirit and the letter of the law as we want them to. 17/
As an ECR, I think this discussion is critical because I want to help shape the guidelines that our field follows moving forward. I feel like we need to decide: what do we value in papers? And how do we ensure the integrity of whatever we do value? 18/
One thing seems clear to me, and I’m obviously not the first person to say it, but we need to value more than just the findings of a paper. 19/
I think a few things can help us shift the culture to increase reliability & reproducibility in our work: 1) raw data, in a readable format, needs to be deposited publicly (barring rare exceptions, of course). I pledge to do this with every one of my papers from here on out 20/
2) Statistical code needs to be uploaded as supplemental. Let’s learn from each other! No need for all of us to keep re-inventing the wheel & struggling to figure out just the right syntax when others have already done this. This is something I’ll be doing from now on too 21/
3) More replication! Finding a sexy result the first time is cool and exciting, but I think finding it the second time (in a replication) is when we can take it seriously. Let’s value those second (third, fourth) examples of a cool result! 22/
Along those same lines, most folks start their graduate careers with small experiments designed to let them ‘get their feet wet’. What if these 1st-year experiments were actually designed as replications of other studies? 23/
Students would identify which studies get them excited, tweak the design to fit their system, and then see how general the previous results were. Seems like a win-win for both the students and broader community 24/
So many people care so deeply about the integrity of what we do and I just continue to be so impressed, and optimistic, about the direction we’re heading as a field. Thank you for letting me be a part of it :) end/