Sharpen your intuitions about plausibility of observed effect sizes.
r > .60?
Is that effect plausibly as large as the relationship between gender and height (.67) or nearness to the equator and temperature (.60)?
r > .50?
Is that effect plausibly as large as the relationship between gender and arm strength (.55) or increasing age and declining speed of information processing in adults (.52)?
r > .40?
Is that effect plausibly as large as the relationship between weight and height (.44), gender and self-reported nurturance (.42), or loss in habitat size and population decline (.40)?
r > .30?
Is that effect plausibly as large as the relationship between elevation and daily temperature (.34), Viagra and sexual functioning (.38), past behavior predicting future behavior (.39), or sleeping pills and insomnia reduction (.30)?
r > .20?
Is that effect plausibly as large as the relationship between marital relationship quality and parent-child relationship quality (.22), alcohol and aggressive behavior (.23), or gender and weight (.26)?
r > .10?
Is that effect plausibly as large as the relationship between antihistamine and runny nose (.11), childhood lead exposure and IQ (.12), anti-inflammatories and pain reduction (.14), self-disclosure and likability (.14), or nicotine patch and smoking abstinence (.18)?
r > .00?
Is that effect plausibly as large as the relationship between aspirin use and death by heart attack (.02), calcium intake and bone mass in premenopausal women (.08), gender and observed risk taking (.09), or parental divorce and child well-being problems (.09)?
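If it helps to make those benchmarks concrete, here is a minimal Python sketch (the bivariate-normal setup and n = 10,000 are assumptions for illustration; the correlations are a few of the benchmark values listed above). It simulates data at each correlation and reports the sample r alongside the shared variance (r²).

```python
# Sketch: simulate bivariate normal data at a few benchmark correlations and
# report the observed r and shared variance (r^2) to make magnitudes concrete.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # large n so the sample r lands near the population value

benchmarks = {
    "gender & height": 0.67,
    "past & future behavior": 0.39,
    "alcohol & aggression": 0.23,
    "antihistamine & runny nose": 0.11,
}

for label, rho in benchmarks.items():
    cov = [[1.0, rho], [rho, 1.0]]  # population correlation = rho
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    r = np.corrcoef(x, y)[0, 1]
    print(f"{label:28s} rho = {rho:.2f}   sample r = {r:.2f}   shared variance = {r**2:.0%}")
```

Even benchmarks most would call large leave most of the variance unexplained (r = .39 is about 15% shared variance), which is worth keeping in mind when an observed effect comes in above them.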
In "Psychology’s Increased Rigor Is Good News. But Is It Only Good News?" Barry Schwartz concludes "My aim here has only been to encourage us to acknowledge that there is a price."
Area of agreement: We must examine the impact of new rigor-improving behaviors because there are almost always unintended consequences, some of which may run counter to the aim of accelerating progress.
Disagreement: "There is an inevitable trade-off between the two types of error...The more stringently we reduce false alarms (false positives), the more vulnerable we are to misses (false negatives)."
This is true only when everything else stays the same and the decision criterion is the only thing that changes. Improving the quality of the evidence itself (e.g., larger samples, better measures) can reduce false positives and false negatives at the same time.
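A minimal simulation sketch of that point, with assumed numbers (two-group t-tests, a true effect of d = 0.4 when one exists): tightening only the alpha criterion at fixed n trades false positives for misses, but collecting more evidence per study lowers both error rates at once.

```python
# Sketch: criterion-only changes trade false positives for misses; better
# evidence (larger n) can reduce both. Numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

def error_rates(n, alpha, true_d=0.4, sims=4_000, seed=0):
    """Simulate two-group t-tests; return (false positive rate, miss rate)."""
    rng = np.random.default_rng(seed)
    fp = fn = 0
    for _ in range(sims):
        # Null study: no true effect -> any "discovery" is a false positive.
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            fp += 1
        # Real study: true effect of size d -> a non-significant result is a miss.
        a, b = rng.normal(0, 1, n), rng.normal(true_d, 1, n)
        if stats.ttest_ind(a, b).pvalue >= alpha:
            fn += 1
    return fp / sims, fn / sims

for n, alpha in [(50, 0.05), (50, 0.005), (200, 0.005)]:
    fpr, miss = error_rates(n, alpha)
    print(f"n = {n:3d}, alpha = {alpha:.3f}:  false positives ~ {fpr:.1%}, misses ~ {miss:.1%}")
```

The stricter criterion alone (same n) cuts false alarms but inflates misses; the stricter criterion plus more evidence per study cuts both relative to the starting point. That is the sense in which the trade-off is not inevitable.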
Context: You might be tempted to post new content on both platforms until it is clear that Mastodon is going mainstream. This WILL NOT work.
Most user behavior is consuming content, not producing it. Twitter has the built-in advantages of inertia and an existing audience.
Posting the same content on both gives no reason for the consumer to go to the new platform, and the barriers to moving and rebuilding one's network are high.
So, producers MUST move consumption to the new platform. How?
I favor an all-Green OA world with peer review, typesetting, copyediting, etc., as microservices, but I don't see an all-Gold OA world as necessarily as bad as others do. A few reasons, and I would love to have these challenged by others who are thinking about system change...
In an all-Gold OA world, price is visible and meaningful to authors' decisions about where to submit. Academic publishing then becomes an actual marketplace in which the decision-making consumer is price conscious, so competition on price increases.
The primary price drivers will be (1) prestige/reputation and (2) access to relevant audiences. Eventually, (3) quality of service will become influential as well. All three are reasonable price drivers, even though we hate that the first exists.
The positives: The piece has no invective, no misattribution of claims, and represents other perspectives fairly.
You might counter that this is a low bar. For hot topics, I disagree. Also, compare the piece with responses to replication circa 2014-2016. This is real, scholarly work.
I also agree with most of the intro, in which they value replication, preregistration, transparency of exploration, and caution when findings differ across outcomes/analyses.
Moreover, the paper is clear when the authors are exploring or speculating.
534 reviewers were randomized to review the same paper with the low-status author revealed, the high-status author revealed, or neither. 65% recommended rejection when shown the low-status author; 23% when shown the high-status author.
Amazing work by Juergen Huber and colleagues. #prc9
Or, look at it another way. If the reviewers knew only the low-status author, just 2% said to accept without revisions. If the reviewers knew only the high-status author, almost 21% said to accept without revisions.
I thought it was painful to have 25 reviewers for one of my papers. My condolences to these authors for having to read the comments from 534.
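For a rough sense of the size of that status effect, here is a back-of-envelope calculation on the reported rejection rates (taking the 65% and 23% above at face value; nothing else is assumed).

```python
# Back-of-envelope on the reported rejection rates (65% vs 23%), taken at face
# value: express the status effect as a relative risk and an odds ratio.
p_reject_low_status = 0.65
p_reject_high_status = 0.23

def odds(p):
    return p / (1 - p)

relative_risk = p_reject_low_status / p_reject_high_status
odds_ratio = odds(p_reject_low_status) / odds(p_reject_high_status)

print(f"Relative risk of rejection (low vs high status): {relative_risk:.1f}x")  # ~2.8x
print(f"Odds ratio of rejection: {odds_ratio:.1f}")                              # ~6.2
```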
In case it is useful perspective for anyone else, here's part of how I managed the downsides as an ECR so that the upsides dominated my experience in academia.
Key downsides that needed managing for me: (a) dysfunctional culture that rewarded flashy findings over rigor and my core values, (b) extremely competitive job market, and (c) mysterious and seemingly life-defining "tenure"
In my 3rd year (~2000), I almost left grad school. Silicon Valley was booming and calling. I was stressed, not sure that I could do the work. And I saw the dysfunctional reward system up close and wanted no part of that.