Chapter 4 of #ScienceFictions by @StuartJRitchie is on Bias. This is a subject often discussed by academics, but mainly in regard to other people’s research. Personally, I think it’s hard to overstate how important this topic is, but let’s see what the chapter offers.
The chapter opens with a brief discussion of the 19th-century American scientist Samuel Morton and his efforts to demonstrate that the moral and mental faculties of different races could be traced to their skull size. Morton’s measurements were later harshly critiqued by Gould.
Gould highlighted how Morton’s measurements appeared to be strongly contaminated by his racial bias, causing systematic measurement errors. This is a topic returned to later in the chapter, but here we get some details of how ideology could influence measurement.
The chapter, however, is not focused primarily on the well-trodden, attention-grabbing topics of racial or political bias, but on less dramatic yet more prevalent biases that impact even good researchers who strive to keep their research impartial.
In what could be the mission statement of the book, Stuart emphasizes how the core values of science somewhat ironically require that we take very seriously the subjectivity of researchers (and journal editors) and how this impacts the research literature.
The chapter next gets into the issue of publication bias & how a focus by journals (and researchers) on only reporting positive results has skewed the research literature. To help readers understand the criteria used to determine when a result is ‘positive’, p-values are discussed.
Defining p-values and their correct usage is a notoriously difficult subject, but from my reading at least Stuart does an excellent job, providing a neat primer with copious Scottish references (such as using taps-aff.co.uk to illustrate arbitrary thresholds).
Alongside statistical significance we also get coverage of effect sizes & why they are equally important. The chapter actually serves as a pretty good intro to social science stats, devoting time to explaining even relatively complex things like p-curve analysis & funnel plots.
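(A quick aside from me rather than the book: if you want to see the significance vs. effect size distinction for yourself, here’s a minimal Python sketch using numpy/scipy with made-up simulation numbers. With a large enough sample, even a trivially small effect comes out ‘statistically significant’.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups whose true difference is tiny (Cohen's d = 0.1).
n = 10_000
control = rng.normal(loc=0.0, scale=1.0, size=n)
treatment = rng.normal(loc=0.1, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treatment, control)

pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"p-value:   {p_value:.2e}")   # comfortably below the arbitrary 0.05 cut-off
print(f"Cohen's d: {cohens_d:.2f}")  # ...yet the effect itself is still tiny
```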
An important point, repeated throughout the chapter, is how confirmation bias (a tendency to favor results that confirm our expectations) lies at the heart of our skewed research literature. People rarely consider their response to negative, counterfactual outcomes.
There are plenty of illustrations given to support the arguments made, with some meta-science studies showing in depressing detail just how many null results disappear. I know from personal experience too that this practice remains very common. Though things are changing...
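(Another aside of my own, not something from the chapter: a rough simulation, with arbitrary numbers, of why those disappearing nulls matter. If journals only print the ‘significant’ studies, the published literature ends up overstating the true effect.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2    # small true effect, in standard-deviation units
n_per_group = 30     # a typically underpowered study
n_studies = 1_000

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    observed = treatment.mean() - control.mean()
    all_effects.append(observed)
    if p < 0.05:                     # only "positive" results reach the journals
        published_effects.append(observed)

print(f"True effect:               {true_effect:.2f}")
print(f"Mean effect (all studies): {np.mean(all_effects):.2f}")
print(f"Mean effect (published):   {np.mean(published_effects):.2f} "
      f"from {len(published_effects)}/{n_studies} studies")
```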
The next section of the chapter covers issues of data manipulation or, maybe better, over-analysis. Here, we are not discussing data fraud but rather widespread ‘questionable research practices’ (QRPs) which enable researchers to extract the results they want from their messy data.
Chief amongst these techniques is ‘p-hacking’, which is a process of running multiple tests, subdividing samples, and using other analytical chicanery to achieve a ‘significant’ result. Also important is HARKing - ‘hypothesizing after the results are known’.
This is akin to predicting the winning lottery numbers after you see the results. It only looks impressive if you can make it seem like you predicted the outcome in advance.
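(One more illustrative aside, not from the book: a toy simulation, with parameters I’ve picked out of thin air, showing how taking twenty analytical ‘looks’ at pure noise nearly guarantees that something comes out ‘significant’.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_experiments = 2_000
n_looks = 20        # e.g. extra outcome measures, subgroup splits, covariate tweaks
n_per_group = 25

hacked_hits = 0
for _ in range(n_experiments):
    # No real effect anywhere: both "conditions" are pure noise.
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_looks)
    ]
    if min(p_values) < 0.05:   # report whichever comparison happened to "work"
        hacked_hits += 1

print(f"At least one 'significant' result: {hacked_hits / n_experiments:.0%}")
# Roughly 1 - 0.95**20 ≈ 64%, instead of the nominal 5% for a single honest test.
```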
These concepts are introduced clearly and illustrated with relevant examples: Brian Wansink’s now-infamous food research is discussed and we also return to the original Power Posing study. vox.com/science-and-he…
An important point, which I’m glad Stuart covers, is the response of one of the authors of the original Power Posing study, Dana Carney. She was never as high profile as Amy Cuddy but in 2016 she released a remarkably honest statement detailing her lack of faith in the effect...
... and, perhaps more impressively, an accounting of all the QRPs used to achieve the original result, which has proven non-replicable. The positive note here is the response this elicited from the research community: rather than criticism and censure, she received universal praise.
I teach a lot of these concepts and cases to undergraduate students. For example, I usually have them read the original Power Posing paper, watch the TED talk, and discuss. Then the following week they read Carney’s statement and Ranehill’s attempted replication, and reassess the paper.
You can actually go further into Amy Cuddy’s response and the competing p-curves/meta-analyses, but the point is that it’s very refreshing to see these kinds of details discussed in what is ostensibly a popular science book for a non-specialist audience.
A message I would like many, especially people on Twitter(!), to take is that a) data does not interpret itself; there is lots of subjectivity in describing objective data, & b) bias does not require nefarious motives. It can, & probably usually does, come from a genuine place.
Another useful concept comes from a meta-science study comparing results initially reported in theses with the final publications in journals, which describes a ‘chrysalis effect’ whereby ugly non-significant results disappear. An important point to consider when reviewing research.
There are tonnes of other great details in this chapter, including the perils of model overfitting, but I’ll end up with an insanely long thread so I’ll try to round things off. The later sections of the chapter discuss how commitment to specific theoretical models can be restrictive.
Stuart also returns to the topic of ideological/political bias, detailing the controversy surrounding stereotype threat and how it may relate to the overrepresentation of liberals amongst psychologists. This is not just a Heterodox Academy missive, however, as he also discusses...
...issues of sexism and how they can impact surprising things like animal studies, where researchers avoid female mouse models due to the perceived confounding complexity of female hormonal cycles.
Ultimately, while recognizing the importance of including underrepresented perspectives, Stuart argues in favor of scientists retaining the ideals of objectivity and striving to minimize the influences that stem from our inevitable biases.
The chapter ends by returning to Gould’s criticism of Morton’s skull measurements, describing how Gould’s criticisms were in turn critiqued by later researchers who suggested ‘ironically, Gould’s own analysis of Morton is likely the stronger example of a bias influencing results’.
However, in a microcosm of how academia works, these third-order criticisms were not universally accepted and were themselves subject to criticism. A final twist came from the 2018 discovery of additional measurements by Morton that seem to contradict some of Gould’s claims.
Stuart’s point here is not to rehabilitate Morton’s study; he accurately describes it as almost entirely meaningless even if we accept the dubious premises, since the skulls were not taken from representative samples.
The value is rather in highlighting the need for consistent critical skepticism and for this to be applied to the debunked and debunkers alike.