This week in network epistemology we looked at bias and misinformation. This is obviously a huge topic, and we tried to look at two different aspects: the influence of industry on scientific research and the spread of rumors on social media.
The obvious way industry might influence science: simply "purchase" scientific results. The first two readings we looked at considered ways that science might be influenced more subtly, without necessarily producing outright fraudulent studies.
The first paper we read was by Weatherall, @cailinmeister, and Bruner. They look specifically at the "tobacco strategy" where industry tries to amplify legitimate science that supports their conclusions.

arxiv.org/abs/1801.01239
They consider two broad strategies for industry influence. The first is "selective sharing." In this model policy makers read some of the science, but are also sent scientific results by lobbyists who cherry pick those studies which support the industry's preferred conclusion.
One real-life example they give is where the industry amplifies truly legitimate research on other causes of lung cancer with the ... definitely not legitimate ... aim of making it seem like cigarettes don't cause cancer.
This strategy will work for the industry under four conditions:

1. There is relatively little direct interaction between policy makers and science (*cough* *cough*)

2. The scientific problem is sufficiently hard

3. The scientific community tends to produce many low-powered(*) studies instead of a few high-powered ones.

(For those unfamiliar, "power" is a term of art. Think about sample size as an approximation.)
4. The scientific community is relatively diverse in terms of what kinds of studies they produce. (Perhaps because information is limited -- people aren't sharing their results widely in the scientific community.)
This last point is very interesting to me, as it presents an important cost to what I've called "transient diversity." While transient diversity might help to make the scientific community better, it also makes it more susceptible to being misrepresented to policymakers.
The authors also consider a second strategy, where the industry funds a certain type of science in order to increase its production. Industry cannot produce fraudulent data, but it can make other choices.
What choices would they make? Again, they would produce many low-powered studies in the hopes of generating enough misleading (but nonetheless not fraudulent) studies to use to further their selective sharing strategy.
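To get a feel for why many low-powered studies make such good raw material for cherry picking, here's a minimal simulation sketch (my own toy numbers, not the authors' model). A community spends a fixed evidence budget either on many small studies or a few large ones, and we count what fraction of studies end up pointing the wrong way -- the pool a lobbyist gets to selectively share.

```python
import random

# Toy sketch (my own illustration, not the paper's model): each study tests whether
# B beats A, where B really is better and each trial favors B with probability 0.55.
# "Power" is proxied by the number of trials per study.
P_TRUE = 0.55          # true per-trial chance that B comes out ahead
TOTAL_TRIALS = 1000    # fixed total evidence budget for the whole community

def study_points_wrong_way(trials_per_study):
    """True if B fails to win more than half the trials, i.e. the study misleads."""
    wins = sum(random.random() < P_TRUE for _ in range(trials_per_study))
    return wins <= trials_per_study / 2

def misleading_fraction(trials_per_study, reps=2000):
    """Average fraction of studies a lobbyist could share as 'evidence against B'."""
    n_studies = TOTAL_TRIALS // trials_per_study
    total = 0.0
    for _ in range(reps):
        total += sum(study_points_wrong_way(trials_per_study)
                     for _ in range(n_studies)) / n_studies
    return total / reps

for trials in (10, 50, 250):
    print(f"{TOTAL_TRIALS // trials:3d} studies of {trials:3d} trials each: "
          f"~{misleading_fraction(trials):.0%} point the wrong way")
```

With the same total evidence, the small studies are far more likely to land on the wrong side of the question, so a lobbyist almost always has something perfectly legitimate-looking to forward to policy makers.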
The paper is exceptionally interesting (and there is more I'm leaving out). The class really focused on the ways the authors highlight aspects of low-powered studies and the "file drawer" problem that haven't really been pointed out before.
It's not enough that papers get published in the scientific record. Even if a scientist doesn't "leave it in the file drawer," industry might be able to achieve the same effect, recreating the file drawer problem at the policy level.
In the meta science literature, one proposal -- favored by a lot of open science folks -- is just "publish everything." But I think this paper suggests why this might be dangerous. Publishing everything means industry gets to cherry pick.
The second paper we read was this one on "industry selection" by Holman and Bruner. This is also a great paper, which looks at how industry can co-opt otherwise well-meaning scientists by leveraging legitimate methodological diversity.

journals.uchicago.edu/doi/abs/10.108…
Suppose scientists disagree about the best method to use to address a given problem -- there is genuine uncertainty. Industry might just come in and fund those who tend to produce more "industry friendly" results.
Like in evolution by natural selection, this process will tend to produce an ecology with methods that tend to be biased toward industry, even though no individual scientist conceives of themselves as being biased. (So no one is acting in bad faith.)
They show, through a model, how effective this strategy can be. While the model is interesting, I actually think the idea is far more general than the details of the model and could apply in lots of domains where there is methodological diversity -- i.e. all of science.
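To see the selection logic in miniature, here's a stylized sketch (my own numbers, not Holman and Bruner's actual model): no scientist ever changes their method or acts in bad faith, but methods that happen to yield industry-friendly results attract funding, and funded scientists get copied (imitated, or train more students) at a higher rate.

```python
import random

# Toy "industry selection" sketch (my own stylization, not the paper's model):
# each scientist's method has a fixed chance of producing an industry-friendly
# result. Industry funds whoever got a friendly result this round, and funded
# scientists are more likely to be copied, so the population of methods shifts.
random.seed(0)

N = 100
methods = [random.uniform(0.3, 0.7) for _ in range(N)]  # chance of a friendly result

for generation in range(50):
    # Industry funds scientists whose result this round happened to be friendly.
    funded = [random.random() < p for p in methods]
    # Reproduction: funded scientists are twice as likely to be copied.
    weights = [2.0 if f else 1.0 for f in funded]
    methods = random.choices(methods, weights=weights, k=N)

print(f"mean chance of an industry-friendly result after selection: "
      f"{sum(methods) / N:.2f}  (started near 0.50)")
```

The population-level bias emerges even though every individual keeps doing exactly the science they always did, which is the unsettling part of the argument.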
Both papers together show that diversity in scientific method has a dark side -- it can be used by industry to influence policy makers and can create an environment where industry "selects" for biased science.
(@psmaldino also has a related paper where they look at how the natural selection analogy can be applied to science as well. Interested readers should check that out royalsocietypublishing.org/doi/full/10.10… )
Finally, we switched gears and looked at the spread of outright false information on this particular social network. We read a paper by Arif and colleagues (including @katestarbird).

dl.acm.org/doi/abs/10.114…
They look at how rumors spread on Twitter surrounding the 2014 Sydney hostage crisis. They consider the spread of three rumors (one true, two false) and how various tweets either started, sustained, or helped to quell the rumors.
Most interestingly, they showed how, despite an enormous amount of data, it's incredibly difficult to pinpoint how people will react to rumors. Some people with relatively small followings were able to start rumors, while others with large followings didn't seem to have much influence.
This represents one way that rumors may differ from other forms of "transmission" like disease transmission. Here it doesn't seem like tweets were equally likely to "catch on." Some would "snowball" and others "fizzle."
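As an aside of my own (not from the paper): even the simplest homogeneous model, where every tweet has identical reach and the same retweet probability, already produces both snowballs and fizzles, which is part of why predicting which tweets will take off is so hard. A toy branching-process sketch:

```python
import random

# Toy branching-process sketch (my own illustration): every tweet in a cascade
# reaches the same number of followers, each of whom retweets with the same
# probability. Identical parameters still yield wildly different cascade sizes.
random.seed(1)

def cascade_size(p_retweet=0.06, followers=20, max_size=1000):
    """Total tweets in one cascade, starting from a single seed tweet."""
    size, frontier = 1, 1
    while frontier and size < max_size:
        new = sum(1 for _ in range(frontier * followers)
                  if random.random() < p_retweet)
        size += new
        frontier = new
    return size

sizes = sorted(cascade_size() for _ in range(200))
print("median cascade size: ", sizes[len(sizes) // 2])
print("largest cascade size:", sizes[-1])
```

The Twitter data suggests there is real heterogeneity (and the influence of specific accounts) on top of this baseline randomness, but the baseline alone already makes "who starts a successful rumor" hard to predict.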
Overall, this was a really interesting week that raised a lot more questions. We're going to revisit the relationship between rumors and disease later in the class, but it was good to start thinking about those issues now.