Thread by Brian Lovett, 24 tweets, 7 min read.
I have seen a lot of snide/bad takes on the findings of this study, so I wanted to weigh in. nature.com/articles/s4156…
I'm not sure if the SharedIt™️ link is reliable, so here's another link to the study entitled "Extreme opponents of genetically modified foods know the least but think they know the most". I wonder how many people read past the catchy title... nature.com/articles/s4156…
They tried a few different methods to look at this, but the source of the title was really the first study, which asked people what they thought of genetically modified foods, asked them to self-report how informed they are about GM foods, and asked 15 general science questions.
They asked the questions in that order, but let's talk about them in reverse. First, the 15 questions were *very* basic (view them here: osf.io/9gztf/). Interestingly, these true or false questions were judged on a scale.
True or false questions, by definition, are not on a scale. The first question ("Is the center of the Earth hot?") is *definitely true*. However, perhaps to prevent the existential crisis of choosing a side, they allowed participants to hedge their bets for fewer points.
So, if you said the Earth's center is definitely hot, you get 3 points; probably hot is 2 points; maybe hot is 1 point. If you admit you don't know, zero points for honesty. If you speculate that the center of the Earth is cold, how confidently you speculate matters: "definitely cold" is −3!
Built into their point system is the participant's confidence in their scientific knowledge. That seems relevant when you are going to later correlate that "objective" point system with self-reported scientific literacy... Let's think through a couple of examples.
What if someone does not feel like they know the science? They may report that their GM knowledge is lower, then proceed to give only "probably" answers. If this person gets every question right, they get a score of 30! If they get 9 wrong, they get a −6.
What if someone is super confident? They may report high GM knowledge, then proceed to pick only "definitely" answers. If they get every question right, they get a 45! If they get 9 wrong, they get a −9.
These hypothetical survey participants both answered the same number of questions correctly, but their Objective Knowledge scores could vary up to 15 points based on how willing they were to pick "definitely" over "probably".
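The two hypothetical respondents can be sketched in code. A minimal sketch: the point values come from the thread's description of the scale, but the scoring function and all names are mine, not the study's actual materials.

```python
# Sketch of the study's graded true/false scoring as described above.
# Point values (3/2/1/0, negated for wrong answers) follow the thread's
# description; this is not the authors' code.

POINTS = {"definitely": 3, "probably": 2, "maybe": 1, "don't know": 0}

def score(answers):
    """answers: list of (confidence, is_correct) pairs for the 15 questions."""
    total = 0
    for confidence, is_correct in answers:
        pts = POINTS[confidence]
        total += pts if is_correct else -pts
    return total

# The hedger: all 15 answered "probably", all correct -> 30
hedger_all_right = score([("probably", True)] * 15)

# The hedger with 9 wrong: 6*2 - 9*2 = -6
hedger_nine_wrong = score([("probably", True)] * 6 + [("probably", False)] * 9)

# The confident respondent: all 15 answered "definitely", all correct -> 45
confident_all_right = score([("definitely", True)] * 15)

# Confident with 9 wrong: 6*3 - 9*3 = -9
confident_nine_wrong = score([("definitely", True)] * 6 + [("definitely", False)] * 9)
```

Note that with identical accuracy (15/15), the two respondents differ by 15 points purely on confidence.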
How much does this matter? Well, a lot considering Figure 1. The differences that brought attention to this article are... *squints* less than 10 objective knowledge points from the least opposed to the most opposed. That is basically two wrong questions!
Sure, those with strong opposition know less, but two more wrong questions across all of science (only five questions were on genetics) doesn't really seem that different.
The authors describe the left graph in Figure 1 as trending downward as opposition increases, which is rather important for their conclusions. But it seems pretty clear the graph levels off at an opposition level of 4 (neutral to GM food).
This happens to be the lowest point in the graph, but for some reason, the title "Neutral opponents of genetically modified foods know the least" doesn't have the same ring to it.
The error bars for the points from neutral to most opposed suggest they're all statistically the same, so while it's not technically wrong to say the most opposed know the least, at best it's "maybe true".
This wordplay is not surprising, as the paper is full of irresponsible interpretation of statistics. A few gems are "close to significance in France (P = 0.15)", "marginally significant" and "the non-significant effects in Study 2 are probably false negatives".
Something becomes marginally significant if it challenges these authors' narrative, but if P=0.15 and it fits their narrative, that's "close". For such broad conclusions, the foundation seems less than stable with such wording.
Here we are at the last point of this thread: what does self-reporting your understanding of GM foods really represent? Reading the prompt, I would expect participants to self-report a high understanding if they 1) know a lot about GM foods and 2) are confident in that knowledge.
Their example for a 7 (high understanding) is a 10-line description, while their example for a 4 is only 4 lines. It's really asking the participant: could you confidently write 10 detailed lines about this topic? Then they call that self-assessed knowledge.
You should score high if you are truly knowledgeable about the topic, but you may also score high if you consume a lot of misinformation about it. That is, you have seen the misinformation a lot, so you trust it, and you know enough to write down 10 misinformed lines.
That is why I think that any article discussing these findings without the word "misinformation" appearing anywhere is a problem. If people are misinformed about GM foods, which many people are, then of course they'll report they are knowledgeable and score poorly on the science.
It's a huge error to say that people who intensely oppose GM foods know the least. I bet those people had more to say about GM foods than anyone else (knew the most), but much of what they knew was probably wrong (knew the least).
Misinformation plays a role (I would argue a major one) in properly interpreting this study, but the discussion lacked any such nuance. Instead, it attempts to resurrect a knowledge deficit model for this topic at the expense of the GM food opponents who "know least".
Given all the caveats with this study, science journalists should be careful about promoting this idea as settled or mainstream, and GM advocates will not open any minds by happily sharing this article uncritically.