An interesting study on how prediction error affects the way people update their beliefs on a topic.

Overview article: psypost.org/2021/11/psycho…

Research article: scholar.princeton.edu/sites/default/…

Relevance here to combatting misinformation.
The gist of the findings is that folks are more likely to change their mind on a topic when they're asked to make a prediction about some facts relevant to the topic and subsequently find out their prediction was wrong.
Further, the magnitude of the prediction error is notable:

"we found that prediction error size linearly predicts rational belief update
and that making large prediction errors leads to larger belief updates than being
passively exposed to evidence"
As I understand it, this isn't saying that merely exposing folks to prediction error will change their minds; rather, exposing them to prediction error actively, deliberately, and quickly has a greater chance of changing minds than letting them passively stumble onto the error on their own.
That lines up with the idea that one of the strongest techniques for getting folks to change their view on a topic is to get them to sincerely consider the alternatives. The surprise that comes along with prediction error is one way to do that.
So here, we're looking at a conversational technique to stimulate consideration of alternative views.
We know that folks who arrive at a conclusion absent logic are unlikely to change their mind based on logic. A big part of that is identity and the things that folks wrap up in it. It's hard to challenge folks based on things they consider part of their identity.
A lot of research in this area points toward not telling people what to think, but stimulating their thoughts in ways that connect other parts of their identity to alternative viewpoints. Of course, this gets used in positive and negative ways alike.
When you consider how to deliberately stimulate this kind of prediction error... here's an example.

Let's say you're discussing the merits of govt welfare programs with someone who is very against them, and you want them to consider alternative perspectives.
You could ask something like, "What percentage of folks receiving SNAP (food stamp) benefits are receiving those payments fraudulently?"

Chances are, the person you're talking to will think the number is quite high.
Because fraud detection is imperfect, the actual stat will vary, but estimates range from 3 to 7 percent. That is often a lot lower than folks who are adamantly against such programs assume absent real data. A lot of opportunity for prediction error here!
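To put rough numbers on the study's finding, here's a toy sketch (my own illustration, not the paper's actual model, and the 0.5 update rate is a made-up assumption) of how error size could drive the size of the belief update:

```python
# Toy sketch of the headline finding: the bigger the prediction error,
# the bigger the (rational) belief update. My own illustration, not the
# study's model; update_rate is a made-up assumption.

def prediction_error(predicted: float, actual: float) -> float:
    """Size of the prediction error, in percentage points."""
    return abs(actual - predicted)

def updated_belief(predicted: float, actual: float, update_rate: float = 0.5) -> float:
    """Linear toy rule: shift the belief toward the evidence in
    proportion to how wrong the prediction was."""
    return predicted + update_rate * (actual - predicted)

# Someone who guesses 40% SNAP fraud and then sees an estimate of ~5%
# makes a 35-point error, so the toy rule moves their belief much further
# than it would after a 10-point error.
print(prediction_error(40, 5), updated_belief(40, 5))   # 35 22.5
print(prediction_error(15, 5), updated_belief(15, 5))   # 10 10.0
```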
Not only that, this opens up some interesting conversations. "Wait, how do they know that?"

Many don't even realize that the govt or other groups know about or track welfare fraud in any meaningful way.
"Well, why don't they fix it?"

It's a trade-off, like most things. If you over-correct for the system problems that allow the fraud, you might exclude people who legitimately need help. So, you accept some level of fraud to make sure you can help the people who need it.
In reality, that's how a lot of government programs work. There is a trade-off between accepting some level of fraud and keeping the program able to serve those who need it.
We see this in business, too. Retail stores know some theft will happen. But to completely eliminate it would require some uncomfortable and draconian decisions that would send customers elsewhere. They accept some theft in exchange for a better customer experience.
When you frame the argument this way, it gives you another way to approach the person in the scenario above.

Start with this question:
"What percentage of SNAP payments being considered fraud would warrant scrapping the whole program?"
Now the person has to weigh how much fraud they think already exists against their own subjective cut-off for the program being worth it to society.
"I think anything greater than 20% means it's not worth it."

Well, I've got some good news for you!

Now you've created an opportunity for a prediction error, valuable discussion, and a better understanding of the tradeoffs.
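For illustration's sake, the comparison you've set up boils down to something like this (the 20% cut-off is the hypothetical answer above, and 3-7% is the estimate range mentioned earlier):

```python
# Compare the person's own "not worth it" cut-off to the estimated fraud
# rate. Numbers are illustrative, taken from the conversation above.
stated_threshold = 20.0
estimated_fraud_range = (3.0, 7.0)

if max(estimated_fraud_range) < stated_threshold:
    print("Even the high end of the estimate is under the bar they set themselves.")
else:
    print("The estimate exceeds their cut-off; the discussion shifts to the trade-offs.")
```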
None of this is a panacea; it's just a technique that research identifies as helpful in combatting misinformation and changing minds. Of course, these things work best with people acting in good faith. So, more your family at Thanksgiving than random Facebook commenters.
