For week 2 in my Network Epistemology class, we looked at a classic economics model: information cascades.

The basic idea of the information cascade model is to show that even rational individuals can sometimes engage in apparently irrational collective behavior.
The basic outline of the model: each person forms an independent judgment about some unknown fact, like "will it rain tomorrow?" They then engage in a "roll call" vote where each one votes in sequence. Every person can see the votes of those in front of them in the order.
Because they see how others vote, they can update their opinion about the chance of rain. If I don't think it's going to rain, but I see three people with umbrellas, I might change my mind. And so might others behind me.
And this is the basic fact of information cascade models: people can behave like they are conforming, when in reality they are just learning from each other.

If the first few people are wrong, they can start a "cascade" of false votes that renders the entire group wrong.
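The mechanics above can be sketched as a small simulation. This is a minimal sketch of the standard "counting rule" version of the model (function names and parameters are my own, not taken from any of the papers below): each voter gets a private binary signal that is correct with probability p, and rationally ignores their own signal once the inferred signal imbalance among predecessors reaches two.

```python
import random

def run_sequence(n, p, theta, rng):
    """Simulate one roll-call vote over a binary state theta (0 or 1).

    Each of n agents receives a private signal that equals theta with
    probability p > 0.5.  Agents vote in sequence, seeing all earlier
    votes.  Under the standard counting rule, once the inferred signal
    imbalance reaches 2, a cascade starts: every later agent copies
    the majority and their own signal stops mattering.
    """
    votes = []
    d = 0  # inferred signal imbalance: (signals for 1) - (signals for 0)
    for _ in range(n):
        signal = theta if rng.random() < p else 1 - theta
        if d >= 2:
            vote = 1          # up-cascade: copy the crowd, ignore signal
        elif d <= -2:
            vote = 0          # down-cascade
        else:
            vote = signal     # still informative: follow own signal
            d += 1 if vote == 1 else -1
        votes.append(vote)
    return votes

# Example: estimate how often the group cascades onto the WRONG answer
# even though every individual updates rationally.
rng = random.Random(42)
wrong = sum(run_sequence(30, 0.6, 1, rng)[-1] == 0 for _ in range(10_000))
print(f"fraction of runs ending on the wrong vote: {wrong / 10_000:.3f}")
```

Note that the early votes carry all the information: once two inferred signals pile up on one side, every subsequent vote is pure imitation, which is why a few unlucky early signals can mislead an arbitrarily large group.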
We started by reading one of the two classic papers on the topic by Banerjee. Banerjee presents a version of the model I just described (although his is a little weird in some respects).

watermark.silverchair.com/107-3-797.pdf?…
But he also discusses a critical point about these models: that they are an example of a social dilemma (like the Prisoner's dilemma). Each individual is acting so as to maximize their individual chance of being right. But collectively, they all end up worse off as a result.
Second, we read a paper by List and Pettit. They explore some of those same issues about "free riding" on the information provided by others. They also show how the case compares with the Condorcet Jury Theorem, which my class read about last week.

openresearch-repository.anu.edu.au/bitstream/1885…
The comparison is interesting: if everyone were ignorant of how others were voting, the group would do better. So this is a setting where limiting information would improve the performance of the group.
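That comparison can be checked directly with a small Monte Carlo experiment. This is a sketch with hypothetical parameters (25 voters, 60%-accurate signals), not the setup from the paper: it compares the group's majority-vote accuracy when voters see their predecessors (so cascades can form) against blind, independent voting of the Condorcet kind.

```python
import random

def majority_correct(see_others, n=25, p=0.6, trials=20_000, seed=1):
    """Estimate P(majority vote is correct) when the true state is 1.

    see_others=True : sequential voting with the counting rule, so
                      cascades can form.
    see_others=False: every voter just follows their private signal
                      (the Condorcet Jury Theorem setting).
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        d = 0
        votes = []
        for _ in range(n):
            s = 1 if rng.random() < p else 0  # private signal
            if see_others and d >= 2:
                v = 1                          # up-cascade
            elif see_others and d <= -2:
                v = 0                          # down-cascade
            else:
                v = s
                d += 1 if v == 1 else -1
            votes.append(v)
        wins += 2 * sum(votes) > n             # majority voted 1?
    return wins / trials

print("blind voting :", majority_correct(see_others=False))
print("with cascades:", majority_correct(see_others=True))
```

With these parameters, blind voting aggregates all 25 signals and beats the cascade setting, where the group effectively commits to the first couple of signals: more information visible to individuals, worse performance for the group.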
Third, we read this paper by Hung and Plott which presents an experiment on information cascades. First, they confirm that humans, when placed in the same setting, behave largely like the mathematical model.

jstor.org/stable/pdf/267…
But, they also explore another interesting setting. In the classic model, each individual wants to be right -- I'm striving for *my* vote to be correct. But what if I was only incentivized for the group to be correct?
In that setting, the mathematical model would predict that the information cascade *might* go away. And that's what Hung and Plott find with humans in the lab. When incentivized to care about the group, the amount of cascade-like behavior goes way down.
Lastly we read this really cool paper by my colleague Aislinn Bohren. She looks at the possibility that some people might be uninformed -- they might not see the actions of others in front of them.

static1.squarespace.com/static/5a1b26c…
The most interesting conclusion of her model is that the group does better when the individuals think that others are *rarely* uninformed -- even if that belief is false.
For a philosopher, this is a really interesting conclusion because it's a case where a false belief makes a group of rational agents better at converging to the truth.

It sounds almost (but not quite) paradoxical, and that makes it a very interesting case study.
Lots has been said about the information cascade model. I think it's neat to think about, especially during primary season in the US where states vote in sequence and a lot of attention is paid to the early states (like Iowa TODAY).
But, I also think the historical narrative is interesting. Before the information cascade models came out, economic bubbles and crashes seemed like a violation of the classic "rational actor" model. What these models show is that they are not.
Information cascades are a very early example of how rationality can fail to "scale up" -- rational individuals can make up groups that look pretty irrational. And that's a very important fact to know.