Ben Golub
11 Oct, 10 tweets, 4 min read
Many thanks for sharing this, @sinanaral and @rodrikdani.

Paper here bengolub.net/papers/naivele…, and tl;dr version here bengolub.net/papers/naivele…

and an even shorter version below.

1/
We look at a society where people update their opinions according to the _DeGroot model of updating_. It says you decide what to think tomorrow by taking a weighted average of what you and your friends think today.

2/
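As a concrete illustration (my sketch, not from the paper -- the weight matrix T and the initial opinions are made-up toy values), one round of DeGroot updating is just a matrix-vector product b(t+1) = T b(t):

```python
# DeGroot updating: tomorrow's opinion is a weighted average of today's.
# T[i][j] is the weight agent i places on agent j; each row sums to 1.
# (Toy 3-agent network with hypothetical weights.)

def degroot_step(T, b):
    """One round of updating: b(t+1) = T b(t)."""
    n = len(b)
    return [sum(T[i][j] * b[j] for j in range(n)) for i in range(n)]

T = [
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
]
b = [0.0, 1.0, 2.0]  # initial opinions b(0)

for _ in range(50):
    b = degroot_step(T, b)
# Opinions converge to a consensus; here it is the simple average (1.0),
# because this toy T happens to be symmetric.
```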
Despite its simplicity and strong assumptions, DeGroot's model has been a surprisingly helpful workhorse in networks.

We ask: suppose initial estimates are centered at the truth θ and conditionally independent.

Do we get a "wisdom of crowds" in the long run? More precisely...
3/
... given that initial estimates b(0) are as described above, will opinions after enough rounds of updating get very close to the truth?

We give a necessary and sufficient condition for this to happen in terms of the network that mediates agents' updating.

4/
To give the characterization, we define a measure of how influential people are in the network. Here the network is given by the matrix T defining the updating rule.

The measure of influence we need is a standard one - eigenvector centrality.

5/
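Concretely, the influence weights are a left unit eigenvector s of the row-stochastic matrix T: s T = s with the entries of s summing to 1, and the long-run consensus is sum_i s_i * b_i(0). A minimal sketch via power iteration, on a hypothetical 3-agent line network:

```python
# Influence (eigenvector centrality) weights for a DeGroot matrix T:
# the left unit eigenvector s with s T = s and sum(s) = 1,
# computed here by power iteration. (Toy weights, for illustration only.)

def influence_weights(T, iters=1000):
    n = len(T)
    s = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # one step of s <- s T (left multiplication)
        s = [sum(s[i] * T[i][j] for i in range(n)) for j in range(n)]
    return s

# Line network: the middle agent is listened to by both others.
T = [
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
]
s = influence_weights(T)
# The middle agent carries the largest influence weight (0.5 here).
```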
Our result says that we will see "the wisdom of crowds"
-- convergence to the truth in the long run -- exactly when nobody has too high a centrality: as the society grows large, the maximum centrality must become vanishingly small.

Here's an example of a network like this: most nodes are pretty much equal, and a few have low centralities.

6/
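A quick illustrative simulation (my sketch, not from the paper): in a large, fully egalitarian network where everyone weights everyone equally, each agent's centrality is 1/n, and updating from independent noisy estimates lands everyone very near the truth:

```python
# Wisdom of crowds in an egalitarian network: with uniform weights
# T[i][j] = 1/n, a single round of DeGroot updating already takes
# everyone to the sample mean of the initial estimates -- which, for
# large n, is very close to the truth theta. (Hypothetical parameters.)
import random

random.seed(0)
theta = 0.0   # the truth
n = 10_000
# initial estimates: centered at theta, conditionally independent
b = [theta + random.gauss(0.0, 1.0) for _ in range(n)]

consensus = sum(b) / n  # what every agent believes after updating
# consensus is within a few hundredths of theta, since each agent's
# centrality 1/n is tiny.
```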
You can prove that the expected squared error of anyone's long-run estimate will be ≤ the maximum of the centralities.

If centralities get small, estimates get precise.

In the above example, all the nodes in the core have similar centralities, so nobody's centrality can be too big.

7/
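A sanity check of that bound (a sketch with toy numbers, not the paper's proof): with unit-variance initial errors, the long-run consensus is sum_i s_i * b_i(0), so its expected squared error is sum_i s_i**2, which is at most max_i s_i because the weights sum to 1:

```python
# Check: expected squared error of the consensus = sum of squared
# influence weights <= the maximum influence weight. (Toy 4-agent
# network with hypothetical, fairly egalitarian weights.)

def influence_weights(T, iters=1000):
    """Left unit eigenvector of row-stochastic T, via power iteration."""
    n = len(T)
    s = [1.0 / n] * n
    for _ in range(iters):
        s = [sum(s[i] * T[i][j] for i in range(n)) for j in range(n)]
    return s

T = [
    [0.4, 0.2, 0.2, 0.2],
    [0.2, 0.4, 0.2, 0.2],
    [0.2, 0.2, 0.4, 0.2],
    [0.2, 0.2, 0.2, 0.4],
]
s = influence_weights(T)
expected_sq_error = sum(w * w for w in s)  # for unit-variance signals
# Here every weight is 0.25, so the error bound max(s) holds (with equality).
```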
On the other hand, in a network like this, the center gets a lot of centrality no matter how many peripheral nodes there are, and so we won't get wisdom.

The reason is intuitive: idiosyncratic errors in the initial estimate of the central node sway everyone and persist.

8/
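The star network makes this concrete. In the sketch below (hypothetical weights: the center keeps 0.5 on itself and splits the rest over the periphery; each peripheral agent puts 0.8 on the center), the center's influence weight does not shrink as the periphery grows:

```python
# Star network: the center's influence weight stays bounded away from
# zero no matter how many peripheral agents there are -- so the maximum
# centrality never vanishes, and there is no wisdom of crowds.

def star_matrix(n):
    """Row-stochastic T for a star with center 0 and n peripheral agents."""
    T = [[0.0] * (n + 1) for _ in range(n + 1)]
    T[0][0] = 0.5
    for p in range(1, n + 1):
        T[0][p] = 0.5 / n   # center listens a little to each peripheral
        T[p][0] = 0.8       # peripherals listen mostly to the center
        T[p][p] = 0.2
    return T

def influence_weights(T, iters=300):
    n = len(T)
    s = [1.0 / n] * n
    for _ in range(iters):
        s = [sum(s[i] * T[i][j] for i in range(n)) for j in range(n)]
    return s

center_weight = {}
for n in (5, 50):
    s = influence_weights(star_matrix(n))
    center_weight[n] = s[0]
# The center's weight is essentially identical at n = 5 and n = 50
# (about 0.615 with these weights): its idiosyncratic error persists.
```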
That is the content of the characterization in the title tweet.

My guess is that the real contribution of the paper was to show that network structure can play a central (no pun intended) and nontrivial role in answering whether good long-run learning happens.

9/
There are many limitations both in the model of learning used here and in the answer that we give. Fortunately, a thriving literature has made a lot of progress on both fronts.

Here is a survey covering some of that work. 10/10

bengolub.net/papers/survey.…
