An application of Aumann's agreeing-to-disagree result: certain kinds of war between rational countries are puzzling.
Since war involves destruction, better for one side to surrender and bargain. But maybe each believes it'll get more by fighting? Suppose a country fights iff
1/3
it thinks it will win with prob > 0.5. To fix notation, let X = the indicator that I win, Y = my expectation of X given my information, and Z = yours. Then war means it's common knowledge that Y > .5 > Z.
That's impossible, but the proof below doesn't quite show it. Exercise: show there's no common-knowledge (CK) event E on which Y > Z.
War here is like speculative trade (cf. the no-trade theorem): it can't happen due to different information ALONE, because by Aumann we can't both rationally expect to win.
Thus, if war is zero-sum or worse, it entails irrationality or different priors.
3/3
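Here is a minimal sketch of why such a common-knowledge event can't exist, assuming (as Aumann does) a common prior, with Y and Z as defined above; this is the standard tower-property argument, written out for concreteness rather than the proof the thread had in mind.

```latex
% Sketch, assuming a common prior.
% If E is common knowledge, then E is a union of cells of my information
% partition and also a union of cells of yours, so we may condition on E
% "through" either partition (tower property):
\[
\Pr(X = 1 \mid E) \;=\; \mathbb{E}[\,Y \mid E\,] \;>\; \tfrac{1}{2}
\qquad \text{(since } Y > \tfrac{1}{2} \text{ everywhere on } E\text{)},
\]
\[
\Pr(X = 1 \mid E) \;=\; \mathbb{E}[\,Z \mid E\,] \;<\; \tfrac{1}{2}
\qquad \text{(since } Z < \tfrac{1}{2} \text{ everywhere on } E\text{)}.
\]
% The two displays contradict each other, so Y > .5 > Z cannot hold on a
% common-knowledge event.
```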
(A darker possible explanation is that, taking only elites' interests into account, war actually generates "surplus" -- both would agree to it even if neither expected to win.)
One of my favorite things about Bob Wilson, co-winner of today's prize, is how gentle a giant he is, how modest yet understatedly charismatic and funny. This rare recording gives a sense.
"I'm here to today to argue that sequential equilibrium [his own invention with Kreps] -- which you said ... in 1982 completed the answer to the questions that Luce and Raiffa raised in 1957 -- well, MY stance was that that was a mild disaster!
2/5
"Sequential equilibrium turned out to have enormous flaws. And the revelation of those flaws has, I think, been actually opening up what the real challenge is for game theory in terms of establishing what its foundations are.
3/5
We look at a society where people update their opinions according to the _DeGroot model of updating_. It says you decide what to think tomorrow by taking a weighted average of what you and your friends think today.
2/
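In symbols (the standard DeGroot formulation; the row-stochastic weight matrix W is notation I'm adding, with W_ij the weight person i places on person j):

```latex
\[
x_i(t+1) \;=\; \sum_{j} W_{ij}\, x_j(t),
\qquad W_{ij} \ge 0,\quad \sum_{j} W_{ij} = 1,
\]
% i.e. x(t+1) = W x(t), so opinions after t rounds are x(t) = W^t x(0).
```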
Despite its simplicity and strong assumptions, DeGroot's model has been a surprisingly helpful workhorse in networks.
We ask: suppose initial estimates are centered at the truth θ and conditionally independent.
Do we get a "wisdom of crowds" in the long run? More precisely...
3/
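A small simulation sketch of that question (the network and weights below are made up for illustration, not from the thread): start with independent noisy estimates of θ, iterate the DeGroot averaging, and see how far the long-run consensus lands from the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
theta = 1.0                          # the truth
x = theta + rng.normal(0.0, 1.0, n)  # initial estimates: truth plus independent noise

# A made-up listening structure: everyone weights themselves, a handful of
# random friends, and puts extra weight on agent 0 (a "celebrity").
W = np.zeros((n, n))
for i in range(n):
    friends = rng.choice(n, size=5, replace=False)
    W[i, friends] = 1.0
    W[i, i] += 1.0
    W[i, 0] += 3.0                   # disproportionate attention to agent 0
W = W / W.sum(axis=1, keepdims=True) # make each row a set of averaging weights

# DeGroot updating: tomorrow's opinion is a weighted average of today's.
for _ in range(1000):
    x = W @ x

print("long-run opinions (society reaches a consensus):", x[:3])
print("gap between consensus and the truth:", abs(x[0] - theta))
```

Roughly speaking, the long-run consensus is a weighted average of the initial estimates, with weights given by each agent's influence in W; when one agent has outsized influence, as agent 0 does here, that agent's idiosyncratic error never gets averaged away, which is exactly the kind of failure of crowd wisdom the question points at.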
The average course size that students experience is bigger than the average size of a course, because by definition the big courses have more students experiencing them ("the class size paradox").
2/
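A toy computation of that size-biasing (the enrollment numbers are made up):

```python
import numpy as np

# Hypothetical enrollments: many small seminars and a couple of huge lectures.
sizes = np.array([10] * 18 + [300] * 2)

avg_over_courses = sizes.mean()                  # average course size
# Averaging over *students*: a course of size s is experienced by s students,
# so sizes get weighted by themselves.
avg_over_students = (sizes ** 2).sum() / sizes.sum()

print(avg_over_courses)   # 39.0
print(avg_over_students)  # ~233.1
```

The student-experienced average is E[S²]/E[S] = E[S] + Var(S)/E[S], so it exceeds the plain average whenever course sizes vary at all.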
When you go to the gym and look around, you feel relatively bad because the very frequent gym-goers are oversampled in your looking around, whereas the never-gym-goers are not sampled at all and don't make you feel (relatively) better.
3/3
This is the assertion that "your friends are more popular than you are."
Why? Simplest way to see it: some people have no friends. But because they appear in nobody's friendship circles, they're not making anyone else feel unpopular.
1/
The selection effect that applies to these friendless (a.k.a. degree zero) people also applies to other people: the more friends you have, the likelier you are to be represented in people's friendship circles. So popular people are oversampled as friends. Hence the paradox.
2/
Still, what is the paradox exactly, as a quantitative statement?
Scott Feld, who coined the term and made the paradox famous, had one way of formalizing it. It isn't my favorite way, but it's a classic, and worth meeting first.
3/
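For reference, the classic quantitative statement (this is how Feld's result is usually stated; the thread may phrase it differently): the average popularity of a friend, averaged over friendship slots rather than over people, is E[d²]/E[d], which is at least the plain average degree E[d], strictly so whenever degrees vary. A toy check on a made-up network:

```python
# Made-up friendship network: a popular hub, some mutual friends, and a
# person with no friends at all (who drags down the person-average but is
# invisible in everyone's friend lists).
edges = [("hub", "a"), ("hub", "b"), ("hub", "c"), ("hub", "d"), ("a", "b")]
people = {"hub", "a", "b", "c", "d", "loner"}

degree = {p: 0 for p in people}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Average number of friends, averaged over people.
avg_degree = sum(degree.values()) / len(people)

# Average popularity of a friend: each person is counted once for every
# friendship they belong to, so popular people are oversampled.
friend_degrees = [degree[p] for u, v in edges for p in (u, v)]
avg_friend_degree = sum(friend_degrees) / len(friend_degrees)

print(avg_degree)         # 10 / 6  ≈ 1.67
print(avg_friend_degree)  # 26 / 10 = 2.6
```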
A short thread on an obvious selection effect with some big consequences.
The social networks that are huge and very powerful now are the ones that grew the fastest. All else equal, these tend to be the ones with compelling products, but they also share another crucial thing:
1/
A willingness to make most trade-offs in favor of growth during a crucial period, which was often pretty long.
That process isn't pretty: it involves being willing to manipulate users and operate as many viral loops as possible, as long as they don't have a *growth* downside.
2/
There's also a large, and maybe more important, effect on corporate culture: the people who grow most powerful and influential at the company during this period are the ones who were willing to give up a lot of other things for growth.
3/