Looks like "tribalism vs maximalism" is trending again on cryptotwitter, drawing all kinds of wildly divergent but mostly wrong conclusions. So here's a little bit of game theory about it.
Starting point: Imagine you're able to observe a bunch of small-town teenagers every weekend, when they meet up at a diner to decide which of the two available movies to watch at the (small-town!) movie theater. Sometimes they go as a group, sometimes they split into two groups.
In those cases where the group splits up, you observe that the subgroups (the "tribes") are often similar but not always the same. Some kids stay together, some switch back and forth. Some seem to avoid each other. There seems to be a social substructure that shapes these tribes.
Out of this series of observations you can extract (at least) three research questions:
1. Are these observed choices the result of pure preference over the movies on offer, or do the kids influence each other in their decisions?
2. If so, what is the structure of the influence network and how does it shape outcomes?
3. In that case, can we say anything about whether the group as a whole makes an "optimal" decision, under some definition of optimality?
Turns out a bunch of researchers have already looked into this question, and in the process have made a number of simplifying assumptions that linger on to this day.
There's the technology adoption literature with its "network effects" and "increasing returns" branches. There's the strategic interaction & learning/coordination games literature. There's the financial herding literature. There's the coalition formation literature. Etc.
Let's formalize. Our choice space is si ∈ {−1, +1} for symmetry reasons ({0, 1} works too). The individual utility is something like:
ui = bi si + ∑j wij si sj, with i = you, j = everyone else, and wij = wji for simplicity: your bond with everyone else is symmetric.
bi is your bias or your type. It expresses how much you prefer one or the other movie before you consider anyone else's opinion. wij is the strength (weight) of your interaction effect with person j. If it's positive, you like to do things together; if it's negative, not so much.
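This utility function can be sketched in a few lines of Python. All the numbers below (biases, weights, the candidate choice profile) are made up purely for illustration:

```python
# A minimal sketch of u_i = b_i*s_i + sum_j w_ij*s_i*s_j.
def utility(i, s, b, w):
    # Social term: sum over everyone else, weighted by the (symmetric) bonds.
    social = sum(w[i][j] * s[i] * s[j] for j in range(len(s)) if j != i)
    return b[i] * s[i] + social

b = [0.5, -0.3, 0.1]                 # private biases ("types"), illustrative
w = [[0.0, 0.4, -0.2],               # symmetric weights: w[i][j] == w[j][i]
     [0.4, 0.0, 0.3],
     [-0.2, 0.3, 0.0]]
s = [+1, +1, -1]                     # one candidate choice profile

print([round(utility(i, s, b, w), 2) for i in range(3)])  # -> [1.1, -0.2, -0.2]
```

Note that kid 0 is happy here (positive bias matched, friend 1 on the same side, disliked kid 2 on the other), while kids 1 and 2 would each rather flip.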
This is a pretty straightforward model if you ignore prices, multiple choices, abstention, discounting, and asymmetric relationships, all of which you can add in later. It also has the virtue of highlighting the assumptions the various existing models make.
Many models simply set bi to zero (or a constant), which leads to homogeneous preferences. A homogeneous population will converge to a single option under positive interaction effects, unsurprisingly. Call this "clone maximalism".
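The convergence claim is easy to reproduce. A hedged sketch, assuming bi = 0, a uniform positive weight, and asynchronous best-response updates (population size and update count are arbitrary choices):

```python
import random

random.seed(0)
n = 10
s = [random.choice([-1, +1]) for _ in range(n)]  # random initial choices

def best_response(i, s):
    # With b_i = 0 and a uniform w > 0, u_i = w * s_i * sum_{j != i} s_j,
    # so the best response is simply to copy the majority of the others.
    # (With 9 others the sum is odd, so there is never a tie.)
    field = sum(s[j] for j in range(n) if j != i)
    return +1 if field > 0 else -1

for _ in range(20):                  # a few asynchronous sweeps
    for i in range(n):
        s[i] = best_response(i, s)

print(len(set(s)))  # -> 1: everyone ends up on the same side
```

Once any majority exists, each updater joins it and reinforces it, so "clone maximalism" falls out of the assumptions almost immediately.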
The Katz/Shapiro (and Metcalfe) type "network effects" models assume wij > 0 and the same for everyone. The "network" is simply a group and you choose the one that's expected to be bigger, maybe tempered by your own bias. This will also converge. Call this "clone group maximalism".
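A back-of-the-envelope version of that choice rule, with current shares standing in for expectations (all numbers illustrative):

```python
def choose(b_i, w, expected_share_plus, n):
    # Payoff difference u_i(+1) - u_i(-1) = 2*b_i + 2*w*(n-1)*(2p - 1),
    # where p is the expected share of the +1 camp among the n-1 others.
    diff = 2 * b_i + 2 * w * (n - 1) * (2 * expected_share_plus - 1)
    return +1 if diff > 0 else -1

# A kid with a mild anti-bias still joins the 70% camp if w is big enough...
print(choose(b_i=-0.2, w=0.10, expected_share_plus=0.7, n=11))  # -> 1
# ...but follows her own taste when the interaction effect is weak.
print(choose(b_i=-0.2, w=0.01, expected_share_plus=0.7, n=11))  # -> -1
```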
The "increasing returns" models are similar to network effects models, but assume sequential entry and remove foresight. You pick the group that's bigger now (wij = 0 if j hasn't picked yet) and then beg others to join your group. Call this "crypto maximalism".
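That sequential, myopic rule can be sketched as a toy adoption process. Note that in this stripped-down form the first mover already decides the outcome (parameters are illustrative):

```python
import random

random.seed(1)

def sequential_adoption(n=100):
    counts = {-1: 0, +1: 0}
    for _ in range(n):
        if counts[+1] != counts[-1]:
            # No foresight: join whichever camp is bigger right now.
            pick = +1 if counts[+1] > counts[-1] else -1
        else:
            pick = random.choice([-1, +1])  # ties: fall back on your own type
        counts[pick] += 1
    return counts

print(sorted(sequential_adoption().values()))  # -> [0, 100]: complete lock-in
```

Richer "increasing returns" models (Arthur-style) add noise so lock-in takes longer, but the flavor is the same: early accidents get amplified into permanent dominance.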
Finally, the coordination game literature assumes that wij is big enough that even with only two players, your optimal choices are doing something together, even though you might be conflicted over what the "something" should be (that's the famous "battle of the sexes").
Most of these models have been used to conclude that positive interaction effects ("influence") inevitably lead to single-choice outcomes. Typically they're also backed by various just-so stories.
Problem is that in all these models, assumptions drive outcome. Assumptions about preference homogeneity, influence homogeneity, influence being stronger than preference, sequence of decision making, etc. And in all the just-so stories, single-choice is a result of framing.
For starters, putting bi and wij into the same utility function is iffy, but we can survive that. Then, there is very little empirical backing that wij even exists. But then again, even if we agree it exists based on circumstantial evidence, wij is rarely as big as bi.
Just imagine, we could live in a world with a universal world clock if people were just ok with seeing the sun rise at 3am or 9pm. In a world with global, near-instantaneous communication and travel the cost savings would be enormous.
Indeed, the model gets much more traction if we assume influence to be small compared to preference, averaged out over the population. This is also and especially true when the influence factor is negative ("tragedy of the commons").
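Running the same best-response dynamic with heterogeneous biases that mostly dominate a small positive weight shows this: the population no longer collapses to one camp. A sketch with hand-picked, purely illustrative numbers:

```python
n, w = 10, 0.05                       # weak, uniform positive influence
b = [-0.9, -0.6, 0.2, 0.7, 0.9,       # heterogeneous private biases,
     -0.8, 0.5, -0.3, 0.6, -0.7]      # mostly larger in magnitude than w
s = [+1] * n                          # start from full consensus

def best_response(i):
    field = sum(s[j] for j in range(n) if j != i)
    return +1 if b[i] + w * field > 0 else -1

for _ in range(20):                   # asynchronous best-response sweeps
    for i in range(n):
        s[i] = best_response(i)

print(sorted(s))  # -> five -1s and five +1s: both "tribes" survive
```

Even starting from consensus, everyone whose bias outweighs the social pull defects to their preferred movie, and the split is stable.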
So when cryptopeeps make strong predictions about where the market is going based on "network effects" or "coordination" or "Schelling" something something, you'll mostly learn about their mental models rather than about actual outcomes.
This is actually true for economic models in general. The policy implication might be the attention-getter, but it's largely useless in itself. The more worthwhile investigation is to have a closer look at the conditions under which it comes to pass.
Also, for the massive effect the fundamental question "Is every economic agent an island, entire of itself?" has on economic understanding (the Axiom of Revealed Preference collapses if not), both the theoretical and the empirical body of work is still extremely thin.
Thread by oliver beige.
