He's best known for the Blackwell information ordering, a way to formalize when one signal gives a decision-maker more information than another.
A thread on Blackwell's lovely theorem and a simple proof you might not have seen.
1/
Blackwell was interested in how a rational decision-maker uses information to make decisions, in a very general sense. Here's a standard formalization of a single-agent decision and an information structure.
2/
One way to formalize that one info structure, φ, dominates another, φ', is that ANY decision-maker, no matter what their actions A and payoffs u, prefers to have the better information structure.
While φ seems clearly better, is it definitely MORE information?
3/
Blackwell found a way to say that it is. That's what his theorem is about. Most of us, if we learned it, remember some possibly confusing stuff about matrices. That's a distraction: here I discuss a lovely proof due to de Oliveira that distills everything to its essence.
4/
We need a little notation and setup to describe Blackwell's discovery: that the worse info structure is always a *garbling* of a better one.
Let's start by defining some notation for the agent's strategy, which is an instance of a stochastic map -- an idea we'll be using a lot.
Stochastic maps are nice animals. You can take compositions of them and they behave as you would expect.
Here I just formalize the idea that you can naturally extend an 𝛼 to a map defined on all of Δ(X). And that makes it easy to compose it with other stochastic maps.
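In code, if that helps: a stochastic map is just a row-stochastic matrix, extending it to Δ(X) is left-multiplication by a distribution, and composition is matrix multiplication. A minimal numpy sketch (toy numbers mine):

```python
import numpy as np

# A stochastic map alpha: X -> Delta(Y) as a row-stochastic matrix:
# alpha[x, y] = probability of y given x.
alpha = np.array([[0.9, 0.1],
                  [0.2, 0.8]])   # X = {0, 1}, Y = {0, 1}

beta = np.array([[0.5, 0.5],
                 [0.0, 1.0]])    # another map, Y -> Delta(Z)

# Extending alpha to Delta(X): a distribution p over X maps to p @ alpha,
# which is a distribution over Y.
p = np.array([0.3, 0.7])
q = p @ alpha

# Composing stochastic maps is just matrix multiplication, and the
# composite is again row-stochastic, as you would expect.
comp = alpha @ beta              # X -> Delta(Z)
assert np.allclose(comp.sum(axis=1), 1.0)
```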
Okay! That was really all the groundwork we needed.
Now we can define Blackwell's OTHER notion of what it means for φ to dominate φ'.
It's simpler: it just says that if you have φ you can cook up φ' without getting any other information.
7/
Blackwell's theorem is that these two definitions (the "any decision-maker" and the "garbling" one) actually give you the same partial ordering of information structures.
"Everyone likes φ better" is equivalent to "you get φ' from φ by running it through a garbling machine 𝛾."
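A tiny numpy sketch of the garbling machine (toy numbers mine): post-composing φ with a noisy channel 𝛾 gives a structure whose signal distributions are less spread apart across states, i.e., less informative.

```python
import numpy as np

# phi: states -> Delta(signals), as a row-stochastic matrix.
phi = np.array([[0.9, 0.1],
                [0.1, 0.9]])

# gamma: a "garbling machine" that adds noise to the signal,
# using no information beyond the signal itself.
gamma = np.array([[0.8, 0.2],
                  [0.2, 0.8]])

# The garbled structure phi' is the composition phi @ gamma.
# Its rows are closer together than phi's: the signal now
# distinguishes the states less sharply.
phi_prime = phi @ gamma
```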
To state and prove the theorem, we need one more definition, which is the set of all things you can do with an info structure φ.
The set 𝓓(φ) just describes all distributions of behavior you could achieve (conditional on the state ω) by using some strategy.
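Checking whether a given behavior 𝐝 lies in 𝓓(φ) is a linear feasibility problem: does a row-stochastic strategy α with φα = 𝐝 exist? A sketch using scipy's LP solver (the helper name `achievable` and the toy numbers are mine):

```python
import numpy as np
from scipy.optimize import linprog

def achievable(phi, d):
    """Is d (states x actions) in D(phi) (phi: states x signals)?
    Feasibility LP: find row-stochastic alpha with phi @ alpha = d."""
    n_states, n_signals = phi.shape
    n_actions = d.shape[1]
    # Constraints on vec(alpha), flattened row-major (signal, action):
    A_eq = np.vstack([
        np.kron(phi, np.eye(n_actions)),                      # phi @ alpha = d
        np.kron(np.eye(n_signals), np.ones((1, n_actions))),  # rows sum to 1
    ])
    b_eq = np.concatenate([d.ravel(), np.ones(n_signals)])
    res = linprog(np.zeros(n_signals * n_actions),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0  # 0 = feasible (optimal), 2 = infeasible

phi = np.array([[0.9, 0.1],
                [0.1, 0.9]])
# Achievable by construction: run some strategy on phi's signal.
d_good = phi @ np.array([[1.0, 0.0], [0.2, 0.8]])
# Not achievable: perfectly revealing the state beats phi's noise.
d_bad = np.array([[1.0, 0.0], [0.0, 1.0]])
```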
Now we can state the theorem. We've discussed (1) and (3) already. Point (2) is an important device for linking them, and says that anything you can achieve with the information structure φ', you can achieve with φ.
10/
de Oliveira's insight is that, once you cast things in these terms, the proof is three trivialities and one application of a separation theorem.
(1) ⟹ (2). If φ' garbles φ and you HAVE φ, then just do the garbling yourself and get the same distribution.
(2) ⟹ (1). On the other hand, if φ can achieve whatever φ' can, it can achieve "drawing according to φ'(ω)," and the strategy that does so is exactly the garbling 𝛾.
(2) ⟹ (3) says that if 𝓓(φ) contains 𝓓(φ'), then with φ you can do at least as well as with φ': the easiest step.
Note that the agent's payoff depends only on the conditional distribution of behavior given the state. Since all distributions in 𝓓(φ') are available w/ φ, the agent can't do worse.
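One can sanity-check this numerically. Below, `value` computes the best expected payoff an agent can extract from an info structure (after each signal, pick the best action); when φ' = φ𝛾 is a garbling, 𝓓(φ') ⊆ 𝓓(φ), so the value under φ is never lower, whatever the payoffs. (Helper name and toy numbers are mine.)

```python
import numpy as np

def value(phi, u, p):
    """Best expected payoff from info structure phi (states x signals),
    payoff matrix u (actions x states), prior p over states."""
    joint = p[:, None] * phi      # joint[omega, s] = p(omega) phi(s|omega)
    payoff_if = u @ joint         # payoff_if[a, s]: take action a at signal s
    return payoff_if.max(axis=0).sum()   # best action after each signal

phi = np.array([[0.9, 0.1],
                [0.1, 0.9]])
gamma = np.array([[0.7, 0.3],
                  [0.3, 0.7]])
phi_prime = phi @ gamma           # a garbling of phi
p = np.array([0.5, 0.5])

# No decision-maker prefers the garbled structure:
rng = np.random.default_rng(0)
for _ in range(200):
    u = rng.normal(size=(3, 2))   # 3 actions, 2 states
    assert value(phi, u, p) >= value(phi_prime, u, p) - 1e-9
```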
(3) ⟹ (2) is the one step that isn't just unwrapping definitions.
Suppose (2) were false: then you could get some distribution 𝐝' with φ' that you can't get with φ. The set 𝓓(φ) of distributions you can get with φ is convex and compact, so ... separation theorem! Separate 𝐝' from it.
14/
If we state what "separation" means in symbols, it gives us (*) below. But that tells us exactly how to cook up a utility function so that any distribution in 𝓓(φ), one of those achievable with φ, does worse than our 𝐝'. That's exactly what (3) rules out.
15/
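One standard way to write the separation (my notation; v is the separating linear functional):

```latex
% D(phi) is convex and compact, and d' lies outside it, so some
% linear functional v strictly separates them:
\[
\sum_{\omega,a} v(a,\omega)\, d'(a\mid\omega)
\;>\;
\max_{d \in \mathcal{D}(\varphi)}\; \sum_{\omega,a} v(a,\omega)\, d(a\mid\omega).
\tag{$*$}
\]
% Writing v(a,w) = p(w) u(a,w) for a full-support prior p turns both
% sides into expected payoffs, so an agent with payoffs u strictly
% prefers phi' to phi, contradicting (3).
```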
That's it!
Happy birthday David Blackwell, and thanks Henrique de Oliveira. Though I am the world's biggest fan of Markov matrices, there's no need to use them for Blackwell orderings once you know this way of looking at things, which gets at the heart of the matter.
16/16
typo! that red arrow label should just be 𝛾
PS/ Tagging in @smorgasborb, who I didn't know was on Twitter and whose fault this all is.
A few typos above that I hope didn't interfere too much w/ exposition of his argument: In 6, the red label was wrong - fixed here. In 13, the first φ' should be φ.
🙏
This terribly misguided paper is making the rounds.
This thread is to make it common knowledge what is wrong with it.
The basic thing: all modern economic theory allows for a gap between individual maximization and efficiency, whatever you mean exactly by each of these.
The first welfare theorem (individual optimization implies social efficiency) breaks down in the presence of frictions -
e.g., incomplete markets, asymmetric information, externalities, and market power.
Most economics today is about these frictions.
2/
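The gap between individual maximization and efficiency fits in a two-line example (entirely my toy numbers): a symmetric pollution externality in which the individually optimal choice is collectively wasteful.

```python
# Each of two symmetric firms chooses pollute (1) or abate (0).
# Polluting adds 3 to your own profit but costs the OTHER firm 4.
def payoff(me, other):
    return 3 * me - 4 * other

# Polluting is dominant: it raises your payoff whatever the other does,
# so individual optimization leads both firms to pollute...
nash_total = payoff(1, 1) + payoff(1, 1)
# ...yet total payoff is higher if both abate: a friction (the
# externality) breaks the first welfare theorem.
efficient_total = payoff(0, 0) + payoff(0, 0)
assert nash_total < efficient_total
```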
Now, the paper has some halfhearted recognition of this, but says, effectively
"Well, you know, there is some meta-stage in which institutions are chosen, and economics assumes that this choice will be made to kill all frictions except the efficient ones."
3/
a few notes on it from an economist studying network theory
The striking thing about César's hit 2009 paper on economic complexity is that it doesn't mention eigen-anything and seems surprisingly disengaged from network theory.
The economic complexity index that Hidalgo and Hausmann propose in "The building blocks of economic complexity" is a very close variant of Kleinberg's famous 1999 HITS algorithm.
It's not clear whether they're aware of this connection, but in any case
2/
economists writing about networks in 2009, such as Jackson, Acemoglu, myself, and many others, would probably have written the paper differently --
with a clearer consciousness of our big debt to the prior study of eigenthings as centrality measures!
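For concreteness, here is Kleinberg-style HITS on a toy country-by-product export matrix (data and variable names mine). The fixed point is exactly an eigen-thing: the leading singular vectors of the matrix. Hidalgo and Hausmann's "method of reflections" is a close cousin of this iteration.

```python
import numpy as np

# Toy binary country-by-product matrix: M[c, p] = 1 if country c
# exports product p competitively (made-up data).
M = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 1]], dtype=float)

# HITS: hub scores h (countries) and authority scores a (products)
# reinforce each other; iterate and normalize until convergence.
h = np.ones(M.shape[0])
for _ in range(100):
    a = M.T @ h
    a /= np.linalg.norm(a)
    h = M @ a
    h /= np.linalg.norm(h)
# h and a converge to the leading left/right singular vectors of M --
# eigenvector centrality's near relatives.
```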
I don't care at all about homework being done with AI since most of the grade is exams, so this takes out the "cheating" concern.
Students seem motivated to learn and understand, which makes the class very similar to before despite availability of an answer oracle.
2/
It's possible that (A) all the skills I'm trying to teach will be automated, not just the problem sets AND (B) nobody will need to know them and (C) nobody will want to know them.
Notice: A doesn't imply B and B doesn't imply C.
3/
A survey of what standard models of production and trade are missing, and how network theory can illuminate fragilities like the ones unfolding right now, where market expectations seem to fall off a cliff.