Ben Golub 🇺🇦
Econ prof @NorthwesternU, visiting Stanford 2023-24. Social and economic networks. From Ukraine. Past: @Harvard | @Stanford '12 | @Caltech '07
Jan 28 7 tweets 2 min read
The notion that amazing papers should not get rejected is an odd one.

Any genuinely important idea is more likely than the typical paper to be strongly disliked by referees. (Some reasons below in a short thread.)

To publish important work, editors have to be bold and overrule some negative experts.

A non-exhaustive list of reasons:

1. The first technical work in a new paradigm is often crude and simple relative to the sophisticated and elaborate papers written late in a paradigm, when methods are being polished by a large community of experts in those methods.
Nov 23, 2023 11 tweets 2 min read
I generally recommend

1. Constructing an n-by-m matrix whose rows are people and columns are issues (or dimensions of issues).

2. Finding the largest few and smallest few singular values.

3. Looking at the corresponding singular vectors in issue space.

(cont.)

1/ The top few singular vectors in issue space will tell you about "bundles" of issues along which there are considerable distances in the group.

(If these have high singular values, that corresponds to those differences explaining a lot of the group's variation in opinions.)
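Here is a minimal numpy sketch of steps 1-3, on synthetic data just to show the mechanics (centering each column is one natural preprocessing choice, not something prescribed above):

```python
# Minimal sketch of the people-by-issues SVD recipe (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(0)

# n-by-m matrix: rows are people, columns are issues (opinions on some numeric scale).
n_people, n_issues = 50, 8
opinions = rng.normal(size=(n_people, n_issues))

# Center each column so the singular vectors describe variation in opinions,
# not average positions (an assumption of this sketch, not part of the recipe above).
X = opinions - opinions.mean(axis=0)

# SVD: rows of Vt are directions ("bundles" of issues) in issue space.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

print("largest singular values:", S[:2])    # bundles explaining the most disagreement
print("smallest singular values:", S[-2:])  # directions with little disagreement
print("top issue bundle:", Vt[0])           # how each issue loads on the main axis
```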
Nov 23, 2023 7 tweets 2 min read
Talking to GPT-4 about the Sylvester-Gallai Theorem and formalizing it

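For reference, the statement under discussion:

```latex
% Sylvester--Gallai theorem
\textbf{Theorem.} Let $S$ be a finite set of points in the plane, not all lying on a single line.
Then some line passes through exactly two points of $S$.
```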
Oct 22, 2023 13 tweets 3 min read
Some new progress in math makes me hopeful about finally making progress on a big open question about opinion dynamics in social networks.

The question is: in simple models where people update opinions by averaging in friends' opinions, how long can polarization persist?

1/
In a 2012 QJE paper, Matt Jackson and I

(i) studied "time to consensus" in such learning by adapting the standard EIGENVALUE analysis of convergence times for reversible Markov matrices

(ii) showed how to approximate the answer knowing only "GROUP-level" linking data.

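To give a feel for the kind of eigenvalue calculation involved, here is a generic DeGroot-style sketch on a made-up two-group network (an illustration, not the paper's exact setup):

```python
# Generic DeGroot averaging on a two-group random network (illustrative, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
n = 100  # 50 people in each of two groups

# Random directed links: dense within groups, sparse across groups ("homophily").
A = (rng.random((n, n)) < 0.10).astype(float)
A[:50, 50:] = (rng.random((50, 50)) < 0.01).astype(float)
A[50:, :50] = (rng.random((50, 50)) < 0.01).astype(float)
np.fill_diagonal(A, 1.0)  # everyone puts some weight on themselves

# Row-stochastic updating matrix: each person averages those they listen to.
T = A / A.sum(axis=1, keepdims=True)

# The second-largest eigenvalue modulus governs the time to (approximate) consensus:
# disagreement decays roughly like |lambda_2|^t.
eigvals = np.linalg.eigvals(T)
lambda2 = sorted(np.abs(eigvals), reverse=True)[1]
print("second eigenvalue modulus:", lambda2)
print("rough half-life of disagreement (periods):", np.log(0.5) / np.log(lambda2))
```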
Jun 17, 2023 10 tweets 4 min read
For those who (like me) were interested in the "GPT can ace MIT" paper,

Here's a great short writeup by three MIT EECS seniors explaining the many things wrong with the analysis.

dub.sh/gptsucksatmit

A few quick notes.

Chowdhuri, Deshmukh, and Koplow (from now on CDK) point out some things about the methods that would probably be surprising to those who excitedly retweeted the flashiest claims.

First, GPT-4 was often fed the same problems that it was asked to solve.
Jun 15, 2023 4 tweets 1 min read
What our journals would be like if excellent novelists edited our papers.

Among active economists, who writes like this? Ed Glaeser, a little bit. Who else?
Jun 14, 2023 11 tweets 4 min read
Do individuals' ideas matter, or is the evolution of social opinion all determined by larger social forces?

Some recent threads by @DAcemogluMIT inspired me to write down one thing we know about this from research in network theory.

A short 🧵

1/
The question at the start is exactly what @JacksonmMatt and I answered in this paper, though that's not how it's stated in the abstract.

In the model, people get initial opinions from a distribution and talk in a network.

bengolub.net/wpcontent/uplo…
Jun 3, 2023 22 tweets 7 min read
🧵 about a use of ChatGPT with two plugins -- Wolfram and AskYourPDF -- to write a solution to a fairly challenging advanced undergraduate economics/game theory problem.

This is my first try with these tools -- I thought it would be interesting to share.

The topic of the problem is consumer markets with externalities, as taught in this beautiful chapter of the Easley and Kleinberg textbook.

The first challenge: how do you get all this information into ChatGPT's head?

cs.cornell.edu/home/kleinber/…
Apr 16, 2023 21 tweets 4 min read
Had an interesting conversation with an economist friend touching on AI alignment, Derek Parfit, and implementation theory. Brief notes:

A basic fear about AGI concerns the unintended consequences of following instructions.

1/
The core problem is often framed as: "EVEN IF the AI wants to behave well, it's hard to convey all our values to it."

It might violate some of our very important preferences in trying to follow instructions.

E.g. it might kill its master in trying to make paperclips.

2/
Apr 5, 2023 6 tweets 2 min read
The Law of Iterated Expectations is secretly about eigenvectors.

Let q(w) be your prior of state w;
Let P(w,w') be P[w' | Y(w)]: your probability of state w' when w is the state and Y is your info.

Then the LIE is equivalent to

q P = q

i.e., q is a (left) eigenvector of P with eigenvalue 1.

This banger is due to Dov Samet.
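A quick numerical check of the equivalence, with a made-up prior and information partition:

```python
# Numerical check that the LIE makes the prior a left eigenvector of P (toy example).
import numpy as np

q = np.array([0.1, 0.2, 0.3, 0.4])  # prior over 4 states
partition = [[0, 1], [2, 3]]        # Y: which states you cannot tell apart

# P[w, w'] = probability you assign to state w' when w is the true state,
# i.e. the prior q conditioned on the partition cell containing w.
n = len(q)
P = np.zeros((n, n))
for cell in partition:
    mass = q[cell].sum()
    for w in cell:
        P[w, cell] = q[cell] / mass

# LIE  <=>  q P = q  (q is a left eigenvector of P with eigenvalue 1).
print(np.allclose(q @ P, q))  # True
```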
Mar 30, 2023 7 tweets 2 min read
A way GPT-4 changed my teaching:

In an advanced undergrad elective, I used to grade based on problem sets only. This required obfuscating famous exercises to make cheating via Google harder.

AI can often undo obfuscation, so I don't bother and give an exam.

1/
This is *not* because it's hard to obfuscate so that GPT-4 gives bad help: e.g., ask it to prove that in a loopless undirected graph on at least 3 vertices, at least 3 must have the same degree.

But it's made obfuscation costly/uncertain enough that it's not worth it for me.

2/
Mar 29, 2023 11 tweets 3 min read
Reliably amazes me:

Leontief's economics -- about how shocks propagate through the network of firms and industries trading -- is completely outside Ph.D. economists' canon.

This is both a symptom and a cause of economics losing good physics intuition.

Old-person thread

1/
The proximate cause of what happened is pretty clear:

Leontief's analysis made very strong assumptions about production (in fact his production function is the first/main reason most of us see his name).

Micro and macro came to focus on more general & "harder" methods.

2/
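For readers who haven't seen it, here is the textbook input-output calculation at the heart of the framework (illustrative numbers, not real data):

```python
# Textbook Leontief input-output system (made-up numbers for illustration).
import numpy as np

# A[i, j] = dollars of industry i's output needed to produce $1 of industry j's output.
A = np.array([
    [0.2, 0.3, 0.1],
    [0.1, 0.1, 0.4],
    [0.0, 0.2, 0.2],
])

d = np.array([10.0, 5.0, 8.0])  # final demand for each industry's output

# Gross output x solves x = A x + d, so x = (I - A)^{-1} d.
L = np.linalg.inv(np.eye(3) - A)  # the "Leontief inverse"
x = L @ d
print("gross output:", x)

# Shock propagation: a $1 increase in final demand for industry 2 raises every
# industry's output by the corresponding column of the Leontief inverse.
print("response to a demand shock in industry 2:", L[:, 2])
```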
Nov 17, 2022 5 tweets 1 min read
The US News rankings could be replaced by a ranking implied by a statistical revealed-preference model estimated on data about the choices of cross-admitted students.

Something in this direction was done in a 2004 working paper by Avery, Glickman, Hoxby & Metrick.

There have probably been other efforts since then?

nber.org/papers/w10803
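A minimal sketch of one way to do this: a simple Bradley-Terry-style model fit to head-to-head cross-admit choices (synthetic data, and not necessarily the specification in the paper):

```python
# Bradley-Terry-style ranking from head-to-head cross-admit choices (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_schools = 5
true_strength = np.array([2.0, 1.5, 1.0, 0.5, 0.0])  # latent desirability (made up)

# Simulate cross-admitted students choosing between random pairs of schools.
wins = np.zeros((n_schools, n_schools))  # wins[i, j] = students choosing i over j
for _ in range(5000):
    i, j = rng.choice(n_schools, size=2, replace=False)
    p_i = 1.0 / (1.0 + np.exp(-(true_strength[i] - true_strength[j])))
    if rng.random() < p_i:
        wins[i, j] += 1
    else:
        wins[j, i] += 1

# Standard minorization-maximization iteration for Bradley-Terry strengths.
s = np.ones(n_schools)
for _ in range(200):
    for i in range(n_schools):
        matches = sum((wins[i, j] + wins[j, i]) / (s[i] + s[j])
                      for j in range(n_schools) if j != i)
        s[i] = wins[i].sum() / matches
    s /= s.sum()  # normalize: only relative strengths are identified

print("estimated ranking (best first):", np.argsort(-s))
```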
Nov 16, 2022 5 tweets 2 min read
As we speak, medical journals are having an ugly wrestling match over who gets to publish this.

Oh never mind, I thought it was a standalone blog post, but it is actually (I am serious) reporting on a paper that has already been published in the ... New England Journal of Medicine.
Nov 15, 2022 16 tweets 5 min read
Networks (real and virtual) let people keep up with news by learning from others.

Most social learning theories don't model the world changing.

Surprisingly, modeling it makes rational learning rules simpler: people learn by putting (unchanging) weights on friends' opinions.

1/ In the model, the world is changing, but according to a nice stochastic process: the new state is obtained by adding a little innovation to the old.

People see what neighboring people did last period (or over a longer time) and also a private signal of the current state.

2/
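Here is a minimal simulation sketch of this kind of environment -- a random-walk state, noisy private signals, and fixed linear weights on neighbors' past estimates (an illustration of the setup described above, not the paper's exact model):

```python
# Stationary social learning when the state follows a random walk (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
n, T = 20, 500

# Simple ring network: each person sees two neighbors' previous-period estimates.
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]

state = 0.0
estimates = np.zeros(n)
errors = []

# Fixed ("unchanging") weights on the private signal and on neighbors' past estimates.
w_signal, w_social = 0.5, 0.5

for t in range(T):
    state += rng.normal(scale=0.1)                    # a small innovation each period
    signals = state + rng.normal(scale=1.0, size=n)   # noisy private signals of the current state
    social = np.array([(estimates[a] + estimates[b]) / 2 for a, b in neighbors])
    estimates = w_signal * signals + w_social * social
    errors.append(np.mean((estimates - state) ** 2))

print("long-run mean squared error:", np.mean(errors[T // 2:]))
```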
Nov 13, 2022 4 tweets 1 min read
incredible performance art, troll Oscar winner, 10/10, no notes

Oct 18, 2022 12 tweets 2 min read
Why are basic decision theory and game theory fertile ground for pseudointellectuals and cranks?

Part of the reason is that it's easy to mistake these topics for doctrines prescribing behavior, though that is not what they are at all.

1/
Consider a statement like "(D,D) is the only outcome consistent with individual optimization in the prisoners' dilemma."

This is counterintuitive to very many students. Good students come to understand what game theorists actually mean when they say it.

2/
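For concreteness, here is the computation behind that statement, using the usual textbook payoffs (one conventional parameterization):

```python
# Standard prisoners' dilemma with conventional textbook payoffs.
# Actions: 0 = Cooperate, 1 = Defect. payoffs[a, b] = (row player's payoff, column player's payoff).
import numpy as np

payoffs = np.array([
    [(3, 3), (0, 5)],  # I cooperate: partner cooperates / partner defects
    [(5, 0), (1, 1)],  # I defect:    partner cooperates / partner defects
])

# Defect is a best response to either action by the other player...
for partner_action in (0, 1):
    my_payoffs = payoffs[:, partner_action, 0]
    best = "Defect" if my_payoffs[1] > my_payoffs[0] else "Cooperate"
    print(f"against partner action {partner_action}: best response = {best}")

# ...so (D, D) is the unique outcome of individual optimization,
# even though (C, C) would give both players more.
```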
Oct 13, 2022 21 tweets 4 min read
Something that's been lurking in the background of the Law of Iterated Expectations War:

A big breakthrough of 20th c. probability theory was to DEFINE conditional expectation (and conditional probability) as something satisfying a certain property, much stronger than the LIE.

This approach seems unnecessary when one first encounters it, since we have ways of EXPLICITLY defining conditional probabilities.

But as is often the case, defining objects implicitly via "deep properties" turned out to have a big payoff for the upfront cost in abstraction.
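For reference, the defining property in question is the standard measure-theoretic one:

```latex
% Conditional expectation, defined implicitly.
% Given an integrable random variable $X$ and a sub-$\sigma$-algebra
% $\mathcal{G} \subseteq \mathcal{F}$, the conditional expectation
% $\mathbb{E}[X \mid \mathcal{G}]$ is the (a.s. unique) $\mathcal{G}$-measurable
% random variable satisfying
\mathbb{E}\!\left[\,\mathbb{E}[X \mid \mathcal{G}]\,\mathbf{1}_A\,\right]
  = \mathbb{E}\!\left[\,X\,\mathbf{1}_A\,\right]
  \qquad \text{for every } A \in \mathcal{G}.
% Taking $A = \Omega$ recovers the LIE: $\mathbb{E}\big[\mathbb{E}[X \mid \mathcal{G}]\big] = \mathbb{E}[X]$.
```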
Oct 10, 2022 4 tweets 1 min read
Bernanke, Diamond, Dybvig: long and variable lags
Aug 29, 2022 9 tweets 2 min read
A valuable thing I learned in college was a sense of how physicists use math, and how not to get distracted by the parts of math not good for modeling.

You could get a great education in many things at Caltech, but physics was the center, culturally.

1/
So unless you made an effort to avoid it, you would get exposed to physics culture as a nerdy Caltech student.

Physics culture is great, because physics is such a fantastically successful and productive modeling science.

2/
Aug 28, 2022 6 tweets 3 min read
Suppose E[X^2] and E[Y^2] exist, that E[Y | X] = X, and that E[X | Y] = Y. Then X=Y a.s.

Probabilist's interp.: a forward and backward martingale is constant.

Statistician's interp.: if a Bayes estimate with respect to quadratic loss is unbiased, its Bayes risk is 0.

1/
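One standard argument under the stated L^2 assumptions:

```latex
% Since E[Y|X] = X and E[X|Y] = Y (and XY is integrable by Cauchy--Schwarz),
% E[XY] = E[X\,E[Y|X]] = E[X^2]  and  E[XY] = E[Y\,E[X|Y]] = E[Y^2].
% Hence
\mathbb{E}\big[(X - Y)^2\big] = \mathbb{E}[X^2] - 2\,\mathbb{E}[XY] + \mathbb{E}[Y^2] = 0,
% so X = Y almost surely.
```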
This is a fun little fact to prove (e.g. it's in @stat110's class and book).

I learned about its statistical interpretation, and more, from a lovely lecture by Peter Bickel at last week's @SimonsInstitute workshop day in honor of David Blackwell.

2/