To take an example from the graduate curriculum, we spend a lot of time dealing with fixed points in large models (dynamic, stochastic, etc.).
But most students taught our toolbox could say nothing useful about how to model whether agents can find these equilibria.
2/
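To make the point concrete, here is a minimal sketch (my own toy example, not anything from the thread) of why "an equilibrium exists" and "agents can find it" are different claims: naive best-response dynamics settle immediately in a coordination game but cycle forever in matching pennies, whose only equilibrium is mixed.

```python
# Toy illustration only: best-response dynamics in two 2x2 games.
# The payoff matrices and update rule are standard textbook choices,
# assumed here for illustration, not cited from the thread.
import numpy as np

def best_response(payoff, opponent_action):
    """Index of the action maximizing payoff against a fixed opponent action."""
    return int(np.argmax(payoff[:, opponent_action]))

def run_dynamics(A, B, steps=6, start=(0, 0)):
    """Alternating best responses; A is the row player's payoffs, B the column player's."""
    a, b = start
    history = [(a, b)]
    for _ in range(steps):
        a = best_response(A, b)    # row player responds to the column player's last action
        b = best_response(B.T, a)  # column player responds to the row player's new action
        history.append((a, b))
    return history

# Matching pennies: the only Nash equilibrium is mixed, so pure best responses cycle.
A_mp = np.array([[1, -1], [-1, 1]])
B_mp = -A_mp
# A coordination game: best responses lock onto a pure equilibrium right away.
A_co = np.array([[2, 0], [0, 1]])
B_co = A_co.copy()

print("matching pennies:", run_dynamics(A_mp, B_mp))  # cycles and never settles
print("coordination:   ", run_dynamics(A_co, B_co))   # stays at (0, 0)
```

The fixed-point theorem guarantees that the mixed equilibrium of matching pennies exists; the dynamics above never reach it.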
Some famous work in computer science, e.g. by Daskalakis and Papadimitriou, has studied the complexity of finding equilibria in games and markets.
But economics has not absorbed much from the methods or concepts used in this work, and has mostly shrugged off the whole thing.
3/
Phase transitions and emergent properties are important ideas in network theory, which is one of the most exciting areas of economic theory.
But most economists regard the math behind all this as exotic - the vibes are that this stuff is weird/different.
4/
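For readers who haven't seen a phase transition up close, the canonical example is the giant component of an Erdos-Renyi random graph. Below is a short, self-contained sketch (standard-library Python, my own illustration rather than anything cited above) of how the largest component jumps from negligible to macroscopic as the average degree crosses 1.

```python
# Toy illustration only: the giant-component phase transition in G(n, p).
# As the average degree c = p * (n - 1) passes 1, the largest connected
# component goes from a vanishing fraction of nodes to a macroscopic one.
import random

def largest_component_fraction(n, c, seed=0):
    """Fraction of nodes in the largest component of G(n, p) with average degree c."""
    rng = random.Random(seed)
    p = c / (n - 1)
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):                # sample each possible edge independently
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

for c in (0.5, 0.9, 1.1, 1.5, 2.0):
    print(f"avg degree {c}: largest component holds "
          f"{largest_component_fraction(2000, c):.2%} of nodes")
```

Near average degree 1 the largest component's share changes sharply with small changes in the parameter; that threshold behavior is exactly the phase transition.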
To be clear, economics is not closed to these ideas. In each case discussed above, and many like it, economists have published papers using these "offbeat" methods.
But the mainstream toolkit in economics is not seriously expanding in any of these directions.
5/
If the math used in economics were more influenced by the other successful "applied math" fields (computer science, physics, mathematical biology), it would look different, and would, in my view, be much better.
6/
There's an argument that people like Hartford should be ignored, because they weren't smart enough to understand the math.
It's true some parts of his argument, like the skepticism about constrained optimization, are silly.
But the overall vibes of his argument are right.
7/
We need to be a little less conservative and have a toolkit that looks less like Bourbaki and more like modern statistical mechanics, computer science, statistical learning theory, etc.
8/8
I don't care at all about homework being done with AI since most of the grade is exams, so this takes out the "cheating" concern.
Students seem motivated to learn and understand, which makes the class very similar to before despite availability of an answer oracle.
2/
It's possible that (A) all the skills I'm trying to teach will be automated, not just the problem sets AND (B) nobody will need to know them and (C) nobody will want to know them.
Notice: A doesn't imply B and B doesn't imply C.
3/
A survey of what standard models of production and trade are missing, and how network theory can illuminate fragilities like the ones unfolding right now, where market expectations seem to fall off a cliff.
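As a hedged illustration of the kind of fragility such a network lens picks up (a toy model of my own, not the survey's), here is a threshold cascade on a random supply network: each firm fails once a large enough share of its suppliers has failed, so the same single shock can be harmless or systemic depending on the threshold.

```python
# Toy illustration only: a threshold failure cascade on a random supply network.
# The threshold rule and the random supplier assignments are illustrative
# assumptions, not a model from the survey mentioned above.
import random

def simulate_cascade(n=200, suppliers_per_firm=3, threshold=0.5, seed=1):
    """Return how many firms end up failed after shocking firm 0."""
    rng = random.Random(seed)
    # each firm draws a fixed set of suppliers uniformly at random
    suppliers = {i: rng.sample([j for j in range(n) if j != i], suppliers_per_firm)
                 for i in range(n)}
    failed = {0}                      # initial shock
    changed = True
    while changed:
        changed = False
        for firm in range(n):
            if firm in failed:
                continue
            failed_share = sum(s in failed for s in suppliers[firm]) / suppliers_per_firm
            if failed_share >= threshold:
                failed.add(firm)
                changed = True
    return len(failed)

for threshold in (0.9, 0.6, 0.3):
    print(f"failure threshold {threshold}: "
          f"{simulate_cascade(threshold=threshold)} of 200 firms fail")
```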
When AGI arrives and replaces all human work, there won't be human sports.
Instead of watching humans play basketball, we'll watch humanoid robots play basketball; robots will, after all, play better.
Similarly, robot jockeys will ride robot horses at the racetrack.
1/
There won't be humans getting paid to compete in chess tournaments.
MagnusGPT will not only play better than any human plays today, but also make that characteristic smirk and swivel his head around in that weird way.
2/
There certainly won't be humans getting paid to work as nurses for the sick and dying, because robots with soft hands will provide not only sponge baths but better (superhuman!) company and comfort.
3/
Played around with OpenAI Deep Research today. Thoughts:
1. Worst: asked it to find the fourth woman ever elected to Harvard's Society of Fellows - simple reasoning was required to assess ambiguous names. It gave the wrong person. A high school intern would do better.
1/
2. Asked it to list all economists at top 15 econ departments in a specific subfield w/ their citation counts. It barely figured out the US News ranking, its list of people was incomplete, and it ran into problems accessing Google Scholar so cites were wrong/approximate.
2/
3. Asked it to find excerpts of bad academic writing of at least 300 words each.
It thought for 10 minutes and came up with stuff like this (obviously non-compliant with the request).