Alex Imas
Professor at @ChicagoBooth. Economics + Applied AI. Visiting Princeton 2025-2026 academic year. Essays: https://t.co/9qSiQxuFtC
Jan 7
New post on whether advanced AI/AGI can lead to negative economic growth, focusing on the role of market demand.

Intuition is simple: if AI automates most jobs, who will be left to buy the products being produced—even if they do become much cheaper? Will firms continue to invest in capital if they don’t think there will be enough demand for their products?

This type of intuition can be found across influential books and in the popular press. So I spent some time writing down economic models to see what conditions are required for such a demand collapse to occur. 🧵

Turns out these conditions do exist. I discuss two possibilities in the essay. The first considers demand collapse: automation redistributes income away from high-MPC workers toward wealthy, low-MPC capital owners. substack.com/home/post/p-18…
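The mechanism in that first scenario can be sketched with hypothetical numbers (mine, not the essay's): if consumption is MPC-weighted income, shifting the income share from workers to capital owners lowers aggregate demand even when total income is unchanged.

```python
# Illustrative sketch with made-up parameters: aggregate consumption demand
# when automation shifts income from high-MPC workers to low-MPC capital owners.

def aggregate_demand(worker_income: float, owner_income: float,
                     mpc_worker: float = 0.9, mpc_owner: float = 0.3) -> float:
    """Consumption demand as the MPC-weighted income of the two groups."""
    return mpc_worker * worker_income + mpc_owner * owner_income

# Total income is 100 in both cases; only its distribution changes.
before = aggregate_demand(worker_income=70.0, owner_income=30.0)  # labor share 70%
after = aggregate_demand(worker_income=30.0, owner_income=70.0)   # labor share 30%
print(before, after)  # demand falls even though total income is constant
```

The point of the toy example is only that the composition of income matters for demand when marginal propensities to consume differ across groups.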
Dec 8, 2025
How did we actually study these questions?

We set up an experimental marketplace where buyer and seller principals either negotiate directly or delegate to AI agents. Negotiation takes place within a set zone of potential agreement (max-min reserve prices).
We compare outcomes of AI-agentic interactions to a human-human benchmark. 1/n

Why this setting?

We have ground truth on performance: Buyer/Seller surplus. This setting *induces* preferences, meaning that all Buyer and Seller principals are maximizing the exact same objective function. This allows us to make tight hypotheses about sources of heterogeneity in outcomes, since things like different tastes matter less: all you’re trying to do is maximize surplus.

Easy to establish the human-to-human benchmark by having people engage in the exact same negotiation with the exact same parameters. 2/n
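The setup above can be parameterized in a few lines; the reserve prices and agreed price here are hypothetical, not the paper's actual values. A deal is only feasible inside the zone of potential agreement (seller's minimum up to buyer's maximum reserve price), and total surplus always equals the width of that zone.

```python
# Hypothetical parameterization of the experimental marketplace:
# surplus is the ground-truth performance measure for each principal.

def surpluses(price: int, buyer_reserve: int, seller_reserve: int) -> tuple:
    """Buyer and seller surplus for a price inside the zone of potential agreement."""
    if not (seller_reserve <= price <= buyer_reserve):
        raise ValueError("price falls outside the zone of potential agreement")
    return buyer_reserve - price, price - seller_reserve

# Example: buyer will pay at most 80, seller will accept at least 40.
buyer_s, seller_s = surpluses(price=60, buyer_reserve=80, seller_reserve=40)
# Total surplus (buyer_s + seller_s) equals the zone width (80 - 40)
# no matter where inside the zone the agreed price lands.
```

Because preferences are induced this way, comparing how AI agents versus humans split that fixed surplus is a clean performance benchmark.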
Nov 29, 2025
Here is why I’m worried about AI-driven labor disruption and why I think economists should fight for a seat at the policy table.

TL;DR: AI is a general purpose tech; disruption can happen at a different scale than before. "Step back, let ‘er rip, and analyze later" may be disastrous.

1/n
Nearly everyone actually working in AI agrees that the tech will cause major labor market disruption, on a scale not seen before. Even if you don’t believe in AGI, you don’t need much model improvement for the tech to have a major impact as orgs learn to use it. 2/n

Oct 15, 2025
@R_Thaler and I wrote a book about behavioral economics

You should get it if you:
- Want to learn where BE started, how far it’s come, and where (we think) it’s going
- Teach BE (comes w/ slides+materials)
- Curious about replicability (we replicated main studies)

🧵w/ links

You may be thinking: I've seen/read the Winner's Curse before.

The original contained @R_Thaler's Anomalies columns from the JEP.

The new book has ~2/3 new content. 2/n

amazon.com/Winners-Curse-…
Sep 24, 2025
I’ve heard this comment several times in replies, so let me explain why Marx’s labor theory of value has no empirical content (i.e., no observable inputs/outputs for testing or prediction) but is still conceptually useful as a means of thinking about power in systems of exchange 🧵

In Capital Vol. 1, Marx builds on Ricardo’s theory of value, stating that the value of a commodity is proportional to the “socially necessary labor time” required for its production.
Jun 15, 2025
I think the issue is that the "nudge" movement has misunderstood the point of the original "libertarian paternalism" paper and the "nudge" book. The original works argued that seemingly irrelevant factors can have an impact on behavior when *a set of conditions are met* 🧵

The original work *did not* argue that any seemingly irrelevant factor would have a huge effect everywhere. You first had to do a ton of research to see if a nudge was appropriate in the first place. Want to test a salience intervention for taxes? First make sure people are inattentive to them. Want to test a reminder intervention? Make sure that imperfect recall is a problem in the setting!
Mar 13, 2025
There is a field experiment showing this exact effect. Introducing GPT tutors increases performance by *a lot*--students seem to be picking up the material much faster--but when GPT is removed, those who had access perform *much worse* compared to those w/o access. 1/4

This raises the question: what do we want students to learn in our classes? There is an argument that students will always have access to GPT, so assignments should allow access.

But another view is that the class is not just teaching material, but how to *think* through problems. 2/4