Jason Abaluck
Professor of Economics at Yale SOM
Dec 22 · 8 tweets · 2 min read
I suspect we are about to enter an interim period where AI exceeds human performance on many cognitive tasks, but this is not common knowledge, and so most people and institutions act as if it is not generally the case. This may well already be true of self-driving cars.

I think it is going to be true for large swaths of academia, government, and industry, but even blind testing won't persuade -- many people will insist that nothing generalizes beyond the very specific context studied.
Dec 10 · 12 tweets · 3 min read
A book review of Tomorrow and Tomorrow and Tomorrow by Gabrielle Zevin.

This is probably the first book review you will read that has absolutely no spoilers and that you will appreciate equally whether you have read the book or not (at least if you read to the end). The first few characters introduced in the book are named after characters from James Joyce novels -- I read Portrait of the Artist for a high school class and had enough of a passing familiarity with his other work to recognize the names.
Dec 6 · 9 tweets · 2 min read
You cannot have:
a) Low healthcare costs
b) No consumer cost sharing
c) Doctors do everything they think benefits patients
d) No insurer oversight

If consumers have no cost-sharing and doctors do everything with positive benefit, many procedures (e.g., marginal scans) will get done that have high costs relative to their actual benefits, and this shows up in higher premiums. A stylized sketch of the premium arithmetic follows.
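As a minimal illustration of that mechanism, here is a Python sketch assuming an actuarially fair premium (the premium equals expected claims per enrollee); the scan cost, utilization rate, and benefit figures are all hypothetical:

```python
# Stylized premium arithmetic (all numbers hypothetical, for illustration only).
# With no cost-sharing and no insurer oversight, an actuarially fair premium
# must cover every service delivered, regardless of its health benefit.

baseline_claims = 4000.0   # expected annual claims per enrollee ($), assumed
scan_cost = 500.0          # cost of one marginal scan ($), assumed
scan_use_rate = 0.40       # share of enrollees scanned each year, assumed
scan_benefit = 50.0        # dollar-valued health benefit per scan, assumed

premium_without = baseline_claims
premium_with = baseline_claims + scan_use_rate * scan_cost

print(f"Premium without marginal scans: ${premium_without:,.0f}")  # $4,000
print(f"Premium with marginal scans:    ${premium_with:,.0f}")     # $4,200

# Premiums rise by $200 per enrollee to fund scans worth only $50 each:
# nothing in conditions (b)-(d) screens out care whose cost exceeds its benefit.
```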
Nov 25 · 4 tweets · 1 min read
I think @DrJBhattacharya is a good appointment compared to the other options. He is less honest than the average health economist (partly due to self-deception), but he would be a voice of reason compared to RFK & company. He is at least trained to evaluate evidence. His wrongness about Covid was due to overconfidence in his ability to model and calculate, but that's mostly not what he would be doing in an administrative position.
Nov 15 · 6 tweets · 2 min read
I've updated considerably on the size of the negative externality created when scientists declare their political allegiances. This doesn't mean they should never do so, and *certainly* doesn't mean avoiding research on controversial issues, but it does mean being mindful of the downside of overt advocacy.

The National Bureau of Economic Research has good norms about avoiding directly prescriptive language. Your paper can analyze single-payer healthcare and report costs and benefits given the model you wrote down without then saying, "And therefore we should do this!"
Nov 12 · 4 tweets · 2 min read
The total payroll of the federal government is about $110 billion a year (buff.ly/3CnrMCx)

Federal government spending was $6.1 trillion (buff.ly/3YLb0Vf)

You cannot meaningfully shrink the federal government by firing "unelected bureaucrats"
What is the money spent on? Medicare, Medicaid, and Social Security are 45%. Defense and debt payments are 28%. The VA, education, and transportation are 15%. SNAP, UI, child nutrition, and the earned income tax credit are 7.5%. The remainder is stuff like military pensions.
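The arithmetic is worth making explicit. Using the thread's own figures (and no others), a quick sanity check in Python:

```python
# Back-of-the-envelope check using the thread's own figures.

payroll = 110e9     # total federal payroll, $/year (from the thread)
spending = 6.1e12   # total federal spending, $/year (from the thread)

print(f"Payroll as a share of spending: {payroll / spending:.1%}")  # ~1.8%

# Spending breakdown from the thread (shares of the total):
shares = {
    "Medicare, Medicaid, Social Security": 0.45,
    "Defense and debt payments": 0.28,
    "VA, education, transportation": 0.15,
    "SNAP, UI, child nutrition, EITC": 0.075,
}
print(f"Share accounted for: {sum(shares.values()):.1%}")  # 95.5%
# The ~4.5% remainder is items like military pensions. Even firing every
# federal employee would cut spending by under 2%.
```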
Oct 24 · 17 tweets · 3 min read
There is a genre of Twitter post that goes like:
1) I, the critiquing hero, have read a paper
2) I discovered assumptions
3) These assumptions might be wrong

Well, no shit. But can alternative stories plausibly explain the data?

@justinTpickett @eigenrobot I pick on this thread to illustrate how misleading these critiques can be -- they are often as ideologically motivated as the papers they critique.

Let's "see what's going on under the hood" of this thread:
Sep 26 · 6 tweets · 2 min read
I think of this as the "mathematician's fallacy" -- or what the general public might think of as the "academic's fallacy": the belief that it's not worth investing our time until we can apply a rigorous toolkit and draw definitive conclusions. There are huge, policy-relevant questions now or in the near future (e.g., SB 1047). The world is not going to wait for our toolkit to catch up. We need to do the best we can with the tools we have available, and develop new ones appropriate to the urgency of the question.
Sep 25 · 14 tweets · 3 min read
As an applied economist, I'm 95% with Ben here. I think a lot of the work applied economists are doing on AI is misconceived, and there needs to be a more radical reconsideration of underlying theory and models before anything useful will emerge. The kind of work by applied economists that I expect to *publish well* over the next 5 or so years consists of studies that say things like, "We study the introduction of the PC/electricity/cars/agriculture/etc. and use this to learn about skill-biased technological change."
Aug 24 · 8 tweets · 2 min read
This is one of the more interesting articles I have read this year: the former dean of Harvard Medical School had a start-up in the late 1980s based on using GLP-1s to treat diabetes and promote weight loss, but Pfizer stopped funding it despite promising early results.

Two lessons. First, they apparently had promising internal scientific results that were never shared, even after the project was abandoned. How many other groundbreaking results from abandoned projects are gated within pharma companies and never see the light of day?
Jul 23 · 16 tweets · 3 min read
If richer people work less because they are richer, this does not tell us about the (net) benefits of making poor people richer.

It doesn't inform how much we should redistribute or whether we should do so via a UBI.

Income effects are not substitution effects! (A sketch of the standard decomposition follows below.)

Some UBI proponents say a UBI will fix market failures: people who don't have access to credit will start businesses, and people afraid to seek medical care will get needed care, improving health. The (excellent) study above finds little evidence for these claims.
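To pin down the distinction, here is the standard textbook Slutsky decomposition for labor supply (nothing here is specific to the study above). With hours worked h(w, y), wage w, and unearned income y:

```latex
% Slutsky decomposition for labor supply h(w, y): a standard textbook identity.
\[
\underbrace{\frac{\partial h}{\partial w}}_{\text{total effect}}
= \underbrace{\left.\frac{\partial h}{\partial w}\right|_{\bar{u}}}_{\text{substitution effect}}
+ \underbrace{h \,\frac{\partial h}{\partial y}}_{\text{income effect}}
\]
```

A lump-sum transfer such as a UBI changes only unearned income y, so it identifies the income effect alone; the substitution effect, which governs responses to wages and taxes, is left untouched.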
Jun 1, 2023 · 15 tweets · 3 min read
What will be the impacts of AI on democracy? Some speculation on:
1) Will AIs become trusted information brokers?
2) Will the market be dominated by a small number of corporations and AIs?
3) What will be the impacts of AI on the distribution of power and resources?

In this thread, I'm principally concerned with the medium-term impacts, when AI resembles better versions of what we have today rather than when we have AI with human-level capabilities or better across many domains (timeframe: ).
Jun 1, 2023 · 19 tweets · 4 min read
Here is my summary of forecasted time to AGI/human-level AI from recent surveys and prediction markets (synthesizing and updating recent posts). This 2022 survey suggests a median AGI date of 2059 (aiimpacts.org/2022-expert-su…); in 2016, that date was 2061. Some researchers working at the intersection of AI and cognitive science think that cognitive scientists better understand the challenges involved and that the above survey focuses on ML researchers.
May 31, 2023 · 9 tweets · 2 min read
To economists who are minimizing the dangers of AI:
1) Although there is uncertainty, most AI researchers expect human-level AI (AGI) within the next 30-40 years (or much sooner)
2) Superhuman AI is dangerous for the same reason aliens visiting Earth would be dangerous

If you don't agree that AGI is coming soon, you need to explain why your views are more informed than those of expert AI researchers. The experts might be wrong -- but it's irrational to assert with confidence that you know better than they do.
May 19, 2023 · 11 tweets · 2 min read
Agree with most of @DAcemogluMIT's thread but not this. The threat is likely not immediate, but it is serious. The danger is not malevolent AI, but AI that is far more capable than humans at a wide range of activities optimizing for something that leads to bad outcomes for humans.

Intelligence is not binary. Humans are very good at, say, 10,000 cognitive tasks (the specific number is made up). GPT-4 is very good at 50 of them, and superhuman at a handful. Subsequent models will be more capable at a broader range of tasks.
May 5, 2023 · 13 tweets · 3 min read
Using the value of a statistical life seems like a fraught way of making decisions until you consider how much better it is than every alternative that has been proposed. Any particular implementation will involve controversial normative and empirical assumptions: Is efficiency always desirable? Will transfers actually happen to achieve efficiency in practice (no)? Were people informed about risk when we estimated their willingness to pay?
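For reference, the standard construction (a textbook definition, not any particular implementation): if each person will pay WTP for a small reduction Δp in mortality risk, then

```latex
% Value of a statistical life: willingness to pay per unit of risk reduction.
\[
\mathrm{VSL} = \frac{WTP}{\Delta p},
\qquad \text{e.g.,} \quad
\frac{\$1{,}000}{1/10{,}000} = \$10 \text{ million.}
\]
```

Intuitively, collect $1,000 from each of 10,000 people facing that risk reduction, and the $10 million raised buys one statistical life saved in expectation. The controversies above are about how WTP gets estimated, not about this arithmetic.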
Apr 27, 2023 · 9 tweets · 3 min read
Interesting work from @ChadJonesEcon, but there are two key additional ingredients I would like to see in such models:
1) Marginal value of $ or time invested in AI safety (as a function of AI progress?)
2) Option value of delay, in light of 1)

I especially like that @ChadJonesEcon considers the value of potential mortality reductions from AI, which seems essential and overlooked in this debate; that value now needs to be integrated with the considerations above.
Apr 27, 2023 · 10 tweets · 2 min read
One response I haven't seen yet: your existential crisis is probably correct, your research is probably wrong or ill-motivated, and you should fix this by asking better questions, getting better data, or finding a better identification strategy (probably all three).

A cardinal and understandable mistake that grad students make is to think: "I have been working on X project for 2 years and it is my first project. If I don't have a paper, I have accomplished nothing. So, I'll keep trying to write a paper I know should not be written."
Apr 4, 2023 · 10 tweets · 2 min read
I expect in the next decade or so many more papers in health economics taking seriously that doctors, hospitals, and (to some degree) insurance plans are rivalrous -- so we need to model extensive margins and comparative advantage to understand systemwide efficiency. Some examples (a toy sketch of the first follows the list):
1) Medicare increases reimbursement -- are there more doc hours or are non-Medicare patients crowded out?
2) Do narrow networks -> better matching of patients to physicians?
3) Copays reduce office visits for some -- does access improve for others?
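As a minimal sketch of the rivalry in example 1 (a toy allocation model with hypothetical numbers, not an estimated specification): if a physician with a fixed hour budget fills higher-paying demand first, raising Medicare reimbursement can increase Medicare volume purely by crowding out other patients.

```python
# Toy model of rivalrous physician time (hypothetical numbers throughout):
# a physician with a fixed hour budget serves the higher-paying segment first.

def allocate(hours, demand_medicare, demand_private, rate_medicare, rate_private):
    """Split a fixed hour budget across two patient segments,
    filling the higher-reimbursement segment's demand first."""
    segments = sorted(
        [("medicare", demand_medicare, rate_medicare),
         ("private", demand_private, rate_private)],
        key=lambda seg: seg[2], reverse=True)
    served, remaining = {}, hours
    for name, demand, _rate in segments:
        served[name] = min(demand, remaining)
        remaining -= served[name]
    return served["medicare"], served["private"]

# 100 hours of capacity, 60 hours of demand in each segment.
before = allocate(100, 60, 60, rate_medicare=80, rate_private=100)
after = allocate(100, 60, 60, rate_medicare=120, rate_private=100)

print(f"Before raise: medicare={before[0]}h, private={before[1]}h")  # 40h / 60h
print(f"After raise:  medicare={after[0]}h, private={after[1]}h")    # 60h / 40h

# Medicare hours rise by 20 with zero change in total supply: the entire
# "effect" of higher reimbursement is crowd-out of non-Medicare patients.
```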
Mar 30, 2023 · 11 tweets · 3 min read
What is especially preposterous about this is that we know preventive care is exactly the kind of thing that health insurance markets will underprovide if not properly regulated -- so this ruling says, "It's not legal to have well-functioning health insurance markets."

We know that cost-sharing leads people to indiscriminately cut back on care. They don't just cut back on marginal care -- they also do less of the valuable stuff. We saw this in the RAND experiment (below):
Mar 27, 2023 · 12 tweets · 4 min read
To be clear, I would distinguish between:
1) It makes sense to invest resources in incentivizing & rewarding AI safety (as judged by experts)
2) We should regulate AI to slow down its development until our understanding "catches up"

The value of rewarding innovations likely to make AI safer seems very high to me given recent developments (although I completely agree with everyone who emphasizes *uncertainty* about both AGI timelines and consequences).