Jason Abaluck
Professor of Economics at Yale SOM
Jun 1, 2023 15 tweets 3 min read
What will be the impacts of AI on democracy? Some speculation on:
1) Will AIs become trusted information brokers?
2) Will the market be dominated by a small number of corporations and AIs?
3) What will be the impacts of AI on the distribution of power and resources?

In this thread, I'm principally concerned with the medium-term impacts, when AI resembles better versions of what we have today rather than when we have AI with human-level capabilities or better across many domains (timeframe: ).
Jun 1, 2023 19 tweets 4 min read
Here is my summary of forecasted time to AGI/human-level AI from recent surveys or prediction markets (synthesizing and updating recent posts). This 2022 survey suggests a median AGI date of 2059 (aiimpacts.org/2022-expert-su…). In 2016, that date was 2061.

Some researchers specializing in AI and cognitive science think that cognitive scientists better understand the challenges involved and that the above survey focuses on ML researchers ()
May 31, 2023 9 tweets 2 min read
Economists who are minimizing the danger of AI risk:
1) Although there is uncertainty, most AI researchers expect human-level AI (AGI) within the next 30-40 years (or much sooner)
2) Superhuman AI is dangerous for the same reason aliens visiting Earth would be dangerous.

If you don't agree that AGI is coming soon, you need to explain why your views are more informed than those of expert AI researchers. The experts might be wrong -- but it's irrational for you to assert with confidence that you know better than they do.
May 19, 2023 11 tweets 2 min read
Agree with most of @DAcemogluMIT's thread but not this. The threat is likely not immediate, but it is serious. The danger is not malevolent AI, but AI that is far more capable than humans at a wide range of activities, optimizing something that leads to bad outcomes for humans.

Intelligence is not binary. Humans are very good at, say, 10,000 cognitive tasks (the specific number is made up). GPT-4 is very good at 50, and superhuman at a handful. Subsequent models will be more capable at a broader range of tasks.
May 5, 2023 13 tweets 3 min read
Using the value of a statistical life seems like a fraught way of making decisions until you consider how much better it is than every alternative that has been proposed. Any particular implementation will involve controversial normative and empirical assumptions -- is efficiency always desirable, will transfers happen to achieve efficiency in practice (no), were people informed about risk when we estimated their willingness to pay, etc...
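To make the trade-off concrete, here is a minimal sketch of how a VSL-based cost-benefit test works; all numbers (the $10M VSL, the risk reduction, the policy cost) are hypothetical assumptions, not estimates from any study.

```python
# Illustrative VSL cost-benefit test with made-up numbers.
VSL = 10_000_000           # assumed value of a statistical life, $
population = 1_000_000     # people exposed to the risk
risk_reduction = 2e-5      # assumed drop in annual mortality risk per person
policy_cost = 150_000_000  # assumed annual cost of the policy, $

expected_lives_saved = population * risk_reduction  # 20 statistical lives
monetized_benefit = expected_lives_saved * VSL       # $200M

print(f"Expected statistical lives saved: {expected_lives_saved:.1f}")
print(f"Monetized benefit: ${monetized_benefit:,.0f} vs cost ${policy_cost:,.0f}")
print("Passes cost-benefit test" if monetized_benefit > policy_cost else "Fails cost-benefit test")
```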
Apr 27, 2023 9 tweets 3 min read
Interesting work from @ChadJonesEcon but there are two key additional ingredients I would like to see in such models:
1) Marginal value of $ or time invested in AI safety (as a function of AI progress?)
2) Option value of delay in light of 1). (A toy illustration of 2 follows below.)

I especially like that @ChadJonesEcon considers the value of potential mortality reductions from AI, which seems essential and overlooked in this debate and now needs to be integrated with the considerations above.
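As a toy illustration of 2), here is a minimal two-period sketch of option value, with invented probabilities and payoffs: delaying is valuable because we can learn whether deployment is dangerous before committing.

```python
# Toy two-period illustration of the option value of delaying deployment.
# Probabilities and payoffs are invented for illustration only.
p_danger = 0.2          # assumed probability the technology is dangerous
payoff_safe = 100.0     # assumed payoff from deploying a safe system
payoff_danger = -400.0  # assumed payoff from deploying a dangerous one
delay_cost = 10.0       # assumed cost of waiting one period to learn which it is

deploy_now = p_danger * payoff_danger + (1 - p_danger) * payoff_safe
# Waiting reveals the state; we then deploy only if it turns out to be safe.
wait_then_decide = (1 - p_danger) * payoff_safe - delay_cost

print(f"Deploy now:            {deploy_now:.1f}")
print(f"Wait, then decide:     {wait_then_decide:.1f}")
print(f"Option value of delay: {wait_then_decide - deploy_now:.1f}")
```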
Apr 27, 2023 10 tweets 2 min read
One response I haven't seen yet: your existential crisis is probably correct, your research is probably wrong or ill-motivated, and you should fix this by asking better questions, getting better data, or finding a better identification strategy (probably all three). A cardinal and understandable mistake that grad students make is to think: "I have been working on X project for 2 years and it is my first project. If I don't have a paper, I have accomplished nothing. So, I'll keep trying to write a paper I know should not be written."
Apr 4, 2023 10 tweets 2 min read
I expect in the next decade or so many more papers in health economics taking seriously that doctors, hospitals, and (to some degree) insurance plans are rivalrous -- so we need to model extensive margins or comparative advantage to understand systemwide efficiency. Some examples (a toy sketch of the crowd-out logic follows the list):
1) Medicare increases reimbursement -- are there more doc hours or are non-Medicare patients crowded out?
2) Do narrow networks -> better matching of patients to physicians?
3) Copays reduce office visits for some -- does access improve for others?
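The toy sketch promised above, for example 1): a hypothetical physician with fixed weekly capacity who fills slots with the highest-reimbursement patients first, so a Medicare reimbursement increase reallocates visits rather than creating them. All numbers are invented.

```python
# Toy model: a physician has fixed capacity, so higher Medicare reimbursement
# crowds out other patients rather than expanding total visits.
def allocate_visits(capacity, demand):
    """Fill fixed capacity with the highest-reimbursement patients first.

    demand: dict mapping payer -> (patients seeking visits, reimbursement per visit)
    Returns dict mapping payer -> visits actually supplied.
    """
    served = {}
    remaining = capacity
    for payer, (patients, _) in sorted(demand.items(), key=lambda kv: -kv[1][1]):
        served[payer] = min(patients, remaining)
        remaining -= served[payer]
    return served

capacity = 100  # visits per week, fixed in the short run (hypothetical)

before = allocate_visits(capacity, {"medicare": (60, 80), "private": (70, 100)})
after = allocate_visits(capacity, {"medicare": (60, 120), "private": (70, 100)})

print("Before Medicare raise:", before)  # private patients fill slots first
print("After Medicare raise: ", after)   # Medicare patients crowd out private ones
```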
Mar 30, 2023 11 tweets 3 min read
What is especially preposterous about this is that we know preventive care is exactly the kind of thing that health insurance markets will underprovide if not properly regulated -- so this ruling says, "It's not legal to have well-functioning health insurance markets." We know that cost-sharing leads people to indiscriminately cut back on care. They don't just cut back marginal stuff -- they also do less of valuable stuff. We saw this in the RAND experiment (below):
Mar 27, 2023 12 tweets 4 min read
To be clear, I would distinguish between:
1) It makes sense to invest resources in incentivizing & rewarding AI safety (as judged by experts)
2) We should regulate AI to slow down its development until our understanding "catches up."

The value of rewarding innovations likely to make AI safer seems very high to me given recent developments (although I completely agree with everyone who emphasizes *uncertainty* about both AGI timelines and consequences).
Mar 25, 2023 18 tweets 3 min read
What regulatory options make sense to reduce risk from AI? My tentatively preferred option is to allocate $100 billion a year or more for rewards and grants for AI safety innovations, assessed by a board of relevant CEOs and AI researchers (i.e., people with inside info).

I'm not at all sure this is the right solution -- but I'm confident this is a regulatory problem that economists and policymakers urgently need to attend to. Don't be misled if you don't like EAs or rationalists.
Feb 13, 2023 14 tweets 3 min read
While it would certainly be nice to have more RCTs, this article misses the key point, @LizHighleyman: there are powered studies -- like our study in Bangladesh and many quasi-experimental studies -- which find effects. There are underpowered studies that don't find effects.

The underpowered studies often have extremely low compliance. You might say, "Ah, doesn't this mean masks don't work in practice because no one complies?" No, because sometimes there is high compliance -- but it's not achieved through the methods in the underpowered RCTs!
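To see why low compliance guts power, here is a back-of-the-envelope sketch; the infection rates, effect size, and compliance changes are hypothetical assumptions, not numbers from any of the studies discussed.

```python
# Back-of-the-envelope: how compliance dilutes the detectable effect in a mask RCT.
# All numbers below are hypothetical assumptions for illustration only.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_infection_rate = 0.05  # assumed infection rate in the control group
true_effect_if_worn = 0.20      # assumed 20% relative risk reduction for actual mask wearers

for extra_compliance in (0.30, 0.05):  # change in mask-wearing induced by the intervention
    # The intention-to-treat effect is the true effect diluted by the compliance change.
    treated_rate = baseline_infection_rate * (1 - true_effect_if_worn * extra_compliance)
    h = proportion_effectsize(baseline_infection_rate, treated_rate)
    n_per_arm = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.8)
    print(f"compliance change {extra_compliance:.0%}: need ~{n_per_arm:,.0f} people per arm")
```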
Feb 7, 2023 11 tweets 2 min read
Traffic seems like an area where some form of social impact bond might be very beneficial. In other words, the government should commit to paying private firms a portion of the value they create if they can prove they can save people a ton of time.

For example, how frequently are traffic light timings updated in real time in response to GPS data from Waze, Google, etc...?
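A minimal sketch, with invented numbers, of how the payout on such a bond might be computed: the government pays the firm a share of the monetized time savings it can credibly demonstrate.

```python
# Hypothetical social-impact-bond payout for verified traffic-time savings.
commuters = 500_000          # assumed commuters affected per day
minutes_saved_per_trip = 3   # assumed, verified by an independent evaluation
value_of_time = 20.0         # assumed value of travel time, $/hour
working_days = 250
government_share = 0.25      # assumed fraction of created value paid to the firm

annual_value = commuters * (minutes_saved_per_trip / 60) * value_of_time * working_days
payout = government_share * annual_value
print(f"Value of time saved: ${annual_value:,.0f}/yr; firm payout: ${payout:,.0f}/yr")
```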
Jan 31, 2023 11 tweets 2 min read
This is a totally misleading assessment that mostly just repeats the exact same errors they made in 2020 (the vast majority of the studies reviewed are influenza studies from before that period). There are two main deficiencies:
a) The vast majority of these studies used only self-reported mask-wearing data. In most, there was likely almost no change in mask-wearing between treatment and control groups!
b) Community-level and individual-level effects are conflated.
Jan 27, 2023 7 tweets 2 min read
Would love to see the before and after version of something like the below graph from pubs.aeaweb.org/doi/pdfplus/10…. In other words, are we better aligning the amount we pay for drugs with their value?

In a very simple model where every drug has the same value for everyone and we ignore substitutes, etc., dynamic incentives are optimized if the price that pharma companies receive is exactly the value generated (then PDV(profits) = PDV(social surplus)).
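A minimal numeric sketch of that alignment condition, with invented numbers: when the price received each period equals the value generated, the firm chooses to develop the drug exactly when a social planner would.

```python
# Toy illustration: if the price pharma receives equals the value generated,
# the firm's develop/don't-develop decision matches the social planner's.
# All numbers are invented for illustration.
r = 0.05                 # discount rate (assumed)
years = 15               # horizon over which the drug is sold (assumed)
rnd_cost = 1.5e9         # upfront R&D cost (assumed)
value_per_year = 2.0e8   # social value the drug creates each year (assumed)

def pdv(flow_per_year, years, r):
    """Present discounted value of a constant annual flow."""
    return sum(flow_per_year / (1 + r) ** t for t in range(1, years + 1))

price_per_year = value_per_year  # the alignment condition: price = value generated

firm_develops = pdv(price_per_year, years, r) > rnd_cost
socially_worthwhile = pdv(value_per_year, years, r) > rnd_cost
print(f"PDV(profits)        = ${pdv(price_per_year, years, r):,.0f}")
print(f"PDV(social surplus) = ${pdv(value_per_year, years, r):,.0f}")
print(f"Firm develops: {firm_develops}; socially worthwhile: {socially_worthwhile}")
```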
Dec 17, 2022 5 tweets 1 min read
I am very excited about this project and think this is a great opportunity for RAs interested in health or development. The idea is to understand for which patients high-skill providers add value and when a lower-skill provider + an AI tool is as good as a physician. In our current design (pre-pilot), patients will be randomized to providers of different skill levels (community health workers, nurses, local physicians) with or w/o an AI designed to imitate docs. Then the same patients will see a second provider, a local physician.
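For concreteness, here is a hypothetical sketch of the first-stage assignment in a design like this; the arm labels, shares, and seed are illustrative assumptions, not the study's actual protocol.

```python
# Hypothetical first-stage randomization: each patient is assigned a provider
# skill level and, independently, whether that provider gets the AI tool.
import numpy as np

rng = np.random.default_rng(seed=0)
n_patients = 12

provider_types = ["community_health_worker", "nurse", "local_physician"]
providers = rng.choice(provider_types, size=n_patients)  # equal shares (assumed)
ai_tool = rng.choice([True, False], size=n_patients)     # cross-randomized (assumed)

for i, (prov, ai) in enumerate(zip(providers, ai_tool)):
    print(f"patient {i:2d}: first visit -> {prov:25s} AI tool: {ai}; "
          f"second visit -> local_physician (benchmark)")
```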
Aug 31, 2022 8 tweets 2 min read
I think @shengwuli is right here and this point is underappreciated. It's just as much of a mistake to say there is no such thing as objectivity or neutrality as it is to say that analysts can be perfectly neutral and analyze policy without any normative assumptions at all. Surely, we can recognize the difference between a researcher who is mostly trying to figure out if costs exceed benefits and an advocate with strong incentives to decide in one direction who occasionally lets themselves be swayed by the evidence only when it is incontrovertible.
Aug 28, 2022 19 tweets 5 min read
Direct analysis of the costs and benefits of alternative policies is undersupplied in the public debate relative to speculation about politics. We need more of the former and less of the latter, @tylercowen.

Tyler's response here strikes me as (uncharacteristically?) overconfident about his ability to forecast politics. If Congress today debated a minimum wage, would labor unions largely get their way, or would Kyrsten Sinema push for a small increase?
Aug 22, 2022 32 tweets 6 min read
I thought I'd try to reconstruct Bryan's discussion with a 13-year-old about the minimum wage based on the below tweet:

(@bryan_caplan let me know if anything is inaccurate)

BRYAN: People want to work, but only if they get paid enough. Firms want to hire workers, but only if they're not too expensive. The market wage is the wage at which the # of workers that firms want to hire and the # of workers who want to work is the same.
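To put the verbal definition in symbols: with linear labor supply and demand curves (parameters invented here), the market wage is where the two quantities are equal; a binding wage floor above it leaves some would-be workers unhired.

```python
# Toy linear labor market illustrating "the wage where # hired = # willing to work".
# Curve parameters are invented for illustration.
def labor_supplied(wage):
    return 10 * wage          # workers willing to work at this wage

def labor_demanded(wage):
    return 300 - 5 * wage     # workers firms want to hire at this wage

# Equilibrium: 10w = 300 - 5w  =>  w* = 20, employment = 200.
w_star = 300 / 15
print(f"Market wage: {w_star:.0f}, employment: {labor_supplied(w_star):.0f}")

# A binding minimum wage above w* (assumed 25): hiring is demand-determined.
min_wage = 25
hired = min(labor_supplied(min_wage), labor_demanded(min_wage))
unemployed = labor_supplied(min_wage) - hired
print(f"At a minimum wage of {min_wage}: hired {hired:.0f}, "
      f"excess labor supply {unemployed:.0f}")
```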
Aug 9, 2022 4 tweets 1 min read
Does anyone know if there is a recent article breaking down sector by sector how US healthcare spending compares to other countries? I use a McKinsey article from 2011 in my class (weirdly no longer available online?) that has charts like:
Aug 9, 2022 13 tweets 2 min read
An often overlooked fact in philosophy and political science: every human being ever born is too stupid to remember or account for why they have their current beliefs. This explains a lot of persistent disagreement. If you have a political commitment, it *feels* internally like this commitment is supported by an overwhelming number of reasons. It's been confirmed again and again by fact after fact and experience after experience.