Jason Abaluck
Professor of Economics at Yale SOM
Nov 15 6 tweets 2 min read
I've updated considerably on the size of the negative externality when scientists declare their political allegiances. This doesn't mean not to do it, and *certainly* doesn't mean not researching controversial issues, but it does mean being mindful of the downside of overt advocacy. The National Bureau of Economic Research has good norms about avoiding directly prescriptive language. Your paper can analyze single-payer healthcare and report costs and benefits given the model you wrote down without then saying, "And therefore we should do this!"
Nov 12 4 tweets 2 min read
The total payroll of the federal government is about $110 billion a year buff.ly/3CnrMCx

Federal government spending was $6.1 trillion buff.ly/3YLb0Vf

You cannot meaningfully shrink the federal government by firing "unelected bureaucrats"
What is the money spent on? Medicare, Medicaid, and Social Security are 45%. Defense and debt payments are 28%. The VA, education, and transportation are 15%. SNAP, UI, child nutrition, and the earned income tax credit are 7.5%. The remainder is stuff like military pensions.
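The arithmetic behind these two threads can be checked in a few lines. All figures below are the thread's own round numbers (payroll ~$110B, spending ~$6.1T, and the quoted category shares), not official budget data:

```python
# Rough check of the federal-budget arithmetic in the thread.
# All figures are the tweet's round numbers, not official budget data.
total_spending = 6.1e12   # total federal spending, ~$6.1 trillion
payroll = 110e9           # total federal payroll, ~$110 billion

# Payroll as a share of spending: even firing every federal
# employee would cut spending by well under 2%.
payroll_share = payroll / total_spending
print(f"Payroll share of spending: {payroll_share:.1%}")

# Where the money actually goes (shares quoted in the thread).
shares = {
    "Medicare/Medicaid/Social Security": 0.45,
    "Defense and debt payments": 0.28,
    "VA, education, transportation": 0.15,
    "SNAP, UI, child nutrition, EITC": 0.075,
}
for category, share in shares.items():
    print(f"{category}: ${share * total_spending / 1e12:.2f}T")
print(f"Remainder (military pensions, etc.): {1 - sum(shares.values()):.1%}")
```

The point falls out immediately: payroll is under 2% of spending, so "firing bureaucrats" cannot meaningfully shrink the budget.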
Oct 24 17 tweets 3 min read
There is a genre of Twitter post that goes like:
1) I, the critiquing hero, have read a paper
2) I discovered assumptions
3) These assumptions might be wrong

Well, no shit. But can alternative stories plausibly explain the data?

@justinTpickett @eigenrobot I pick on this thread to illustrate how misleading these critiques can be -- they are often as ideologically motivated as the papers they critique.

Let's "see what's going on under the hood" of this thread:
Sep 26 6 tweets 2 min read
I think of this as the "mathematician's fallacy" -- or what the general public might think of as the "academic's fallacy": the belief that it's not worth investing our time until we can apply a rigorous toolkit to draw definitive conclusions. There are huge and policy-relevant questions now or in the near future (e.g. SB 1047). The world is not going to wait for our toolkit to catch up. We need to do the best we can with the tools we have available, and develop new ones appropriate to the urgency of the question.
Sep 25 14 tweets 3 min read
As an applied economist, I'm 95% with Ben here. I think a lot of work that applied economists are doing on AI is misconceived and there needs to be a more radical reconsideration of underlying theory and models before anything useful will emerge. The kind of work by applied economists that I expect will *publish well* over the next 5 or so years are studies that say things like, "We study the introduction of the PC/electricity/cars/agriculture/etc... and use this to learn about skill-biased technological change."
Aug 24 8 tweets 2 min read
This is one of the more interesting articles I have read this year: the former dean of Harvard Medical School had a start-up in the late 1980s based on using GLP-1s to treat diabetes and obesity, but Pfizer stopped funding it despite promising early results. Two lessons: First, they apparently had promising internal scientific results that were never shared, even after the project was abandoned. How many other groundbreaking results from abandoned projects are gated within pharma companies and never see the light of day?
Jul 23 16 tweets 3 min read
If richer people work less because they are richer, this does not tell us about the (net) benefits of making poor people richer.

It doesn't inform how much we should redistribute or whether we should do so via a UBI.

Income effects are not substitution effects! Some UBI proponents say a UBI will fix market failures: people who don't have access to credit will start businesses; people afraid to seek medical care will get needed care, improving health. The (excellent) study above finds little evidence for these claims.
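The income-vs-substitution distinction can be made concrete with a toy labor-supply model. This is a minimal sketch, not the study's model: Cobb-Douglas utility over consumption and leisure, with made-up parameter values. A lump-sum transfer (a UBI) moves hours only through the income effect; it tells you nothing about the substitution effect of changing wages:

```python
# Toy Cobb-Douglas labor-supply model separating the income effect
# from the substitution effect. All parameter values are illustrative.
ALPHA = 0.6   # taste for consumption relative to leisure
T = 100       # weekly time endowment (hours)

def optimal_hours(wage: float, nonlabor_income: float) -> float:
    """Hours worked maximizing U = c^ALPHA * leisure^(1-ALPHA)
    subject to c = wage*hours + nonlabor_income.
    Closed form: h* = ALPHA*T - (1-ALPHA)*nonlabor_income/wage,
    floored at zero."""
    return max(0.0, ALPHA * T - (1 - ALPHA) * nonlabor_income / wage)

base = optimal_hours(wage=20, nonlabor_income=0)
# A UBI is a pure income effect: hours fall, but this says nothing
# about how people respond to wage changes (the substitution effect).
with_ubi = optimal_hours(wage=20, nonlabor_income=200)
print(base, with_ubi)  # 60.0 56.0
```

In this toy model a $200/week transfer cuts hours from 60 to 56 purely via the income effect, while the wage (and hence the substitution margin) is untouched -- which is why "richer people work less" is uninformative about redistribution design.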
Jun 1, 2023 15 tweets 3 min read
What will be the impacts of AI on democracy? Some speculation on:
1) Will AIs become trusted information brokers?
2) Will the market be dominated by a small number of corporations and AIs?
3) What will be the impacts of AI on the distribution of power and resources? In this thread, I'm principally concerned with the medium-term impacts, when AI resembles better versions of what we have today, rather than when we have AI with human-level capabilities or better across many domains (timeframe: ).
Jun 1, 2023 19 tweets 4 min read
Here is my summary of forecasted time to AGI/human-level AI from recent surveys or prediction markets (synthesizing and updating recent posts). This 2022 survey suggests a median AGI date of 2059 (aiimpacts.org/2022-expert-su…). In 2016, that date was 2061. Some AI researchers, specializing in AI and cognitive science, think that cognitive scientists better understand the challenges involved and that the above survey focuses on ML researchers ()
May 31, 2023 9 tweets 2 min read
To economists who are minimizing the danger from AI:
1) Although there is uncertainty, most AI researchers expect human-level AI (AGI) within the next 30-40 years (or much sooner)
2) Superhuman AI is dangerous for the same reason aliens visiting Earth would be dangerous. If you don't agree that AGI is coming soon, you need to explain why your views are more informed than those of expert AI researchers. The experts might be wrong -- but it's irrational to assert with confidence that you know better than them.
May 19, 2023 11 tweets 2 min read
Agree with most of @DAcemogluMIT's thread but not this. The threat is likely not immediate, but it is serious. The danger is not malevolent AI, but AI that is far more capable than humans at a wide range of activities optimizing something that leads to bad outcomes for humans. Intelligence is not binary. Humans are very good at, say, 10,000 cognitive tasks (the specific number is made up). GPT-4 is very good at 50, and superhuman at a handful. Subsequent models will be more capable at a broader range of tasks.
May 5, 2023 13 tweets 3 min read
Using the value of a statistical life seems like a fraught way of making decisions until you consider how much better it is than every alternative that has been proposed. Any particular implementation will involve controversial normative and empirical assumptions -- is efficiency always desirable, will transfers happen to achieve efficiency in practice (no), were people informed about risk when we estimated their willingness to pay, etc...
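To make the VSL approach concrete, here is a minimal sketch of the cost-benefit test it implies. The VSL figure and the example regulation below are hypothetical illustrations, not estimates from any agency:

```python
# Minimal sketch of a VSL-based cost-benefit test. The VSL and the
# regulation numbers are hypothetical, not any agency's estimates.
VSL = 11.0e6   # assumed value per statistical life, in dollars

def passes_cost_benefit(cost: float, deaths_averted: float,
                        vsl: float = VSL) -> bool:
    """A rule passes if monetized mortality benefits exceed its cost."""
    return deaths_averted * vsl > cost

# A hypothetical rule costing $500M that averts 60 expected deaths:
# benefits are 60 * $11M = $660M, so it passes.
print(passes_cost_benefit(cost=500e6, deaths_averted=60))  # True
```

Every input here embeds the contested assumptions in the thread -- whose willingness to pay the VSL reflects, whether efficiency is the right criterion -- but the alternative of deciding with no explicit valuation at all embeds them too, just invisibly.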
Apr 27, 2023 9 tweets 3 min read
Interesting work from @ChadJonesEcon but there are two key additional ingredients I would like to see in such models:
1) Marginal value of $ or time invested in AI safety (as a function of AI progress?)
2) Option value of delay in light of 1) I especially like that @ChadJonesEcon considers the value of potential mortality reductions from AI, which seems essential and overlooked in this debate, and that now needs to be integrated with the considerations above.
Apr 27, 2023 10 tweets 2 min read
One response I haven't seen yet: your existential crisis is probably correct, your research is probably wrong or ill-motivated, and you should fix this by asking better questions, getting better data, or finding a better identification strategy (probably all three). A cardinal and understandable mistake that grad students make is to think: "I have been working on X project for 2 years and it is my first project. If I don't have a paper, I have accomplished nothing. So, I'll keep trying to write a paper I know should not be written."
Apr 4, 2023 10 tweets 2 min read
I expect in the next decade or so many more papers in health economics taking seriously that doctors, hospitals, and (to some degree) insurance plans are rivalrous -- so we need to model extensive margins or comparative advantage to understand systemwide efficiency. Some examples:
1) Medicare increases reimbursement -- are there more doc hours or are non-Medicare patients crowded out?
2) Do narrow networks -> better matching of patients to physicians?
3) Copays reduce office visits for some -- does access improve for others?
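A toy fixed-capacity model illustrates the crowd-out logic behind question 1). This is my own illustrative sketch, not a model from any of these papers: if physician hours are rivalrous and supply is inelastic, a reimbursement increase reallocates hours across payers rather than adding them.

```python
# Toy fixed-capacity model of Medicare crowd-out. All parameters
# are illustrative; physician supply is assumed perfectly inelastic.
H = 40.0  # weekly patient-care hours, assumed fixed

def hours_by_payer(medicare_fee: float, private_fee: float,
                   medicare_demand: float, private_demand: float):
    """Serve the higher-fee segment first, up to its demand (in hours);
    the lower-fee segment gets whatever capacity remains.
    Returns (medicare_hours, private_hours)."""
    if medicare_fee >= private_fee:
        medicare_hours = min(medicare_demand, H)
        private_hours = min(private_demand, H - medicare_hours)
    else:
        private_hours = min(private_demand, H)
        medicare_hours = min(medicare_demand, H - private_hours)
    return medicare_hours, private_hours

# Before a fee bump: private pays more, Medicare gets leftover hours.
print(hours_by_payer(80, 100, medicare_demand=30.0, private_demand=25.0))   # (15.0, 25.0)
# After Medicare fees rise above private: Medicare crowds out private,
# and total hours supplied are unchanged.
print(hours_by_payer(110, 100, medicare_demand=30.0, private_demand=25.0))  # (30.0, 10.0)
```

With H fixed, the fee increase buys no extra doctor hours systemwide -- which is exactly why the extensive margin has to be modeled before concluding anything about access.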
Mar 30, 2023 11 tweets 3 min read
What is especially preposterous about this is that we know preventive care is exactly the kind of thing that health insurance markets will underprovide if not properly regulated -- so this ruling says, "It's not legal to have well-functioning health insurance markets." We know that cost-sharing leads people to indiscriminately cut back on care. They don't just cut back marginal stuff -- they also do less of valuable stuff. We saw this in the RAND experiment (below):
Mar 27, 2023 12 tweets 4 min read
To be clear, I would distinguish between:
1) It makes sense to invest resources in incentivizing & rewarding AI safety (as judged by experts)
2) We should regulate AI to slow down its development until our understanding "catches up" The value of rewarding innovations likely to make AI safer seems very high to me given recent developments (although I completely agree with everyone who emphasizes *uncertainty* both about AGI timelines and consequences).
Mar 25, 2023 18 tweets 3 min read
What regulatory options make sense to reduce risk from AI? My tentatively preferred option is to allocate at least $100 billion a year for rewards and grants for AI safety innovations, assessed by a board of relevant CEOs and AI researchers (i.e. people with inside info). I'm not at all sure this is the right solution -- but I'm confident this is a regulatory problem that economists and policy-makers urgently need to attend to. Don't be misled if you don't like EAs or rationalists.
Feb 13, 2023 14 tweets 3 min read
While it would certainly be nice to have more RCTs, this article misses the key point, @LizHighleyman: there are powered studies -- like our study in Bangladesh and many quasi-experimental studies -- which find effects. There are underpowered studies that don't find effects. The underpowered studies often have extremely low compliance. You might say, "Ah, doesn't this mean masks don't work in practice because no one complies?" No, because sometimes there is high compliance -- but it's not achieved through the methods in the underpowered RCTs!
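Why low compliance guts an RCT can be seen in a back-of-the-envelope power calculation. This is a generic sketch with made-up numbers, not figures from any of the studies discussed: with compliance rate c, the intention-to-treat effect shrinks to c times the true effect, so required sample size scales roughly like 1/c².

```python
# Approximate power of a two-sided z-test for a difference in infection
# rates, with the true effect diluted by partial compliance.
# All numbers are illustrative, not from any actual mask study.
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p_control: float, true_effect: float,
                          compliance: float, n_per_arm: int,
                          alpha: float = 0.05) -> float:
    """Power to detect an intention-to-treat effect of
    compliance * true_effect on an absolute-risk scale."""
    p_treat = p_control - compliance * true_effect
    se = sqrt(p_control * (1 - p_control) / n_per_arm
              + p_treat * (1 - p_treat) / n_per_arm)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = abs(p_control - p_treat) / se
    return 1 - NormalDist().cdf(z_alpha - z_effect)

# A 10% baseline infection rate and a true 2-point absolute reduction:
high = power_two_proportions(0.10, 0.02, compliance=0.9, n_per_arm=3000)
low = power_two_proportions(0.10, 0.02, compliance=0.2, n_per_arm=3000)
print(round(high, 2), round(low, 2))
```

With these illustrative numbers, the same trial design has roughly two-thirds power under high compliance but almost none under 20% compliance -- a null result from the low-compliance trial is close to uninformative about whether masks work when worn.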
Feb 7, 2023 11 tweets 2 min read
Traffic seems like an area where some form of social impact bond might be very beneficial. In other words, the government should commit to paying private firms a portion of the value they create if they can prove they save people a ton of time. For example, how frequently are traffic light timings updated in real time in response to GPS data from Waze, Google, etc.?
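The payout rule being proposed is simple to write down. This is a hypothetical formula with made-up numbers, just to make the incentive concrete:

```python
# Hypothetical payout formula for a traffic social impact bond:
# the government pays the firm a fixed share of the independently
# verified value of commuter time saved. All numbers are made up.
def sib_payment(hours_saved: float, value_of_time: float,
                govt_share: float) -> float:
    """Payment = share * (hours saved * dollar value per hour)."""
    return govt_share * hours_saved * value_of_time

# 1M commuter-hours saved per year, valued at $20/hour, firm keeps 30%:
print(f"${sib_payment(1e6, 20.0, 0.30):,.0f}")  # $6,000,000
```

The design choice doing the work is that payment is contingent on verified time savings, so the firm only profits if the retiming (or whatever intervention) demonstrably works.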
Jan 31, 2023 11 tweets 2 min read
This is a totally misleading assessment that mostly repeats the same errors they made in 2020 (the vast majority of the studies reviewed are influenza studies from before that period). There are two main deficiencies: a) the vast majority of these studies used only self-reported mask-wearing data, and in most there was likely almost no change in mask-wearing between treatment and control groups; b) they conflate community-level and individual-level effects.