Daniel Eth (yes, Eth is my actual last name)
Researching effects of automated AI R&D | pro-America, pro-tech, & pro-AI safety
Sep 16 19 tweets 3 min read
Recently, major AI industry players (incl. a16z, Meta, & OpenAI’s Greg Brockman) announced >$100M in spending on pro-AI super PACs. This is an attempt to copy a wildly successful strategy from the crypto industry, to intimidate politicians away from pursuing AI regulations. 🧵

First, some context. This is not normal. Only one industry has ever spent this much on election spending: the crypto industry spent similar sums in 2024 through the super PAC Fairshake. (The only super PACs that spend more are partisan & associated with one party/candidate.)
Aug 23 7 tweets 3 min read
My sense is there’s generally a power law between “inputs” and “outputs” to technological progress. In this context, that manifests as “exponential increases in inputs over time yield a smooth exponential increase in time horizons over time” (i.e., a straight line on a semi-log plot).
🧵 Why should there be a power law? We actually see this sort of dynamic come up all the time in technological progress: from experience curve effects (think declining PV prices) to GDP growth to efficiency improvements in various AI domains over time to AI scaling laws.
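The “straight line on a semi-log plot” claim can be sketched with a toy model (my illustration, not data from the thread): if inputs grow exponentially over time and outputs follow a power law in inputs, then log-output grows linearly in time. The exponent and growth rate below are hypothetical.

```python
import math

# Toy model (illustrative numbers): inputs grow exponentially over time,
# and outputs follow a power law in inputs, output = inputs ** alpha.
# Then log(output) = alpha * log(inputs) grows linearly in time,
# i.e. a straight line on a semi-log plot.
alpha = 0.4   # hypothetical power-law exponent
growth = 2.0  # inputs double each period

def output_at(t):
    inputs = growth ** t    # exponential input growth
    return inputs ** alpha  # power-law conversion to output

# Successive differences of log-outputs are constant -> exponential output.
logs = [math.log(output_at(t)) for t in range(1, 6)]
diffs = [round(logs[i + 1] - logs[i], 6) for i in range(len(logs) - 1)]
print(diffs)  # every difference equals alpha * log(growth)
```

Swapping in other values of `alpha` or `growth` changes the slope of the line but not its straightness, which is the point of the power-law framing.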
May 24 8 tweets 2 min read
Imho this point is overstated. First off, algorithmic efficiency improvements have been large (a substantial fraction of the compute scale-up factor) and can still allow for effective scale-up. Second, the “unhobblings” could take multiple years.

On the first point: Epoch finds that in language models, pretraining algorithmic progress has been around half as impactful as compute scale-up. Naively, if compute scale-up stopped, progress would slow down by 3x. This is a decent amount, but not enough to say “2030 or bust”.
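The 3x figure follows from back-of-envelope arithmetic, sketched below with illustrative units (assuming “half as impactful” means algorithmic progress contributes half as much as compute scale-up to the overall rate of progress):

```python
# Back-of-envelope check of the "slow down by 3x" claim.
# Units are arbitrary; only the 2:1 ratio matters.
compute_contribution = 2.0  # compute scale-up
algo_contribution = 1.0     # algorithmic progress: half as impactful

total_rate = compute_contribution + algo_contribution  # progress with both
algo_only_rate = algo_contribution                     # compute scale-up stops

slowdown = total_rate / algo_only_rate
print(slowdown)  # → 3.0
```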
Mar 26 14 tweets 4 min read
New report:
“Will AI R&D Automation Cause a Software Intelligence Explosion?” 

As AI R&D is automated, AI progress may dramatically accelerate. Skeptics counter that hardware stock can only grow so fast. But what if software advances alone can sustain acceleration? 🧵

If AI R&D is fully automated, there will be a positive feedback loop: AI performs AI R&D -> AI progress -> better AI does AI R&D -> etc.

Empirical evidence suggests this feedback loop could cause an intelligence explosion despite diminishing returns.

forethought.org/research/will-…
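The explosion-vs-fizzle question can be caricatured in a few lines (my toy sketch, not the report’s actual model): if each doubling of capability makes the next doubling take 1/r as long, then a returns parameter r > 1 gives accelerating doublings, while r < 1 gives decelerating ones — diminishing returns only kill the feedback loop if they are strong enough.

```python
# Toy caricature (illustrative, not the report's model) of the software
# feedback loop: better AI speeds up the research that improves AI.
# If each capability doubling makes the next doubling take (1/r) as long,
# then r > 1 -> shrinking doubling times (explosion-like acceleration),
# and r < 1 -> growing doubling times (progress fizzles out).
def doubling_times(r, n=5, first=1.0):
    times, t = [], first
    for _ in range(n):
        times.append(round(t, 4))
        t /= r  # next doubling takes 1/r times as long as this one
    return times

print(doubling_times(1.5))  # shrinking doubling times: acceleration
print(doubling_times(0.8))  # growing doubling times: fizzle
```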
Apr 20, 2023 4 tweets 2 min read
I created graphs based on the AI X-risk survey results from Zach Stein-Perlman, @benwr, & @KatjaGrace of @AIImpacts. These figures illustrate the distribution of survey responses. (Note: I rounded responses to the nearest percent, & one response of "<1%" was rounded down to 0%.)

Here's the graph for the other wording:
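The rounding convention from the note above can be sketched as follows (the raw responses here are hypothetical examples, not actual survey data):

```python
# Normalize a survey response per the stated convention: numeric answers
# round to the nearest percent, and the single "<1%" answer maps to 0.
def normalize(response):
    if response == "<1%":
        return 0
    return round(float(response.rstrip("%")))

raw = ["<1%", "2.4%", "10%", "33.3%"]  # hypothetical raw responses
print([normalize(r) for r in raw])  # → [0, 2, 10, 33]
```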
Mar 31, 2023 4 tweets 1 min read
When “List of Lethalities” and/or “Death with Dignity” first came out (honestly, I forget which one), my initial reaction was irritation. I felt like the argument could have been phrased differently, without being so angry, and I worried about a backlash. But pretty quickly...

...my reaction reversed. My sense is people ended up taking the piece as a wakeup call to focus a bit more on important parts of the problem, and it expanded discourse in helpful directions.
Mar 31, 2023 6 tweets 2 min read
"Sure, maybe you can get liberals on board with your government regulation, but you'll never get pro-market conservatives on board"
Pro-market conservatives: "Yes, maybe you'll get them to do *something* about AI, but it's such a complicated issue that they'll totally misunderstand the issues at play"
Mar 29, 2023 4 tweets 1 min read
I really wish FLI did a better job of gathering input from the alignment community before doing big things.

(Note: community input does not have to mean "we get 100% consensus on everything", but it should include "we hear arguments against our current thing" and may include "we only include aspects that the median inputter approves of".)
Feb 13, 2023 15 tweets 5 min read
Excited to announce an explainer I wrote about AI alignment! I'm really happy with how this turned out – I think the compositional logic I used in the piece makes the arguments particularly clear.
(Thread)
agisafetyfundamentals.com/alignment-intr…

In it, I break down the case into 5 points:
1) Advanced AI is possible
2) Advanced AI might not be that far away
3) Advanced AI might be difficult to direct
4) Poorly-directed advanced AI could be catastrophic for humanity
5) There are steps we can take now to reduce the danger
Dec 9, 2022 4 tweets 1 min read
IMHO, it's a mistake for alignment researchers to talk about aligning AGI with "humanity" or "human values" or "morality" – talk should be about aligning AGI with its designers' intent. Yes, those other things are important, but they involve different work than alignment research.

Naturally, talk about aligning AGI with "morality"/"human values" leads to the question "according to who?" Talk about aligning it with the intents of its designers makes clear why we should focus on technical problems at all, and ALSO this makes clear that...