1/ I finally read Leopold Aschenbrenner's essay series on AI: Situational Awareness
Everyone, regardless of your interest in AI, should read this.
I took notes; they're sloppy, but I figured I'd share.
Welcome to the future:
2/ from gpt-4 to AGI: counting the OOMs
- ai progress is rapid. gpt-2 to gpt-4 went from preschooler to smart high schooler in 4 years
- we can expect another jump like that by 2027. this could take us to agi
- progress comes from 3 things: more compute, better algorithms, and "unhobbling" (making models less constrained)
- compute is growing ~0.5 orders of magnitude (OOMs) per year. that's about 3x faster than moore's law
- algorithmic efficiency is also growing ~0.5 OOMs/year. this is often overlooked but just as important as compute
- "unhobbling" gains are harder to quantify but also huge. things like RLHF and chain-of-thought reasoning
- we're looking at 5+ OOMs of effective compute gains in 4 years. that's another gpt-2 to gpt-4 sized jump (rough math sketched after this list)
- by 2027, we might have models that can do the work of ai researchers and engineers. that's agi (!!)
- we're running out of training data though. this could slow things down unless we find new ways to be more sample efficient
- even if progress slows, it's likely we'll see agi this decade. the question is more "2027 or 2029?" not "2027 or 2050?"
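my note: the OOM math here is simple enough to sanity-check. a rough python sketch using the essay's estimates (the unhobbling figure especially is my own loose placeholder, not the essay's number):

```python
# rough effective-compute arithmetic, using the essay's estimates
compute_ooms_per_year = 0.5   # physical compute scale-up (~3x per year)
algo_ooms_per_year = 0.5      # algorithmic efficiency gains
years = 4                     # roughly gpt-4 (2023) -> 2027

core_ooms = (compute_ooms_per_year + algo_ooms_per_year) * years  # 4.0
unhobbling_ooms = 1.0         # RLHF, chain-of-thought, agents... very rough

total_ooms = core_ooms + unhobbling_ooms
print(f"~{total_ooms:.0f} OOMs of effective compute (~{10**total_ooms:,.0f}x)")
# -> ~5 OOMs of effective compute (~100,000x), a gpt-2 -> gpt-4 sized jump
```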
3/ from AGI to superintelligence: the intelligence explosion
- once we have agi, progress won't stop there. we'll quickly get superintelligence
- we'll be able to run millions of copies of agi systems. they'll automate ai research
- instead of a few hundred researchers at a lab, we'll have 100 million+ working 24/7. this could compress a decade of progress into less than a year (back-of-envelope after this list)
- we might see 5+ OOMs of algorithmic gains in a year. that's another gpt-2 to gpt-4 jump on top of agi
- there are some potential bottlenecks, like limited compute for experiments. but none seem like enough to definitively slow things down
- superintelligent ai will be unimaginably powerful. it'll be qualitatively smarter than humans, not just faster
- it could solve long-standing scientific problems, invent new technologies, and provide massive economic and military advantages
- we could see economic growth rates of 30%+ per year. multiple economic doublings in a year is possible
- the intelligence explosion and immediate aftermath will likely be one of the most volatile periods in human history
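my note: the "compressed decade" claim is mostly multiplication. a sketch with essay-flavored numbers (copy count, speed multiple, and lab headcount are all my assumptions, not the essay's exact figures):

```python
# back-of-envelope for automated ai research; every number here is
# an assumption in the spirit of the essay, not a measurement
lab_headcount = 300         # order of magnitude for a frontier lab today
agi_copies = 10_000_000     # copies runnable on projected inference compute
speed_multiple = 10         # each copy working ~10x human speed, 24/7

researcher_equivalents = agi_copies * speed_multiple  # 100,000,000
print(f"{researcher_equivalents:,} researcher-equivalents, "
      f"~{researcher_equivalents // lab_headcount:,}x today's effort")
# -> 100,000,000 researcher-equivalents, ~333,333x today's effort

# if ~300 researchers sustain ~0.5 OOMs/year of algorithmic progress,
# even steeply diminishing returns on 100m+ equivalents make "a decade
# of progress (5+ OOMs) in under a year" look arithmetically plausible
```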
4/ the challenges: racing to the trillion dollar cluster
- we're headed for massive ai compute buildouts. individual training clusters could cost $100b+ by 2028
- by 2030, we might see $1t+ clusters requiring 20%+ of us electricity production (sanity check after this list)
- overall ai investment could hit $1t/year by 2027 and $8t/year by 2030
- nvidia datacenter revenue is already at a $90b/year run rate. that's just the beginning
- big tech capex is exploding. microsoft and google will likely do $50b+ each in 2024, mostly for ai
- ai revenue is growing fast too. openai went from $1b to $2b run rate in 6 months
- we could see a big tech company hit $100b/year in ai revenue by 2026
- power is becoming the main constraint. US electricity production has barely grown in a decade
- the US could solve this with natural gas. we have abundant supply and could build out capacity fast (my note: i wonder if bitcoin miners can help here?)
- chip production will need to scale massively too. TSMC might need to build dozens of new $20b fabs
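my note: the 20%+ electricity claim checks out on the back of an envelope. the cluster size is the essay's projection; the US generation figure is roughly the recent annual actual:

```python
# sanity check on "20%+ of us electricity production"
cluster_gw = 100                # essay's ~$1t cluster estimate for ~2030
hours_per_year = 8760
cluster_twh = cluster_gw * hours_per_year / 1000  # 876 TWh/yr if run flat out

us_generation_twh = 4200        # approx. annual us electricity generation
share = cluster_twh / us_generation_twh
print(f"{cluster_twh:.0f} TWh/yr = {share:.0%} of us generation")
# -> 876 TWh/yr = 21% of us generation, consistent with "20%+"
```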
5/ the challenges: lock down the labs - security for AGI
- ai lab security is currently terrible. we're basically handing agi secrets to china on a silver platter
- algorithmic secrets are worth 10x+ more compute. we're leaking these constantly
- model weights will be critical to protect too. stealing these could let others instantly catch up
- we need government-level security for agi. private companies can't handle state-level threats
- we'll need airgapped datacenters, extreme vetting, working from SCIFs, and more
- this will slow down research some, but it's necessary. the national interest is more important than any one lab's edge
- if we don't fix this in the next 12-24 months, we'll likely leak key agi breakthroughs to china
- this is possibly the most important thing we need to do today to ensure agi goes well
my note: i don't hear many people talking about this, and there's no chance the current government is prioritizing it (or are they?). either way, they should be
6/ the challenges: superalignment
- controlling ai systems much smarter than us is an unsolved problem. current techniques won't scale
- reinforcement learning from human feedback (RLHF) works great now but will break down for superintelligent systems
- we need to solve the "handing off trust" problem. how do we ensure superhuman ais do what we want?
- there are some promising research directions like scalable oversight and generalization studies
- we'll likely need to automate alignment research itself to keep up with ai capabilities
- the intelligence explosion makes this extremely tense. we might go from human-level to vastly superhuman in months (my note: scary. those months will feel crazier than march 2020 covid)
- we need multiple layers of defense beyond just alignment, like security, monitoring, and targeted capability limitations
- getting this right will require extreme competence and willingness to make hard tradeoffs
- we're not nearly as prepared for this as we should be. we're counting way too much on luck
7/ the challenges: the free world must prevail
- superintelligence will give a decisive military advantage. it could be as big as the jump from conventional to nuclear weapons
- china isn't out of the game yet. they have a clear path to being competitive
- china can likely match us on compute. they've demonstrated 7nm chip production
- china may be able to outbuild us on power. they've roughly doubled electricity capacity in a decade while the us has been flat
- if we don't improve security, china will likely steal our algorithmic secrets
- an authoritarian power getting superintelligence first would be catastrophic for freedom and democracy
- maintaining a healthy lead is crucial for safety too. a close race increases risks of cutting corners and accidents
- we need a 1-2 year lead, not just 1-2 months, to have margin for getting safety right
- the us needs to treat this as a national security priority. we're not taking it seriously enough yet
8/ the project
- the us government will likely get heavily involved in agi development by 2027/2028
- we'll probably see some form of "agi manhattan project" (my note: i'm skeptical the US can run something like this these days but hope to be proven wrong)
- private companies aren't equipped to handle the national security implications of superintelligence
- we'll need government involvement for security, safety, stabilizing international situations, and more
- this doesn't mean literal nationalization. it might look more like the relationship between the DoD and defense contractors
- congress will likely need to appropriate trillions for chips and power buildout
- we'll probably see a coalition of democracies formed to develop superintelligence together
- civilian applications of the technology will still flourish, similar to how nuclear energy followed nuclear weapons
- the core agi research team will likely move to a secure location. the trillion-dollar cluster will be built at record speed
9/ parting thoughts (pt 1)
- if we're right about all this, the world will be utterly transformed by the end of the 2030s
- we need to take an "agi realist" perspective. recognize the power and peril, ensure US leadership, and take safety seriously
- the fate of the world may rest on a few hundred people with situational awareness right now
- we owe our peace and freedom to american economic and military preeminence. we must maintain that with agi
- the biggest risk may be that superintelligence enables new means of mass destruction that proliferate widely
- we need a healthy lead by democratic allies to have any hope of navigating the challenges ahead safely
- this is likely to be one of the most unstable international situations ever seen. first-strike incentives could be enormous
- there's an eerie convergence of agi timelines (~2027) and potential taiwan invasion timelines (china ready by 2027)
10/ parting thoughts (pt. 2)
- we need to rapidly lock down ai labs before we leak key breakthroughs in the next 12-24 months
- we must build the compute clusters in the US, not in dictatorships offering easy money
- american ai labs have a duty to work with intelligence agencies and the military. we need to build ai for defense
- the next few years in ai will feel like covid in early 2020. most won't see it coming, then everything will change very fast
- by 2025/2026, ai will likely drive $100b+ annual revenues for big tech. we'll see $10t valuations
- full-fledged ai agents automating software engineering and other jobs will likely start appearing by 2027/2028
- a consensus that we're on the cusp of agi will form as empirical results keep shocking people
- we'll likely see the first truly terrifying ai capabilities demos, like helping novices make bioweapons
- somewhere around 2026/2027, the mood in washington will become somber as people viscerally feel what's happening
- the core agi research team will likely move to a secure location by late 2026 or 2027
- the project will face immense challenges: building agi fast, fending off china, managing millions of ais, avoiding catastrophe
11/ parting thoughts (pt. 3)
- whoever is put in charge will have one of the most difficult jobs in human history
- if we make it through, it will be the most important thing we ever did
- the stakes are no less than the survival of freedom and democracy, and possibly humanity itself
- we must take the possibility of an intelligence explosion as seriously as scientists took the possibility of nuclear chain reactions
- many didn't believe szilard about nuclear chain reactions at first. we can't make the same mistake with agi
- we're developing the most powerful technology mankind has ever created. we need to treat it that way
- basically nothing else we do will matter if we don't get agi security and safety right
- this is all happening whether we like it or not. the only question is if we'll be prepared
- we need a massive increase in effort on alignment research, security, and preparing for the challenges ahead
- the government needs to wake up and start treating this as the national security priority it is
- we need to be willing to make hard tradeoffs, like using natural gas for power even if it conflicts with climate goals
- we must ensure the clusters are built in america or close allies, not under the control of dictatorships
- the exponential is in full swing now. 2023 was "ai wakeup." brace for the g-forces
- the most staggering techno-capital acceleration in history has been set in motion. the next few years will be wild
12/ hats off to @leopoldasch for this phenomenal series
1/ Spent the day at the Coinbase State of Crypto summit.
10/10 event.
Many pensions, endowments, brokerages, asset managers, banks, etc. in attendance. Leaving very optimistic.
Scribbled notes during some talks, sharing here:
2/ Brett Tejpaul (Coinbase's head of institutional) opened the event.
- there's a huge generational wealth transfer ($70 trillion) underway from old to young. 90% of this younger generation is disillusioned with the financial system
- 1/3 of the top 100 hedge funds in the world are already onboarded with Coinbase
3/ First panel with Alesia Haas (Coinbase CFO) and Samara Cohen (CIO of ETF/Index investments at BlackRock)
Baller panel, maybe my favorite. Lot of notes.
Samara:
- Got pitched 5 years ago on doing Bitcoin ETFs. Said there was no need. Institutional demand for Bitcoin eventually forced them to do it
- Today 80% of their Bitcoin ETF is bought by self-directed investors through their own brokerages... still a huge wave of institutional capital coming
- Financial advisors are still wary, but being wary is their job
- Can't comment on the ETH ETF because of the active filing
- Also excited about tokenization... demand for tokenized funds (e.g. a tokenized short-duration treasury fund) is coming from crypto-native firms doing treasury management
- We saw the digitization of every asset. Now we're going to see the tokenization of every asset; it feels obvious
- Tokenized treasuries shouldn't compete with stablecoins. Stablecoins are for payments. Money market funds are a liquid investment strategy
- A few years ago we thought private permissioned blockchains would lead. We now realize public blockchains are better since they avoid fragmenting liquidity
- Crypto has a branding problem. The term "RWA" means something totally different in the banking world, and it implies crypto isn't real-world assets. We need to stop using "RWA"
Alesia:
- 40% of institutional clients adopt 3+ products in their first quarter
- Net inflows of $12b in 3 months, the fastest growth in history
- Both Coinbase and BlackRock have many clients sitting on the sidelines waiting for regulatory clarity
- Does Coinbase support Trump? We're here to support the 52m Americans who hold crypto, and they can vote however they want to vote