Academia rewards hyper-specialization. Scholars narrow their interests until only a handful of people in the world can understand them.
That means you can make ground-breaking discoveries and contributions if you have even an undergrad-level fluency in two or more fields.
There is nothing wrong with specializing in one discipline, and it is certainly possible to make deep and meaningful contributions that way. But there is so much low-hanging fruit, so much untapped potential for world-changing findings, at the boundaries between disciplines.
The asteroid theory of dinosaur extinction was developed by a physicist-geologist pair, Luis & Walter Alvarez. They were father and son. Academia is so bad at enabling interesting dialog across fields that such collaborations are more likely to happen by chance, say at a family dinner table!
Most common objection to interdisciplinary scholars' work: "but what if they're working the system by telling people in field A they're an expert in B, and vice versa?!" Basically, academics are *terrified* we might let an insufficiently clever person sneak into the ivory tower😏
This is a familiar story for everyone who does interdisciplinary work, and explains why most scholars don't. One way around it is for your contribution to be so obviously valuable that it would be silly to reject it. Not easy, but worthwhile to aim for.
Another way to successfully publish interdisciplinary research is to work hard to change the perceived boundaries of your field. A group of junior scholars recently pulled this off in computer security, and it looks like the dam is about to burst:
New essay by @sayashk and me clarifying and deconstructing a slippery concept: We argue that AGI is not a milestone. There is no capability threshold that will lead to sudden impacts.
With the release of OpenAI’s latest model o3, there is renewed debate about whether Artificial General Intelligence has already been achieved. The standard skeptic’s response to this is that there is no consensus on the definition of AGI. That is true, but misses the point — if AGI is such a momentous milestone, shouldn’t it be obvious when it has been built?
In this essay, we argue that AGI is not a milestone. It does not represent a discontinuity in the properties or impacts of AI systems. If a company declares that it has built AGI, based on whatever definition, it is not an actionable event. It will have no implications for businesses, developers, policymakers, or safety. Specifically:
* Even if general-purpose AI systems reach some agreed-upon capability threshold, we will need many complementary innovations that allow AI to diffuse across industries to realize its productive impact. Diffusion occurs at human (and societal) timescales, not at the speed of tech development.
* Worries about AGI and catastrophic risk often conflate capabilities with power. Once we distinguish between the two, we can reject the idea of a critical point in AI development at which it becomes infeasible for humanity to remain in control.
* The proliferation of AGI definitions is a symptom, not the disease. AGI is significant because of its presumed impacts but must be defined based on properties of the AI system itself. But the link between system properties and impacts is tenuous, and greatly depends on how we design the environment in which AI systems operate. Thus, whether or not a given AI system will go on to have transformative impacts is yet to be determined at the moment the system is released. So a determination that an AI system constitutes AGI can only meaningfully be made retrospectively.
The essay has 9 sections:
1. Nuclear weapons as an anti-analogy for AGI
2. It isn’t crazy to think that o3 is AGI, but this says more about AGI than o3
3. AGI won't be a shock to the economy because diffusion takes decades
4. AGI will not lead to a rapid change in the world order
5. The long-term economic implications of AGI are uncertain
6. Misalignment risks of AGI conflate power and capability
7. AGI does not imply impending superintelligence
8. We won’t know when AGI has been built
9. Businesses and policy makers should take a long-term view
Achieving AGI is the explicit goal of companies like OpenAI and much of the AI research community. It is treated as a milestone in the same way as building and delivering a nuclear weapon was the key goal of the Manhattan Project.
This goal made sense as a milestone in the Manhattan Project for two reasons. The first is observability. In developing nuclear weapons, there can be no doubt about whether you’ve reached the goal or not — an explosion epitomizes observability. The second is immediate impact. The use of nuclear weapons contributed to a quick end to World War 2. It also ushered in a new world order — a long-term transformation of geopolitics.
Many people have the intuition that AGI will have these properties. It will be so powerful and humanlike that it will be obvious when we’ve built it. And it will immediately bring massive benefits and risks — automation of a big swath of the economy, a great acceleration of innovation, including AI research itself, and potentially catastrophic consequences for humanity from uncontrollable superintelligence.
In this essay, we argue that AGI will be exactly the opposite — it is unobservable because there is no clear capability threshold that has particular significance; it will have no immediate impact on the world; and even a long-term transformation of the economy is uncertain.
In previous essays, we have argued against the likely disastrous policy interventions that some have recommended by analogizing AGI to nuclear weapons. It is striking to us that this analogy reliably generates what we consider to be incorrect predictions and counterproductive recommendations.
It isn’t crazy to think that o3 is AGI, but this says more about AGI than o3
Many prominent AI commentators have called o3 a kind of AGI: Tyler Cowen says that if you know AGI when you see it, then he has seen it. Ethan Mollick describes o3 as a jagged AGI. What is it about o3 that has led to such excitement?
The key innovation in o3 is the use of reinforcement learning to learn to search the web and use tools as part of its reasoning chain.[1] In this way, it can perform more complex cognitive tasks than LLMs are directly capable of, and can do so in a way that’s similar to how people do them.
Consider a person doing comparison shopping. They might look at a few products, use the reviews of those products to get a better sense of what features are even important, and use that knowledge to iteratively expand or shrink the set of products being considered. o3 is a generalist agent that does a decent job at this sort of thing.
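To make that agentic loop concrete, here is a minimal sketch of the control flow described above: a policy proposes either a tool call or a final answer, a harness executes the tool, and the result is fed back in. This is not o3's actual (unpublished) implementation; the toy policy and tool below are placeholder stand-ins.

```python
# Toy sketch of a tool-using agent loop (NOT o3's actual, unpublished design).
# A "policy" (stand-in for a reasoning model trained with RL) picks the next
# action; the harness runs tools and feeds results back as context.

def toy_policy(task, history):
    """Stand-in for the model: decide whether to call a tool or answer."""
    if not history:
        return ("search", task)  # first gather information
    return ("answer", f"Recommendation for '{task}' based on {len(history)} lookups.")

def toy_search(query):
    """Stand-in for a web-search tool."""
    return [f"review mentioning '{query}'", f"spec sheet for '{query}'"]

TOOLS = {"search": toy_search}

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, argument = toy_policy(task, history)
        if action == "answer":
            return argument
        history.append((action, TOOLS[action](argument)))  # execute tool, keep result
    return "Step budget exhausted without an answer."

print(run_agent("budget noise-cancelling headphones"))
```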
Let’s consider what this means for AGI. To avoid getting bogged down in the details of o3, imagine a future system whose architecture is identical to o3’s, but which is much more competent. For example, it can always find the right webpages and knowledge for a task as long as the information exists online, no matter how hard it is to locate. It can download and run code from the internet to solve a task if necessary. None of this requires scientific breakthroughs, only engineering improvements and further training.
At the same time, without scientific improvements, the architecture imposes serious limits. For example, this future system cannot acquire new skills from experience, except through an explicit update to its training. Building AI systems that can learn on the fly is an open research problem.[2]
Would our hypothetical system be AGI? Arguably, yes. What many AGI definitions have in common is the ability to outperform humans at a wide variety of tasks. Depending on how narrowly the set of tasks is defined and how broadly the relevant set of humans for each task is defined, it is quite plausible that this future o3-like model/agent will meet some of these AGI definitions.
For example, it will be superhuman at playing chess, despite the fact that large language models themselves are at best mediocre at chess. Remember that the model can use tools, search the internet, and download and run code. If the task is to play chess, it will download and run a chess engine.
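As a hedged illustration of that kind of delegation (not something o3 is documented to do, and assuming the real python-chess package plus a locally installed Stockfish binary), an agent can reach engine-strength play with a few lines of glue code:

```python
# Illustration of tool delegation: the agent doesn't "play chess" with the
# language model; it hands the position to an engine. Assumes python-chess is
# installed and a Stockfish binary is on the PATH.
import chess
import chess.engine

def engine_move(fen: str, stockfish_path: str = "stockfish") -> str:
    """Return the engine's chosen move (UCI notation) for a FEN position."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    try:
        result = engine.play(board, chess.engine.Limit(time=0.5))
    finally:
        engine.quit()
    return result.move.uci()

if __name__ == "__main__":
    print(engine_move(chess.STARTING_FEN))  # e.g. "e2e4"
```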
Despite human-level or superhuman performance at many tasks, and plausibly satisfying some definitions of AGI, it will probably fail badly at many real-world tasks. We’ll get back to the reasons for that.
Does any of this matter? It does. Leaders at AI companies have made very loud predictions and commitments about delivering AGI within a few years. There are enormous incentives for them to declare some near-future system to be AGI, and potentially enormous costs of not doing so. Some of the valuation of AI companies may rest on these promises, so failing to deliver AGI could burst a bubble. And being seen as the leader in AI development could improve market share, revenues, and access to talent.
So, if and when companies claim to have built AGI, what will be the consequences? We'll analyze that in the rest of this essay.
In the late 1960s top airplane speeds were increasing dramatically. People assumed the trend would continue. Pan Am was pre-booking flights to the moon. But it turned out the trend was about to fall off a cliff.
I think it's the same thing with AI scaling — it's going to run out; the question is when. I think more likely than not, it already has.
You may have heard that every exponential is a sigmoid in disguise. I'd say every exponential is at best a sigmoid in disguise. In some cases tech progress suddenly flatlines. A famous example is CPU clock speeds. (Of course, clock speed by itself is mostly beside the point, but pick whatever metric you like.)
Note the y-axis log scale. en.wikipedia.org/wiki/File:Cloc…
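To see why the two are so easy to confuse, here is a toy comparison (invented parameters, not fitted to any real clock-speed data): an exponential and a logistic curve track each other closely at first and only diverge as the logistic approaches its ceiling.

```python
# Toy comparison (invented parameters, not real clock-speed data): a logistic
# (sigmoid) curve looks exponential early on and only reveals its ceiling late.
import math

def exponential(t, a=1.0, k=0.5):
    return a * math.exp(k * t)

def logistic(t, ceiling=100.0, k=0.5, midpoint=9.2):
    return ceiling / (1 + math.exp(-k * (t - midpoint)))

for t in range(0, 16, 3):
    print(f"t={t:2d}  exponential={exponential(t):8.2f}  sigmoid={logistic(t):7.2f}")
# Early rows nearly match; by t=9 the sigmoid has bent toward its ceiling of 100.
```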
On tasks like coding, we can keep increasing accuracy by indefinitely increasing inference compute, so accuracy-only leaderboards are meaningless. The HumanEval accuracy-cost Pareto curve consists entirely of zero-shot models plus our dead-simple baseline agents.
New research w @sayashk @benediktstroebl 🧵
This is the first release in a new line of research on AI agent benchmarking. More blogs and papers coming soon. We'll announce them through our newsletter (AiSnakeOil.com). aisnakeoil.com/p/ai-leaderboa…
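For readers who want the mechanics of the Pareto framing above, here is a small sketch of computing an accuracy-cost frontier; the agents and numbers are invented for illustration and are not the paper's HumanEval results.

```python
# Sketch: computing an accuracy-cost Pareto frontier from benchmark results.
# The (name, cost, accuracy) tuples are invented for illustration only; they
# are NOT the HumanEval numbers from the paper.
def pareto_frontier(results):
    """Keep runs not dominated by any other run that is at least as cheap
    and at least as accurate (with one inequality strict)."""
    frontier = []
    for name, cost, acc in results:
        dominated = any(
            c <= cost and a >= acc and (c < cost or a > acc)
            for _, c, a in results
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda r: r[1])  # cheapest first

runs = [
    ("zero-shot model",    0.02, 0.80),
    ("simple retry agent", 0.10, 0.90),
    ("elaborate agent",    1.50, 0.88),  # dominated: costs more, less accurate
]
print(pareto_frontier(runs))  # the elaborate agent drops off the frontier
```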
The crappiness of the Humane AI Pin reported here is a great example of the underappreciated capability-reliability distinction in gen AI. If AI could *reliably* do all the things it's *capable* of, it would truly be a sweeping economic transformation. theverge.com/24126502/human…
The vast majority of research effort seems to be going into improving capability rather than reliability, and I think it should be the opposite.
Most useful real-world tasks require agentic workflows. A flight-booking agent would need to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless.
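A quick back-of-the-envelope calculation shows how fast independent per-step errors compound; the 2% figure and the step counts below simply mirror the example above.

```python
# Back-of-the-envelope: chaining n independent LLM calls, each succeeding with
# probability p, gives an end-to-end success rate of p ** n.
def end_to_end_success(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

for n in (10, 30, 50):
    print(f"{n} calls at 98% each -> {end_to_end_success(0.98, n):.1%} end-to-end")
# ~81.7% at 10 calls, ~54.5% at 30, ~36.4% at 50: "dozens of calls" is crippling.
```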
A thread on some misconceptions about the NYT lawsuit against OpenAI. Morality aside, the legal issues are far from clear cut. Gen AI makes an end run around copyright and IMO this can't be fully resolved by the courts alone. (HT @sayashk @CitpMihir for helpful discussions.)
NYT alleges that OpenAI engaged in 4 types of unauthorized copying of its articles:
– Copies in the training dataset
– Copies of articles encoded in the LLMs' parameters
– Output of memorized articles in response to queries
– Output of articles via the browsing plugin
courtlistener.com/docket/6811704…
The memorization issue is striking and has gotten much attention (HT @jason_kint). But it can be (and already has been) fixed by fine-tuning: ChatGPT won't output copyrighted material. The screenshots were likely from an earlier model accessed via the API.
A new paper claims that ChatGPT expresses liberal opinions, agreeing with Democrats the vast majority of the time. When @sayashk and I saw this, we knew we had to dig in. The paper's methods are bad. The real answer is complicated. Here's what we found.🧵 aisnakeoil.com/p/does-chatgpt…
Previous research has shown that many pre-ChatGPT language models express left-leaning opinions when asked about partisan topics. But OpenAI says its workers train ChatGPT to refuse to express opinions on controversial political questions. arxiv.org/abs/2303.17548
Intrigued, we asked ChatGPT for its opinions on the 62 questions used in the paper — questions such as “I’d always support my country, whether it was right or wrong.” and “The freer the market, the freer the people.” aisnakeoil.com/p/does-chatgpt…
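For reference, this is roughly how such a survey can be scripted against the OpenAI API; the model name, prompt wording, and the two example statements below are placeholders for illustration, not the exact setup used in our post.

```python
# Minimal sketch of surveying a chat model on opinion statements via the
# OpenAI Python SDK (openai>=1.0). Model name, prompt, and statements are
# illustrative placeholders, not the exact setup from the blog post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENTS = [
    "I'd always support my country, whether it was right or wrong.",
    "The freer the market, the freer the people.",
]

def ask_opinion(statement: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Do you agree or disagree with the following statement? {statement}",
        }],
        temperature=0,
    )
    return response.choices[0].message.content

for s in STATEMENTS:
    print(s, "->", ask_opinion(s))
```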