1/ My @TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵+🔗👇
2/ Based on my 2024 report with @rgblong and others, this talk makes three basic points: (1) we have deep uncertainty about the nature of sentience, (2) we have deep uncertainty about the future of AI, and (3) when in doubt, we should exercise caution.
3/ First, we have uncertainty about the nature of sentience. Some experts think that only biological, carbon-based beings can have feelings. Others think that sufficiently advanced artificial, silicon-based beings can have feelings too. Which view is right?
4/ We may never know for sure. The only mind that any of us can directly access is our own, and we have a lot of bias and ignorance about other minds, including a tendency to (a) over-attribute sentience to some nonhumans and (b) under-attribute it to others.
5/ This situation calls for humility. We may lean one way or the other, but we should keep an open mind. Even if you feel confident that only biological beings can feel, you should allow for at least a realistic chance that sufficiently advanced artificial beings can feel, too.
6/ Second, we have uncertainty about the future of AI. Companies are spending billions on progress. They aim for intelligence, not sentience, but intelligence and sentience may overlap. Some think AI will slow down, others think it will speed up. Which view is right?
7/ We may not know until it happens. Technology is hard to predict. In 2015, many doubted that AI systems would be able to have conversations, produce essays and music, and pass standardized tests in a range of fields within a decade. Yet here we are.
8/ This situation calls for humility as well. Even if you feel confident that progress in AI will slow from here, you should allow for at least a realistic chance that it will speed up or stay the same, and that AI systems with human-like capabilities will exist by, say, 2035.
9/ So if this analysis is correct, then AI sentience is not only an issue for sci-fi or the distant future. Given current evidence, there is at least a non-negligible chance that AI systems with real feelings could emerge in the near future. What do we do with that possibility?
10/ Fortunately, we have tools for making high-stakes decisions with uncertain outcomes. When there is a non-negligible chance that an action or policy will cause harm, we can assess the evidence and take reasonable, proportionate steps to mitigate risk.
11/ We use these tools in a variety of domains: to address drug side effects, pandemic risks, and climate change risks. Increasingly, we use them to address animal welfare risks and AI safety risks. In the future, we can use them to address AI welfare risks as well.
12/ For companies and governments, taking AI welfare seriously means acknowledging that AI welfare is a credible issue, assessing AI systems for welfare-relevant features, and preparing policies for treating AI systems with an appropriate level of moral concern.
13/ For the rest of us: We can accept that we may be the first generation to co-exist with real sentient AI. Whether or not that happens, we can expect to keep making mistakes about AI sentience. Preparing now — cultivating calibrated attitudes and reactions — is important for everyone.
14/ I recorded this talk last year. Since then, we released “Taking AI Welfare Seriously,” and @AnthropicAI hired one of the authors as an AI welfare researcher, launched an AI welfare program, and (with @eleosai) conducted AI welfare evals. Other actors entered the space too.
15/ Public attention has exploded as well. Many now experience chatbots as sentient, and experts are rightly sounding the alarm about over-attribution risks, including Microsoft AI CEO @mustafasuleyman in his recent essay on “seemingly conscious AI.”
16/ FWIW, I agree with Suleyman on many issues, including: (1) Over-attribution risks are more likely at present, (2) We should avoid creating sentient AI unless we can do so responsibly, and (3) We should avoid creating non-sentient AI that seems sentient.
17/ However, Suleyman also describes our work on moral consideration for near-future AI as “premature, and frankly dangerous,” implying that we should consider and mitigate over-attribution risks but not under-attribution risks at present. Here we disagree.
18/ If there are risks in both directions, then we should consider them both, not consider one while neglecting the other. And even if the risk of under-attribution is low now, it may increase fast. We can, and should, address current problems while preparing for future ones.
The world is rightly condemning the killing of Freya. Freya was a climate refugee, and humans killed her rather than welcoming her into their community.
What we now need to appreciate is that this case is only the tip of the (melting) iceberg. 🧵
As I discuss in my book Saving Animals, Saving Ourselves, humans already kill animals in many cases where our interests appear to conflict, even when the stakes for animals are much higher and other forms of conflict resolution are available. 2/
Additionally, when human and nonhuman interests conflict and violence *is* the only option available, this lack of options is often our fault. When we fail to build capacity to care for other animals, the "necessity" of violence becomes a self-fulfilling prophecy. 3/
Today is the official U.S. release of Saving Animals, Saving Ourselves!
Animals matter for pandemics, climate change, and other catastrophes, and these catastrophes also matter for animals. This book examines the science, ethics, and politics of these relationships. 1/10
COVID-19, floods, fires, and other recent disasters remind us that human and nonhuman fates are linked. Our use of animals contributes to pandemics, climate change, and other threats which, in turn, contribute to biodiversity loss, ecosystem collapse, and nonhuman suffering. 2/10
As a result, I argue that we have a responsibility to include animals in global health and environmental policy, by reducing our use of them as part of our pandemic and climate change mitigation efforts and increasing our support for them as part of our adaptation efforts. 3/10