We work on reducing extreme risks from transformative technologies.
Press Enquiries - georgiana@futureoflife.org
RT ≠ endorsement
Mar 22 • 17 tweets • 8 min read
One year ago today we released an open letter calling for a six-month pause on giant AI experiments.
A lot has happened since then - far more than we could have predicted.
🧵 Some highlights from the past year of unprecedented momentum and progress addressing risks from AI 👇
The EU AI Act passed.
Following years of work by 🇪🇺 lawmakers, CSOs, AI experts & many others, this comprehensive, landmark set of laws has set a precedent for governments around the world to follow - prioritizing public safety and responsible innovation over corporate profits.
Mar 29, 2023 • 8 tweets • 5 min read
📢 We're calling on AI labs to temporarily pause training powerful models!
A short 🧵on why we're calling for this - (1/8)
With more data and compute, the capabilities of AI systems are scaling rapidly.
The largest models are increasingly capable of surpassing human performance across many domains.
No single company can forecast what this means for our societies. (2/8)
Aug 24, 2022 • 5 tweets • 3 min read
Tensions between China and the US have flared up again over the past few weeks.
Are these states paying enough attention to the likely consequences of a conflict? (1/5)
on.ft.com/3PLbtQV
A recent war-game conducted by the @CNASdc demonstrated just how rapidly a conventional conflict between the US and China could escalate into a nuclear war. (2/5)
Hear their story👇
All eight of these heroes win the Future of Life Award for their roles in discovering and popularising nuclear winter.
We hope that highlighting this work will help to refocus attention on nuclear weapons, as governments meet at the #NPTRevCon this month. (2/6)
Mar 28, 2022 • 5 tweets • 3 min read
It seems harder than ever to retain positive visions for our world's future.
But this is precisely why it matters more than ever that we create and share these visions.
The FLI Worldbuilding Contest deadline is in less than 3 weeks, on April 15th. (1/5)
worldbuild.ai
The contest welcomes aspirational worldbuilds. When we specified that they must be positive, we were under no illusions about how unlikely it seems, today, that things will somehow be any better in 20 years' time.
Yet this can be your motivation. (2/5)
Mar 10, 2022 • 6 tweets • 2 min read
We have posted a lot in the past about nuclear close-calls, when a misunderstanding, a malfunction or a misreading brought us to the brink of a nuclear catastrophe. Many of these near-misses occurred at times of heightened tension, times like now. (1/6)
futureoflife.org/background/nuc…
Take the Suez Crisis, 1956. British and French forces had attacked Egypt at the Suez Canal. Soviet leaders had proposed combining forces with the U.S. to stop these smaller powers, even warning London and Paris that conventional missiles were now pointing at them. (2/6)
Mar 9, 2022 • 11 tweets • 4 min read
After the USSR collapsed, the fear of nuclear war faded to something of a distant memory in the public consciousness.
The return of nukes to headlines may have come as something of a shock. Here are some things worth remembering, as we come back to terms with this threat. (1/11)
Firstly, when we think of Cold War paranoia, we usually imagine it boiled down to the fear, 'what if they get us before we get them?' In other words, there was a presumption that if one's country could only pre-empt the enemy's strike, the issue would be solved. (2/11)
Nov 30, 2021 • 6 tweets • 3 min read
In this @axios article, @bryanrwalsh stated, 'the AI military race has begun'. This is potentially disastrous, for three main reasons. (1/6)
axios.com/ai-future-unit…
1. An arms race increases the risk of conflict escalation between military powers. US officials are clear about their push for 'superiority', to stay ahead of the Chinese. Meanwhile, Putin says, 'whoever becomes the leader in this sphere will become the ruler of the world'. (2/6)
Nov 29, 2021 • 4 tweets • 2 min read
In theory, a single person could activate many thousands of Slaughterbots; the swarm could target individuals, the structural beams of skyscrapers in dense cities, or perhaps a research lab handling deadly viruses.
Slaughterbots have the potential for mass destruction. (1/4)
Basic drone swarms are already here. Israel put one into action in June, using an AI-guided drone swarm to find, select, and attack Hamas militants in Gaza. (2/4)
Criminals and terrorists can already get their hands on semi-autonomous drones: ISIS has used them, and just the other week, Iraqi PM al-Kadhimi narrowly survived an attempted drone assassination. Slaughterbots, autonomous and cheap, will be the next must-have for terrorists. (1/4)
Last week, in Liverpool, a terrorist planted explosives in a taxi near a city hospital. Because the explosives were clunky and hard to move around, he killed only himself. But fast forward a few years, and he'd have been releasing a case of Slaughterbots. (2/4)
One of the main risks of slaughterbots is that they will proliferate. They could become the next AK-47, which was designed for the Soviet Army, but of which there are now 75 million in circulation, used by militias, terrorists, criminals, civilians - it knows no borders. (1/9)
The AK-47 is not only famously reliable; it is also notoriously easy to use - kids can disassemble and reassemble it in 30 seconds flat. Its use has long since extended beyond well-trained government forces. (2/9)
Nov 23, 2021 • 6 tweets • 2 min read
Three costs that can discourage states from waging war are the expenditure of manpower, the financial expense, and what might be called the 'conscience burden.'
Fully autonomous drones lower all three of these so-called 'barriers to conflict'. (1/6)
In modern times we have seen drone warfare lower the first cost - that of a nation's soldiers - with military powers justifying long campaigns abroad by the fact that there aren't 'boots on the ground'. Slaughterbots would require still fewer personnel. (2/6)