Thread by Elsa Kania (@EBKania), 18 tweets
Wow, this is fascinating on a number of levels. China reportedly plans to update the computer systems on nuclear submarines with AI to enhance commanders' thinking and decision-making. scmp.com/news/china/soc…
First of all, the fact that the senior scientist on the program is talking to the South China Morning Post about this is quite notable as an indicator that, despite the sensitivity of the project, the powers that be presumably want this story to receive some attention.
Next, this is telling in terms of how the PLA thinks about the utility of AI. The researcher reportedly highlighted that 'a submarine with AI-augmented brainpower would give the PLA Navy an upper hand in battle and push applications of AI technology to a new level.'
Since AlphaGo defeated Lee Sedol in spring 2016, I've seen writings from PLA strategists highlighting this aspect of AI -- its potential in "intelligentized" (智能化) command decision-making, enabling commanders to achieve decision superiority.
The story highlights that an AI decision-support system with “its own thoughts” would reduce the commanding officers’ workload and mental burden, at a time when demands of modern warfare could undermine judgment.
That the PLA is reportedly developing the capability to leverage AI in decision support for nuclear submarines raises interesting questions in the context of broader debates about the impact of AI on nuclear and strategic stability. (h/t @mchorowitz, @paul_scharre, @Alex_agvg)
It is clear that the PLA is very serious about leveraging AI to enhance its future military power. cnas.org/publications/r… #ChinaAIPower #BattlefieldSingularity
SCMP details plans for AI to take on “thinking” functions on nuclear subs, e.g. interpreting and answering signals picked up by sonar, through the use of convolutional neural networks that "acquire knowledge, improve skills and develop new strategy without human intervention."
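To make the technique concrete: the task described here, classifying sonar returns, is the kind of pattern-recognition problem convolutional networks are routinely applied to. Below is a minimal, purely illustrative sketch in PyTorch, assuming 1-channel 64x64 sonar spectrograms and four hypothetical contact classes; none of these specifics come from the article, and this is not the system the PLA is reportedly building.

```python
# Illustrative sketch only: a small CNN classifier for sonar spectrograms.
# Assumptions (not from the article): 1-channel 64x64 time-frequency inputs,
# four hypothetical contact classes (e.g. surface ship, submarine, biologic, noise).
import torch
import torch.nn as nn

class SonarCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # low-level spectral features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),                  # class scores per spectrogram
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    # Usage sketch: a batch of 8 random "spectrograms" -> class scores.
    model = SonarCNN()
    fake_spectrograms = torch.randn(8, 1, 64, 64)
    scores = model(fake_spectrograms)
    print(scores.shape)  # torch.Size([8, 4])
```

The point of the sketch is simply that a CNN learns features from labeled acoustic data; the "without human intervention" framing in the article refers to training and adaptation, not to the model writing its own objectives.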
The AI system in question is intended to follow and understand underwater operations but be simple enough to reduce the risk of failure. However, will the PLA underestimate these risks and the potential threats to strategic stability? lawfareblog.com/great-power-co…
The researcher in question was quoted as saying “What the military cares most about is not fancy features. What they care most is the thing does not screw up amid the heat of a battle.” But could it?
For more on the history of challenges with automated systems, see this @CNASdc report by Dr. John Hawley on the Patriot air and missile defense system: cnas.org/publications/r…
These issues will only be more acute with an AI system in a nuclear sub, and we've seen cases in which only human intervention and judgment have prevented nuclear war, as with Stanislav Petrov in 1983. thebulletin.org/critical-human…
The researcher emphasizes, “There must be a human hand on every critical post. This is for safety redundancy.” But this will create new challenges for the PLA, including new demands for training highly proficient personnel who can understand and operate these complex systems.
I don't expect that the PLA will take the human fully "out of the loop" in these or other AI systems in the near term. The technology simply isn't ready yet. But there is a risk that the PLA may overestimate the capabilities of and rely too heavily upon machine judgment.
With new capabilities come new risks, and U.S.-China strategic competition in AI is intensifying. I expect the U.S. may be pursuing similar capabilities, but could the PLA move more quickly?
According to Zhu Min, a researcher at the Institute of Acoustics with the Chinese Academy of Sciences, “In the past, the technology was too distant from application, but recently a lot of progress has been achieved...There seems to be hope around the corner.”
He then goes on to say that if the system started to have its own way of thinking, “we may have a runaway submarine with enough nuclear arsenals to destroy a continent.” So, the dystopian tag line for this story could be that the PLA wants to put superintelligence on nuclear subs...
So, in summary, this story is a critical indicator of the direction that the PLA may be taking as it pursues military applications of AI, highlighting its trajectory towards new, perhaps quite disruptive capabilities and also raising real, serious concerns about the risks.