Elsa B. Kania @EBKania
ICYMI, China has posted a working paper for the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) #CCWUN…
Here are a few initial reflections. First, it's encouraging that China is actively participating in this process, and I hope China (and Russia) will remain engaged on the legal and ethical issues underlying their development of military applications of AI.
At the same time, it's important to note that the Chinese military and defense industry are actively engaged in research and development of - and experimentation with - a range of AI-enabled capabilities, including swarm intelligence, as I've documented.…
This is hardly surprising - just about every major military is pursuing different applications of AI - but great power competition in this domain does pose a range of risks to military and strategic stability.…
At first glance, China's position paper itself seems to seek to preserve a degree of ambiguity and optionality, while expressing outright skepticism about any "uniform standard" on these issues. That doesn't surprise me at all.
The paper notes that LAWS "are closely related to existing weapons and new weapon systems that are being developed" and "still lack a clear and agreed definition," but "should be understood as fully autonomous lethal weapon systems."
It's worth noting that the PLA's official dictionary included a definition for artificial intelligence weapon (人工智能武器) as early as *2011*, though presumably PLA thinking has continued to evolve as the technology has advanced.
"a weapon that utilizes AI to automatically (自动) pursue, distinguish, & destroy enemy targets; often composed of information collection & management systems, knowledge base systems, assistance to decision systems, mission implementation systems, etc.,” e.g., military robotics
There may be a major divide between China's diplomatic engagement on these issues and the PLA's approach. The PLA lacks a legal culture comparable to the U.S. military's, in part because it has little experience applying the laws of armed conflict or rules of engagement in actual operations.
Traditionally, the PLA has also approached issues of international law in terms of legal warfare (法律战), seeking to exploit rather than be constrained by legal frameworks.
See, for instance, Dean Cheng's report on legal warfare, which argues that China approaches lawfare "as an offensive weapon capable of hamstringing opponents and seizing the political initiative":…
I've also written on the PLA's approach to the "three warfares" based on authoritative publications that focus on concepts such as seizing “legal principle superiority” (法理优势) or delegitimizing an adversary with "restriction through law" (法律制约).…
Back to the paper, which has a very specific definition of LAWS, including the characteristics of "impossibility for termination" and "indiscriminate effect, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios and targets."
That allows for a lot of leeway, it seems. Would an intelligent/autonomous weapons system that can be terminated and is not indiscriminate be seen as not at all problematic from this perspective?
The paper highlights Human-Machine Interaction as "conducive to the prevention of indiscriminate killing and maiming...caused by breakaway from human control." The PLA will likely care a lot about security and controllability due to core aspects of its command culture.
The paper does articulate concern for the capability of LAWS in "effectively distinguishing between soldiers and civilians," calling on "all countries to exercise precaution, and to refrain, in particular, from any indiscriminate use against civilians."
Again, that statement may be consistent with the arguments of those seeking to "ban killer robots," but it doesn't articulate any commitment to caution in developing capabilities that can exercise that sort of distinction.
At the same time, China's position paper emphasizes the importance of AI to development and argues, "there should not be any pre-set premises or prejudged outcome which may impede the development of AI technology." That sounds reasonable, given the nascency of these technologies.
It proceeds to highlight that national reviews on 'new weapons' have shown "positive significance on preventing the misuse of relevant technologies and on reducing harm to civilians." That AI may allow for greater distinction and proportionality is also a very valid point.
For comparison, see China's December 2016 position paper for that UN GGE, which is much less detailed for the most part, with one important exception:…
The December 2016 paper declared: "China supports the development of a legally binding protocol on issues related to the use of LAWS, similar to the Protocol on Blinding Laser Weapons, to fill the legal gap" on LAWS.
This April 2017 position paper doesn't call for such a "legally binding protocol" but merely calls for "full consideration of the applicability of general legal norms to LAWS." So there has been a notable shift.