A cornerstone of the ICRC's proposed rules for #LAWS is a ban on "unpredictable weapon systems," i.e. systems whose "effects cannot be sufficiently understood, predicted and explained."

So here's a quick thread🧵 on predictability and understandability.
icrc.org/en/document/ic…
First, what is "Predictability"? Well, as it happens, there are different types of predictability.
1. Technical (un)predictability
2. Operational (un)predictability
3. The (un)predictability of effects/outcomes.
They're all different in crucial ways.
Technical (un)predictability is the degree to which a system executes a task with the same performance that it exhibited in testing, in previous applications or (in the case of machine learning systems) on its training data.
(By the way, technical predictability is not the same as reliability. Even an exceptionally reliable system that rarely fails might still occasionally fail in highly unpredictable ways, because the range of failures that autonomous systems can exhibit is so wide.)
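To make that a bit more concrete, here's a minimal sketch (in Python, with invented numbers and a hypothetical target-recognition model) of one naive way to frame technical predictability: comparing deployed performance against tested performance. Purely illustrative, not a standardized metric.

```python
import numpy as np

# Hypothetical: a target-recognition model scored 0.95 accuracy
# on its held-out test data during evaluation.
test_accuracy = 0.95

# Accuracy observed across several field trials in new conditions
# (made-up numbers for illustration).
field_accuracies = np.array([0.93, 0.88, 0.95, 0.71, 0.90])

# One naive framing of technical (un)predictability: how far, and how
# consistently, deployed performance departs from tested performance.
mean_gap = test_accuracy - field_accuracies.mean()  # average drop
spread = field_accuracies.std()                     # run-to-run variance

print(f"Mean performance gap vs. testing: {mean_gap:.3f}")
print(f"Variance across deployments:     {spread:.3f}")
# A large gap or a large spread suggests the system's field behavior
# is not well predicted by its test performance.
```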
Operational (un)predictability, on the other hand, is the degree to which an autonomous system’s individual actions can be anticipated.
For example, if you send a drone into a cave, to what extent can you predict every individual move it will make?
While technical predictability is *fairly* measurable, operational unpredictability is rather slippery.
And unavoidable. All fully autonomous system missions are going to have a degree of inherent operational unpredictability. It's not a bug, it's a feature.
The type of unpredictability the ICRC is referring to is the third type: the unpredictability of effects/outcomes. This is a function of a system's technical predictability *in combination* with the operational unpredictability of the environment and mission.
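One deliberately oversimplified way to picture that combination: even a technically very predictable system can produce hard-to-predict effects in a sufficiently unpredictable environment. A toy sketch with invented scores and an invented formula, just to show the interaction:

```python
# Toy illustration only: these scores and this formula are invented to
# show the interaction, not a real assessment method.
def effects_predictability(technical_predictability: float,
                           operational_unpredictability: float) -> float:
    """Both inputs in [0, 1]. Returns a rough 0-1 'effects' score."""
    return technical_predictability * (1.0 - operational_unpredictability)

# A highly reliable system in a benign, static environment...
print(effects_predictability(0.95, 0.10))  # ~0.86: effects fairly predictable

# ...versus the same system in a chaotic, contested environment.
print(effects_predictability(0.95, 0.80))  # ~0.19: effects hard to predict
```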
Many factors determine AWS predictability, including:
The type of system
The type of task
The complexity of the environment
The system's capacity for self-learning
The scale and length of the deployment
The number of interacting systems (both friendly and adversarial)
Given all this, the ICRC's proposed restrictions on the types of target, mission duration, geographical scope and scale, and situations of use for LAWS could potentially serve to limit these weapons to situations of greater predictability.
Ok, maybe not exactly a "quick" thread. Are you still with me? OK. Now let's talk about Understandability (or as some folks call it, "explainability").
Like predictability, "Understandability" is a many-faceted concept. There are a ton of factors that determine the degree to which a system can be understood. It's not just about the system's technical complexity!
Crucially, understandability depends on the human subject's capacity for understanding. My capacity for understanding an AWS is different from yours, which is different from that of a machine learning engineer or a cat.
Nor is a human's capacity for understanding AWS solely based on expertise. It depends just as much on:
Their cognitive load
The human-machine interface
Their prior experience with the system
Their level of trust in the system
(and more)
Still breathing? Good.
Now, in addition to the many factors that affect system predictability and understandability, there are a ton of factors that guide the *necessary/appropriate* level of understandability and predictability in any given context.
These include:
The criticality of the mission
The form of human control
The type of adversary
The environment
(and more)
So what?
Well, the point is, all of these things make it exceedingly challenging to *measure* predictability or understandability in any standardized way, or to establish universal thresholds for them. We're talking about lots and lots of "IT DEPENDS."
For example, how do you measure "operational complexity"? How do you measure a human operator's "capacity for understanding"? How do you grade the "type of adversary"? How do you measure the predictability of something that is unpredictable?
This could pose a challenge for the implementation of rules that hinge on the notion of a minimum required level of predictability and understandability.
BUT that's not to say that it's a lost cause. Everyone agrees that some degree of predictability and understandability is essential for the prudent and responsible use of autonomy in conflict. This is actually a rare patch of common ground in an otherwise rather divided debate.
Achieving the necessary standardization is going to require new scientific developments in testing and evaluation, standards-setting, and legal reviews. This stuff ain't easy, but there's a lot of valuable work happening in this space. So there's plenty of grounds for optimism!
Phew...that's it, I'm done. You made it! I hope you thought it was worth it. Now I'm going to go stare into blank space for a while.
You can read more about everything I've mentioned in this thread here:
unidir.org/publication/bl…
