A quick (and unsettling) thread on how AI is already shaping the future of ground- and drone-based combat. A few days back the US Army fully tested its "Project Convergence 2020", an AI-enabled, quasi-automatic sensor-to-shooter toolchain with an engagement time of just seconds /1
I recommend reading the news article (source at bottom) because it plainly shows how military decision-makers want AI to collect the data, decide on the target & select the means of engagement. The operator still has to click "OK", but that's not "ethical use of AI". /2
At the point when a human sees the processed data and has to decide whether or not to pull the trigger, the machine has already filtered out so much information and pre-decided on target, tactics, situational conditions etc. that a human simply cannot trace back / evaluate it /3
In addition, the operator sees an "overhead view of the battlefield populated with both blue and red forces", already a strong abstraction of the battlefield. How is a human supposed to detect wrong targets here? /4
And: is it practical or even possible to operate a system that you potentially have to question the whole time? Imagine driving your car and constantly having to worry whether the speedometer or the fuel gauge is giving you the right data. We are trained to trust machines. /5
So, the operator simply HAS to accept this data, especially under time-critical conditions like combat, with its psychological pressure. And time seems to be the key, as the AI systems are intended to shorten the "sensor to shooter timeline" from minutes to seconds. /6
Moreover, when the human "delay" factor is considered a disadvantage, how can one seriously claim to want human oversight over decisions and a "human in the loop"? This is not much more than a fig leaf and will presumably be cut out in a later version /7
One last point. They also tested drones "using the on-board Dead Center payload" that "was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification" /8
This means the operator never sees the "full picture", thus reducing his chances of detecting the AI's miscalculations. We need international norms for AI and automation, and we need them fast, as military forces are already two steps ahead of us, creating facts on the ground. /9
Here's the source: c4isrnet.com/artificial-int…. TL;DR: Keep FIRESTORM, Prometheus and TITAN in mind, as I guess we will hear of them soon in real life. /10
That's it, my first Twitter short-analysis thread. I'd be glad if you shared it. More questions/thoughts/criticism/debate via DM if you like. Thanks for reading. /11
CC: @drfranksauer @thomas_wiegold @NiklasSchoernig @adahlma @z_edian @perceptic0n @HonkHase @DrAlwardt @DrUlrichKuehn @RikeFranke @SophieCFischer @CarloMasala1 @M__Verbruggen @KennethGeers @boell_secpol
