Okay okay I just saw another "self driving car morality" article and
I can't. I can't take it anymore
THREAD TIME
Context: I've worked professionally on a self driving car product, but the following comments are general, not about the product I worked on
BASICALLY EVERYTHING ANYONE WRITES ABOUT THIS SUBJECT IS NOT JUST WRONG, BUT DESCRIBING AN ENTIRELY ALTERNATE UNIVERSE FROM OUR OWN
For context on the problem, here's MIT's "Moral Machine", a quiz that shows a scenario & asks you to judge the car's appropriate action
People have become captivated by the idea that self driving cars must actually enact the old "trolley problem" from philosophy 101
For example, in that screenshot, a question is posed: Should a car swerve to prevent a collision, saving occupants but killing pedestrians?
This question is fundamentally broken; the answer is neither yes nor no. The reason is *this is not a choice a self driving car CAN make*
All these thought experiments are based on a misunderstanding about the nature of information available to any self driving car
Self driving cars have sensors and cameras. They can detect obstructions. They can identify *some* objects-- say, as a pedestrian vs a car.
What they *cannot* do is do these things *reliably*. In each case, they know only a *probability* that an object exists and a *probability* of what it is.
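To make that concrete, here's a minimal sketch (all names and numbers are hypothetical, not from any real perception stack) of the kind of output a self driving car's perception system actually produces: not facts, but confidence-weighted guesses.

```python
# Hypothetical sketch: perception output is probabilities, not certainties.
from dataclasses import dataclass

@dataclass
class Detection:
    existence_prob: float   # probability the object is real at all
    class_probs: dict       # probability per candidate class

# The car doesn't "see a pedestrian"; it sees something like this:
obstacle = Detection(
    existence_prob=0.87,
    class_probs={"pedestrian": 0.62, "cardboard_box": 0.31, "other": 0.07},
)

# Even the best-guess label carries real uncertainty:
best_class = max(obstacle.class_probs, key=obstacle.class_probs.get)
print(best_class, obstacle.class_probs[best_class])  # pedestrian 0.62
```

Every downstream decision has to be made on top of numbers like these, never on top of certainty.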
With time, perhaps cameras and AIs will improve to increase the accuracy of these judgements. *They'll never, even in principle, be perfect*
And given that information will always be imperfect, *no* self driving car will ever make a life or death decision based on that information
Every self driving car, now and forever, will make the *most conservative* choice in every single situation. This is inevitable.
It is inevitable because of the nature of liability. If a car ever does something that *increases* danger, the carmaker is in legal peril
And swerving out of your lane into another one will *always* increase the probability of a collision that was *caused by* the car
No carmaker will ever be the one to get sued for hitting an undetected bus because the car swerved to avoid a misdetected cardboard box
So let's look at that MIT trolley problem. The diagram says there's a blockage in the road, and pedestrians in the other lane.
*The car doesn't know that for certain*. Even a mega-AI future car doesn't *KNOW* the obstruction is real, or *KNOW* the pedestrian count.
The car also cannot, even in principle, know things such as whether there are approaching cars entering that lane ahead or from the side
So what should the car do? No, ignore "should". What the car *will* do is brake to reduce kinetic energy and STAY IN ITS LANE.
In EVERY self driving car "morality" problem, the only possible answer is "brake, and avoid making the problem worse".
Take every one of your diagrams. Throw them out. *The car doesn't have that information*. It will brake, and avoid making the problem worse.
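The conservative policy described above can be sketched in a few lines (purely illustrative; the function name and 0.5 threshold are made up, not any real car's logic). The point is that the steering answer never varies with the scenario:

```python
def plan_response(obstacle_probs):
    """Illustrative sketch of the conservative policy: whatever uncertain
    picture the sensors paint -- pedestrians, boxes, phantom detections --
    the action never involves leaving the lane.
    obstacle_probs: probabilities that *something* is in the lane ahead."""
    if any(p > 0.5 for p in obstacle_probs):  # threshold is hypothetical
        return {"brake": True, "lane_change": False}
    return {"brake": False, "lane_change": False}

# A "trolley problem" scenario and an empty road get the same steering answer:
print(plan_response([0.9, 0.4]))  # {'brake': True, 'lane_change': False}
print(plan_response([0.1]))       # {'brake': False, 'lane_change': False}
```

Notice there is no branch that weighs lives: the only question the policy ever asks is "is something probably there?", and the only lever it ever pulls is the brake.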
In fact, even braking is something autonomous cars do with hesitance! Consider Tesla's automatic emergency braking: tesla.com/sites/default/…
Tesla describes the goal of this feature as being to "reduce the impact of an unavoidable frontal collision." It does not say "prevent".
Imagine if the feature electively braked to prevent collisions. What if it triggered at the wrong time? It could *cause* an accident.
Instead, the car waits to act until a collision is "unavoidable", and then takes an action that, at that point, can only reduce damage
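Here's a toy sketch of what "unavoidable" might mean physically (my own simplification, not Tesla's actual logic): fire the brakes only once even maximum deceleration can no longer stop the car in time, so braking can only shed impact energy, never cause a new accident.

```python
def aeb_should_fire(distance_m, closing_speed_mps, max_decel_mps2=8.0):
    """Toy model of "collision is unavoidable": true only when the minimum
    stopping distance at full braking already exceeds the gap ahead.
    All parameters are illustrative assumptions."""
    if closing_speed_mps <= 0:
        return False  # not closing on the obstacle; nothing to mitigate
    # kinematics: stopping distance = v^2 / (2a)
    stopping_distance = closing_speed_mps**2 / (2 * max_decel_mps2)
    return distance_m < stopping_distance  # past this point, impact is certain

print(aeb_should_fire(10.0, 20.0))  # True: can't stop in 10 m from 20 m/s
print(aeb_should_fire(40.0, 20.0))  # False: still avoidable, so don't act
```

Until that line is crossed, doing nothing is the safe choice; after it, braking is guaranteed to help.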
Future self driving cars will control all braking and therefore will act more proactively. But they'll keep this conservative approach.
Morality diagrams often contrast swerving into a lane, into a cafe, or off a cliff. Nonsense. The car cannot know what's outside its lane!
It can't count how many people are in a cafe, or know what's at the bottom of a cliff. It can't know what will happen if it leaves the lane.
It can't count pedestrians-- not accurately enough for liability. Your trolley-problem analysis is based on information the car *won't have*
These self-driving car diagrams are really questions about how a *human* driver, who incidentally has inhuman omniscience, should act
They are *irrelevant* to self-driving cars or how any car heuristic will ever be designed, and this is obvious with *any* serious thought
Ugggggh twitter messed up the threading after this tweet. The thread continues here: