In light of Tesla's release of so-called "Full Self-Driving" (an inaccurate, dangerous name) and the anniversary of Elaine Herzberg being killed by a negligent Uber AV test program and its chaperone driver, I'm re-upping my #AV talk. I'll follow up with some highlights.
(The YouTube link is the raw video, and you can find the slides at walkandbike.com/archives/188. When I'm able, I'll add the slides side-by-side with the video of the talk, but for now, this will have to do.)
In the large-scale MIT "Moral Machine" study of a simulated #AV trolley problem, the hypothetical is framed around "sudden brake failure" and an "unavoidable accident". I argue that scenario ignores the purported safety promise of AVs, uses bad language/framing,
and skips the upstream questions that should be answered about #AV. For example, a moral decision not addressed in the trolley problem: should AVs be traveling fast enough near pedestrian crossings that they would kill someone in a crash?
These language and framing issues are rampant among both media and professionals talking about #AV, and I give some egregious (but not rare) examples. Basically, people outside the vehicle are treated as a hindrance, perhaps even an antagonist, to #AV (and thus to Progress?).
In the case of the Uber crash that killed Elaine Herzberg, we now know that Uber made deliberate choices (e.g. turning off Volvo's safety systems) *AND* the chaperone driver was streaming a TV episode, all of which likely contributed to Herzberg's preventable death. But
the initial narratives (which often become the lasting ones) were victim-blaming: "she stepped out suddenly" and "she was outside a crosswalk". Yet, beyond Uber's and the driver's actions, a look at the location of the crash also implicated the built environment.
I cover a variety of issues briefly in the talk, including some research findings that were then preliminary and are now published (see: walkandbike.com/archives/180).