Autonowashing makes a system appear to be more autonomous than it really is.
It is a misrepresentation of the appropriate level of human interaction required to operate a system safely.
This word is currently most often used in association with partial (L2) driving automation.
1️⃣ Why does it happen?
Well, there’s no good reason to confuse people about the capabilities of automation and their role in using it…especially with safety-critical systems!
At its original source, autonowashing is a form of disinformation and it is, in a sense, viral.
2️⃣ What does it look like?
Autonowashing occurs on a spectrum. It can be subtle or egregious & objectively present. Ex:
- Headlines referring to partial automation as “self-driving”
- Influencers modeling system misuse
- Product names “misaligned w/ engineering reality” (FSD)
🔬Quick HMI Lesson🤓
To support human-automation interaction in safety-critical systems, user trust in the sys must be proportionate to the system’s capabilities. This inspires appropriate reliance. See Fig 1. from my paper
👎Distrust = disuse (loss of potential safety benefit, maker ROI)
👎Overtrust = misuse (overreliance, increased crash risk)
👍Calibrated trust = ideal (this is what we engineer for!)
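The trust-calibration idea above can be sketched in a few lines of code. This is a toy illustration of the concept, not anything from the paper: the function name, the 0–1 scales, and the tolerance value are all my own hypothetical choices, made only to show how the three outcomes (disuse, misuse, appropriate reliance) fall out of the gap between trust and capability.

```python
# Toy sketch of trust calibration (illustrative only, not from the paper).
# Both trust and capability are assumed to be on an arbitrary 0..1 scale.

def calibration(trust: float, capability: float, tolerance: float = 0.1) -> str:
    """Classify the trust-capability relationship."""
    gap = trust - capability
    if gap > tolerance:
        # Trust exceeds what the system can actually do.
        return "overtrust -> misuse (overreliance)"
    if gap < -tolerance:
        # Trust falls short of what the system can do.
        return "distrust -> disuse (lost safety benefit)"
    return "calibrated -> appropriate reliance"

# Marketing-inflated trust vs. a partially capable system:
print(calibration(trust=0.9, capability=0.5))  # overtrust -> misuse (overreliance)
print(calibration(trust=0.5, capability=0.5))  # calibrated -> appropriate reliance
```

Autonowashing, in these terms, is anything that pushes the user’s trust number up without the capability number moving.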
This is why very capable automation which *still requires human supervision* is particularly dangerous—this is how complacency sets in. Even expert, highly experienced users are at risk!
“Complacency” is a part of the human condition, and a recognized human factors limitation.
3️⃣ What are the effects?
Autonowashing leads to increased trust, which leads to an increased risk of misuse (overreliance) & therefore an increased risk of crashing.
@NTSB cites overreliance as a factor in multiple deadly crashes involving L2 automation.
Autonowashing is a fact; there is no debate over its existence. What constitutes autonowashing can be subjective, but we know it is a phenomenon occurring daily, and the visibility of its negative effects is, unfortunately, growing.
Context: At the time I had just finished my Master’s, during which I conducted a study of trust in ADAS.
I was shocked. I could not believe the things the drivers believed about this technology & I saw first hand how these beliefs translated into [bad, dangerous] behaviors.
I was witnessing a huge gap between the science, media/industry, & the public.
Driving is the most dangerous thing the avg person does & autonowashing unnecessarily contributes to the already risky public roadway environment (ironically, often under the guise of improved safety).
Others had been discussing the concept of autonowashing (not the word) & writing articles ab it long before me. But it felt stuck…it wasn’t being taken seriously & the convo needed to evolve. But how?
I decided to write a conceptual article ab this prob. I hoped that if I could make the case for autonowashing by passing a scientific peer review, the topic would be taken more seriously. After 2 review cycles, my paper was accepted by @Transport_ELS ✨
*Disclaimer* I wrote this paper before I ever worked in the automotive industry. I have never shorted any stocks.
In fact, this paper is what brought me into the automotive industry and helped align me with the right people and companies who respect HMI research & safety 🙏🏻
5️⃣ How can we stop/mitigate autonowashing?
👨🏫 Education & awareness (@PAVECampaign)
👩🏻🔧 Engineering (ODD limitations, mandatory robust DMS solutions)
🔤 Common language (driver assistance/support systems vs. autonomous systems)
👮♂️ Wake up, @FTC @NHTSAgov
🤔 Stay curious, ask Qs!
6️⃣Has it made a difference?
I've been told the needle has moved. In some cases, I've witnessed a positive impact directly, ex: @Cadillac pulling an ad which was called out as “autonowashing”, or Waymo’s @LTADpartners changing their name to distance themselves from autonowashing
Some other things that have happened the past year:
We will see what the future holds! My hope for the next phase of the journey is that my personal association with autonowashing will drop off…sort of like this. And that it will carry on, further into the public domain and into the mainstream.
On Twitter alone autonowashing has reached millions of people. I didn't do that. A small village of ppl did that: people who have not only supported & educated me, but who believe in the importance of this message & in pushing it forward.
The deployment of driving automation is being closely watched by big tech. Via a lack of regulation surrounding new, safety-critical technologies like this & a lack of action when deficiencies are noted, we are setting dangerous precedents for all kinds of future tech! 👀
Autonowashing is a concern for us all, because its consequences have the potential to affect us all.
Those who want to see driving automation advance & succeed, no matter which companies you root for, have an interest in speaking out against this issue.
Autonowashing is *not* limited to any one entity. This problem is rampant across the industry.
Tesla is discussed in relation to autonowashing, proportionately, as they continue to do the most obvious autonowashing of any OEM.
.@EuroNCAP has announced its new Assisted Driving Grading system, which takes a holistic approach to sys evaluation by including "Driver Engagement" in its rating, to "help consumers" & to "compare assistance performance @ the highest level."
This is a win for human-automation interaction/HMI researchers who have been working for decades to explain how important teaming is and the consequences of broken control loops.
This is a win against #autonowashing, and ultimately a big win for consumer transparency & safety!
Further, @EuroNCAP also released the results of their 2020 Assisted Driving Tests with the new grading system and gave ten different ADAS systems a rating:
We have different ideas about how to “solve” for L5, and various teams are all taking shots at it. In recent years, two schools of thought have emerged about how to approach solving this problem.
🧵👇
For some it is either:
1) a fundamental AI problem which needs a new approach
2) a data problem, which can be solved by more data & more simulation
Some see the greatest challenge as developing the right AI approach.
Others believe that they already have the right approach, and therefore the challenge is acquiring more (and the right) data and doing more training.
Imo, there is some truth in both schools of thought.
As someone living through this pandemic who happens to study trust, human vigilance & behavior––it's entirely unsurprising that we're unable to maintain adequate COVID-19 prevention measures. It's Psych 101, and it is why it's so important to have policies to help keep us in line.
It's the same with other safety-critical things (ex. vehicle automation); we need bounding boxes to keep us safe.
Human vigilance is like a muscle, except that instead of getting stronger with use, it weakens over time (so long as nothing bad happens).
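A tiny sketch of the vigilance-as-decaying-muscle idea, purely for intuition. This is my own illustrative model, not a validated human-factors result: the exponential form, the decay rate, and the time scale are all hypothetical assumptions chosen only to show the shape of the effect.

```python
# Toy model (illustrative only): vigilance decays with time on task
# when nothing goes wrong, rather than strengthening with use.
import math

def vigilance(t_minutes: float, v0: float = 1.0, decay_rate: float = 0.02) -> float:
    """Hypothetical exponential decay of vigilance during supervision."""
    return v0 * math.exp(-decay_rate * t_minutes)

# The longer the uneventful supervision, the weaker the vigilance:
for t in (0, 15, 30, 60):
    print(f"{t:>2} min -> vigilance {vigilance(t):.2f}")
```

The point of the shape, not the numbers: every uneventful minute of supervising a highly capable system quietly erodes the very attention that supervision depends on.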
Therefore, what has been the most disturbing is watching the trust get knocked out of the policies we need to protect us, via their politicization and constant reversal and reimplementation.
This is an academic @TheOfficialACM conference focused on interfaces and interactions in automotive applications. As you can imagine, a great focus is placed on vehicle automation.