Does it matter what you call a Level 2 driver assistance system? A novel study from the AAA Foundation for Traffic Safety shows that it definitely does, further validating the concerns @lizadixon voiced in her influential #Autonowashing paper aaafoundation.org/impact-of-info…
Basically, AAA looked at users' mental models and behavior when two groups used the same Level 2 system: one group was told it was called DriveAssist, the other AutonoDrive (it was actually SuperCruise lol). The findings were pretty conclusive: names drive behavior.
Folks... this is not good. It means branding is more powerful than even our own first-hand experience of using a system. Everyone is going to say "yeah, but I'm not THAT dumb," but scientifically speaking you almost certainly are.
This is one of the reasons that pushing software called "feature complete Full Self-Driving" to untrained consumers is a scary prospect we're all going to have to come to terms with soon. With that name, people are going to make massive assumptions about the software's capability.
That assumption of capability means people will engage in riskier behavior than they otherwise would. Worse, as they use the system, those riskier behaviors are likely to increase, not decrease (to be fair, this may not hold if FSD is less reliable than SuperCruise).
Tesla's communications, particularly the CEO quotes/tweets that get the most attention, always emphasize the capabilities of Autopilot/FSD and not its limitations. That's happening even more with FSD, which hasn't even been released yet. Expectations are sky-high, so risk is too.
Since someone will inevitably attack the author rather than the research (that's like 80% of the Tesla stan playbook), it's worth noting that the paper builds on past research into naming and user expectations/safety in ADAS. Specifically:
• AAA, 2019
• Abraham, Seppelt et al., 2017
• Nees, 2018
• Teoh, 2020

More from @Tweetermeyer

13 Oct
Imagine this: in the midst of the space race, with the US and USSR pumping billions into moon landing missions, Boeing suddenly claims they could land people on the moon in a modified 707.

That's roughly how Tesla's Full Self-Driving claims come across to AV developers.
AVs are taking longer to reach maturity than a lot of people expected, and that's with a ton of super high-end cameras (>20 in the new Waymo stack) plus 360-degree coverage from short- and long-range lidar and high-performance radar. Typical sensor suites cost six figures per car!
To see Tesla's claims as credible, we're forced to believe that the companies who have been leading in this technology could be using hardware that costs at least an order of magnitude less... but either don't realize it, or aren't smart enough to make it work.
13 Oct
Tesla has been promising a truly autonomous driving system for four years (as of next Monday) and I for one am looking forward to moving past the "just imagine" phase and seeing an actual system in action. Enough table talk, it's time to actually show your cards... let's see 'em.
Remember all the hype about "Smart Summon"? The buildup was insane, with Musk claiming that it would "really illustrate the value of having a massive fleet"... and then it came out, the flood of videos ranged from hilarious to terrifying, and nobody spoke of it again.
Of course, there is a difference this time: people will be using "Full Self-Driving" at higher speeds and in complex situations to see what it's capable of, and the risk of an injury/death and/or an owner being used as a "moral crumple zone" is correspondingly higher.
13 Oct
Adversarial attacks are not likely to be common, but the vulnerability shows how important it is to have diverse and redundant sensor modalities in autonomous vehicles. If you can create a safety risk by fooling one camera, you were in trouble long before the attack.
Tesla's approach to full autonomy is seen as plausible by people who know a little bit about machine learning and absolutely nothing about safety critical systems architecture and functional safety... which is a lot of people these days!
Try doing some basic functional safety thinking yourself: imagine a totally driverless Tesla driving down the road, when something goes wrong. For example, a camera goes out due to humidity or mud. What might happen? Play it out.
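To make the "play it out" exercise concrete, here's a toy sketch of the kind of fault-reaction logic functional safety demands. This is my illustration, not any real AV stack; every name in it is hypothetical. The point: for each sensor that can fail, the system needs a defined safe response, and a single-modality design leaves very few branches to fall back on.

from enum import Enum, auto

class Fallback(Enum):
    CONTINUE = auto()        # full capability remains
    DEGRADED = auto()        # slow down, increase following distance
    MINIMAL_RISK = auto()    # pull over / stop safely

def plan_fallback(camera_ok: bool, radar_ok: bool, lidar_ok: bool) -> Fallback:
    """Pick a fallback level based on which sensing modalities still work."""
    healthy = sum([camera_ok, radar_ok, lidar_ok])
    if healthy == 3:
        return Fallback.CONTINUE
    if healthy == 2:
        return Fallback.DEGRADED      # cross-checking is lost, coverage remains
    return Fallback.MINIMAL_RISK      # too little left to keep driving on

# Lose one modality and there's still a safe branch to take:
print(plan_fallback(camera_ok=False, radar_ok=True, lidar_ok=True))    # Fallback.DEGRADED
# A camera-only design that loses its cameras has nothing left:
print(plan_fallback(camera_ok=False, radar_ok=False, lidar_ok=False))  # Fallback.MINIMAL_RISK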
10 Oct
You know how I know Teslas will never be "Full Self-Driving"?

Because the cameras are easily blinded by sun, rain, fog, mud and snow. Even humidity and temperature changes take them out. Also, the radar unit isn't heated so snow and ice can take it out.

Level 5 secured.
This is just scratching the surface; there's an almost endless supply of these reports: daytime, nighttime, good weather, bad weather. Tesla's hardware suite doesn't have sufficient sensor redundancy/diversity, let alone the automated cleaning/heating solutions that real AVs have.
9 Oct
It's kind of adorable when people who subordinate 100% of their critical faculties to blind faith in Elon Musk think they can be effective at persuasion. Like, if I were going to be convinced by his arguments that would have happened when he made them in the first place!
It's also adorable when the fanboys have no idea that their faith puts them at odds with the scientific consensus around autonomous drive technology, to no less of a degree than climate deniers are with climate science. Maybe slightly less so than flat earthers, but not much.
This is the fascinating contradiction at the heart of Musk's appeal: being a fan of his makes people feel smart in the "I effing love science" way, but the relationship he demands (or his community cultivates) is rooted in faith, not critical thought or independent learning.
15 Sep
Wow, this is huge: the safety driver who was behind the wheel the night Elaine Herzberg was hit and killed by an Uber self-driving test vehicle is being charged with negligent homicide. Whichever way this case goes, it's going to set an important precedent.
What makes this case so tough: on the one hand, this safety driver was hired to monitor a development vehicle during testing and ensure its safety... but on the other hand we know that this is an almost impossible task to sustain, and distraction was inevitable.
To flesh out the second half of that: Uber had this safety driver working 10 hours per shift, at night, all alone, with no driver monitoring. There's a good deal of scientific research that suggests this set her up to inevitably fail. More on that here👇
