Imagine this: in the midst of the space race, with the US and USSR pumping billions into moon landing missions, Boeing suddenly claims they could land people on the moon in a modified 707.

That's roughly how Tesla's Full Self-Driving claims come across to AV developers.
AVs are taking longer to reach maturity than a lot of people expected, and that's with a ton of super high-end cameras (>20 in the new Waymo stack), 360-degree coverage from short- and long-range lidar, and high-performance radar. A typical sensor suite costs six figures per car!
To see Tesla's claims as credible, we're forced to believe that the companies leading this technology could achieve the same results with hardware that costs at least an order of magnitude less... but either don't realize it, or aren't smart enough to make it work.
The leaders of these other companies were almost all pioneers of modern AV tech who broke new ground during the DARPA challenges. Some of them have already shipped autonomous products, like the Caterpillar mining trucks Argo's Bryan Salesky developed post-gazette.com/business/tech-…
The question I can't get a straight answer to is: why would these people be developing a (metaphorical) spaceship if an airplane could get them to the moon? Why do all of the pioneers in AV technology agree that it requires sensors and features that add huge expense and dev time?
Equally puzzling: why aren't there more companies pursuing L5 autonomous systems that use such low-cost hardware? They exist, Wayve being one example... but they aren't getting the big investments, or making groundbreaking demonstrations. Why not? venturebeat.com/2019/11/17/way…
Mobileye, the leader in computer vision for automated driving, has been working this problem for over 20 years, collects 3.7m miles of sensor data every day, and can demonstrate impressive camera-only AV performance... but says safety demands radar/lidar. arstechnica.com/cars/2020/01/i…
Why would the leader in camera-only driving automation say it won't put a camera-only AV into production? The answer lies in Intel/Mobileye's work to build a safety case for their vehicles, available in white paper form here [PDF]: arxiv.org/pdf/1708.06374…
Here is a selection from that white paper that gives you a sense of what Intel/Mobileye is trying to accomplish with their AV development: they are balancing safety and scalability (i.e. cost), but they need to validate to three orders of magnitude better safety than human drivers.
Note: this isn't some regulatory standard; no regulator has promulgated an "incidents per mile driven" standard. This is the level of safety they think they need to achieve in order to avoid jeopardizing public acceptance of autonomous vehicles. This is crucial!
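To make that target concrete, here's a quick back-of-envelope sketch (my arithmetic: the ~10^-6 fatalities per hour baseline for human driving is the figure the white paper cites, and the 30 mph average speed is an assumption for illustration):

```python
# Back-of-envelope: what "3 orders of magnitude safer than humans" implies.
human_fatality_rate = 1e-6                     # fatalities per hour of human driving
av_target_rate = human_fatality_rate / 1_000   # the white paper's 1,000x target

avg_speed_mph = 30.0                           # assumed average speed (illustrative)
miles_per_fatality = avg_speed_mph / av_target_rate

print(f"AV target: {av_target_rate:.0e} fatalities per hour")         # 1e-09
print(f"i.e. roughly 1 fatality per {miles_per_fatality:.0e} miles")  # 3e+10
```

In other words, the target works out to something like one fatality per tens of billions of miles, which is what makes validation such a monster.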
To prove that your system actually meets a given safety threshold, the miles over which you validate have to be representative of the operational domain. That's possible with L3/4, but how do you ensure that an L5 system has been validated in any/all conditions it might see?
This is a critical question, because an AV system can't actually be deployed as intended (without an occupant, for example) until you can validate its safety in a domain-representative way. You can't just drive around the block a billion times! Level 5 makes that part muuuch harder.
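Just to show the scale of the mileage problem, here's a hedged sketch using the standard zero-failure ("rule of three") bound; this is my illustration of the statistics, not a method from the thread or the white paper:

```python
import math

def miles_to_demonstrate(target_rate: float, confidence: float = 0.95) -> float:
    """Failure-free miles needed to show the true failure rate is below
    target_rate (failures per mile) at the given confidence, assuming
    failures arrive as a Poisson process: n >= -ln(1 - confidence) / rate."""
    return -math.log(1.0 - confidence) / target_rate

# US human drivers average roughly 1 fatality per 100 million miles.
print(f"Match humans:  ~{miles_to_demonstrate(1e-8):.1e} miles")   # ~3.0e+08
# A target 1,000x safer than that:
print(f"1,000x better: ~{miles_to_demonstrate(1e-11):.1e} miles")  # ~3.0e+11
```

And those hundreds of billions of failure-free miles only bound the average rate; they say nothing about rare conditions the fleet never saw, which is exactly the domain-representativeness problem.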
Until you validate safety across the domain, "safety drivers" will be the only thing ensuring safety. As the thread linked here shows, AV developers use highly trained teams of at least two people with rotating responsibilities to prevent inevitable complacency.
Using far less capable hardware, less (or in some cases, no) sensor redundancy, no HD maps, and unvalidated software, with only untrained consumers as a safety net, suggests that the key advantage lies not in Tesla's technology but in its risk tolerance. See: finance.yahoo.com/news/happened-…
The critical point: if this wild gamble doesn't pay off, it could profoundly affect public perceptions of autonomous drive technology. This isn't just a bet on a theory or a stock, it's gambling with human lives and the future of the most important technology of this century.
That's why I want to make sure that we're going into this with our eyes open: it's super important that the public understands that Tesla's approach diverges wildly from the industry consensus, on multiple levels. Otherwise, the entire sector will pay for risks that the rest of the industry refuses to take.
We've already seen this happen: a @faizsays story about the "techlash" and AVs showed that even though public concerns about tech culture apply more to Tesla's approach, those concerns get attached to "real AVs" made by firms with strong safety cultures. thedrive.com/tech/30151/sil…
So forgive me in advance for tweeting a lot about this issue over the coming weeks and months. I genuinely hope the rollout of "Full Self-Driving" is safe, but if it's not we're going to have to work hard to make sure that the takeaway isn't "all AVs are unsafe/doomed" #NotAllAVs
