Just wanted to add a little bit of context to this, to help make sense of what Shashua was saying. I'm hoping to go into a little more depth with him on this in a future episode of @TheAutonocast, but for now you can watch the whole thing here:
The question here is: what is the right way to approach a massively complex machine learning problem with an unforgiving standard for reliability/accuracy? If you have access to fleet data, do you use it to find and train for edge cases (Tesla) or build maps (Mobileye)?
The core of Shashua's argument, as I understand it, is that using fleet data to identify and train for edge cases is of limited utility because you ultimately run into overfitting. Training for specific cases makes your solution less generalizable. en.wikipedia.org/wiki/Overfitti…
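The overfitting trade-off Shashua is pointing at can be sketched in a few lines of Python (a toy illustration of the statistical concept, nothing to do with Mobileye's or Tesla's actual stacks): a high-capacity model fit to a handful of training points nails those points exactly, but does so by memorizing rather than generalizing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship stands in for "the general driving task".
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0.05, 0.95, 50)   # unseen points from the same distribution
y_test = 2 * x_test                    # noise-free ground truth

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)   # generalizable model
overfit = np.polyfit(x_train, y_train, deg=9)  # one parameter per training point

print("train error, deg 1:", mse(simple, x_train, y_train))
print("train error, deg 9:", mse(overfit, x_train, y_train))  # ~0: memorized the noise
print("test  error, deg 1:", mse(simple, x_test, y_test))
print("test  error, deg 9:", mse(overfit, x_test, y_test))    # typically much worse off the training grid
```

The degree-9 fit "solves" every training case it has seen, which is the analogue of training for specific fleet-reported edge cases: the training error goes to zero while behavior between the memorized cases gets worse, not better.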
The alternative, using the same fleet data to create "maps," approaches the same problem from a different direction: instead of tweaking the algos in a million different directions, you reduce the problem space by creating a semantically segmented "map."
So, instead of having to run inference on EVERYTHING the vehicle sees, you have a fully automated, crowdsourced map that provides validated certainty about fixed features. That reduces the number of things you have to identify/classify/predict, which in turn reduces failure rates.
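One way to picture the "reduce the problem space" idea is a simple filter (my own toy sketch, not Mobileye's actual REM pipeline — the feature names and data structures here are invented): anything already in the validated map is taken as known, so only the dynamic residue reaches the failure-prone online inference step.

```python
# Hypothetical sketch: a pre-validated map of static features lets the car
# skip online classification of everything it already "knows", leaving only
# dynamic objects for the probabilistic perception step.
VALIDATED_MAP = {  # crowdsourced, verified static world (invented identifiers)
    "lane_marking_17", "stop_sign_4", "traffic_light_9", "curb_segment_2",
}

def needs_online_inference(detections):
    """Return only the detections the map can't account for."""
    return [d for d in detections if d not in VALIDATED_MAP]

frame = ["stop_sign_4", "pedestrian_a", "lane_marking_17",
         "cyclist_b", "traffic_light_9"]

print(needs_online_inference(frame))  # only the dynamic actors remain
```

Five detections in, two out: the perception problem the system has to solve per frame shrinks to the things a map cannot know in advance.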
I certainly can't tell you that one of these approaches is definitely better than the other, but on a philosophical level the Mobileye approach seems built on an acceptance of the fundamental limitations/challenges of probabilistic systems in safety-critical applications.
ML tends to be a "90-10" (or even "99-1") solution: it's relatively easy to solve 90% of a problem with ML, but it gets dramatically harder as you approach 100%. This fundamentally suggests that reducing the problem area is likely to work better than endlessly tweaking algos.
This is a super high-level explanation, and it's just my understanding right now. As I mentioned, I look forward to focusing on this issue more in a future conversation with Shashua because I think it gets to the heart of some thorny but important issues in AV development.
Final note, for now: the comments on that InsideEVs story dramatically illustrate the challenges we face in public education around AV technology, and why I really need to get off Twitter and do my damn job already!
I'm so old, I remember when @montana_skeptic's oil/gas investments (part of a diversified family office portfolio he managed for his employer) were touted as proof positive that he was an utterly untrustworthy shill for planet-destroying fossil fuels. 🤔 electrek.co/2018/07/24/tes…
Like the Sandy Munro thing, the most galling part of this is not Musk's/Munro's behavior, but the shameless hypocrisy of his fanbase.
It just goes to show that, behind the facade of smarmy self-righteousness, the culture emanating from Tesla is utterly ruthless and unprincipled.
Waymo CEO @johnkrafcik says Tesla is "absolutely not a competitor" to his autonomous drive technology company.
I talk to a lot of AV developers on a regular basis, and I can't think of a single one who does think of Tesla as a competitor. manager-magazin.de/unternehmen/au…
The response that "Teslas can self-drive everywhere and Waymo can't" based on FSD beta footage is laughable. In case you haven't seen this, here is a playlist of Waymo driving 1,000 miles across California without interventions... in 2009! youtube.com/playlist?list=…
If some long drives without interventions meant a company was on the cusp of real driverless L5 autonomy, Waymo would have got there a long time ago. It's just not the case: the more "nines" of reliability you stack up, the harder it gets. That's why AVs are later than predicted.
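The "nines" point is easy to make concrete with back-of-envelope arithmetic (illustrative numbers of my own, not anyone's actual safety case): each added nine of per-mile reliability multiplies by ten the failure-free mileage you would need just to demonstrate it.

```python
# Back-of-envelope: miles between failures implied by each added "nine" of
# per-mile reliability. Illustrative only -- real AV safety cases are
# statistical arguments over far larger bodies of fleet miles.
for nines in range(1, 7):
    reliability = 1 - 10 ** -nines
    miles_between_failures = 10 ** nines
    print(f"{reliability:.6f} reliable per mile -> "
          f"~{miles_between_failures:,} miles between failures")
```

A 1,000-mile interview-free demo sits at three nines; human-comparable safety is several orders of magnitude beyond that, and each order of magnitude is its own mountain. That's the gap between an impressive video and a driverless product.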
If you haven't been to the Rouge, and won't be able to make it any time soon due to limited tour capacities, you can check out my write-up from the 2019 Magical Mystery Plant Tour: thedrive.com/tech/26376/mag…
I'm still amazed that the Magical Mystery Plant Tour happened. After Tesla started building plants in a tent, @BertelSchmitt and I put together a 17-day round-the-world tour of 7 auto production locations and a whole group of Twitter folks shelled out thousands to come along.
It was an amazing, exhausting trip from Detroit to Japan to China to Germany to the UK, with an amazing group of individuals from around the world (coming from Brazil, Malaysia, Austria, Florida and beyond). We saw steel, aluminum, and CFRP made into trucks, sports cars and taxis.
Interesting: eMMC failures in Tesla's touchscreen seem to create a whole host of downstream issues, causing NHTSA to upgrade its investigation. Perhaps running everything through a single screen isn't always advantageous?
"Tesla provided ODI with 2,399 complaints and field reports, 7,777 warranty claims, and 4,746 non-warranty claims related to MCU replacements. The data show failure rates over 30 percent in certain build months and accelerating failure trends after 3 to 4 years-in-service."
You can read the latest ODI Resume, which upgraded the previous Preliminary Evaluation to an Engineering Analysis here [all links to PDF]: static.nhtsa.gov/odi/inv/2020/I…
At "Autonomy Day" in 2019, Tesla influencers claimed to have had "Full Self-Driving" rides, but when regulators asked why no autonomous miles were logged, Tesla clarified that "Full Self-Driving" is actually a Level 2 driver assistance system.
Last night it happened again: the influencers claimed "Full Self-Driving is here" only for NHTSA to release a statement saying they were briefed by Tesla who apparently told them that "Full Self-Driving" is in fact a driver assistance system reuters.com/article/us-tes…
For what it's worth, the SAE guidelines for public road testing (J3018) say that prototype automated driving systems should be classified based on their intended automation level when complete. Needless to say, it's unlikely SAE foresaw consumers doing the testing!
Still trying to wrap my brain around the fact that Elon Musk said he wouldn't put lidar in Teslas even if it were free.
Seriously, the only real problem with lidar is that it's expensive. Otherwise it's an extremely powerful sensor modality that complements vision and radar really well, and more modalities basically always means more safety. If there's some other downside, nobody's mentioned it.
What it comes down to is: the question was hypothetical and Tesla won't have lidar in its vehicles, so Musk has to talk it down. If that hurts his credibility with people who actually understand AV technology it's no huge loss... they didn't buy his FSD narrative anyway.