Tesla has been promising a truly autonomous driving system for four years (as of next Monday) and I for one am looking forward to moving past the "just imagine" phase and seeing an actual system in action. Enough table talk, it's time to actually show your cards... let's see 'em.
Remember all the hype about "Smart Summon"? The buildup was insane, with Musk claiming that it would "really illustrate the value of having a massive fleet"... and then it came out: the flood of videos ranged from hilarious to terrifying, and nobody spoke of it again.
Of course, there is a difference this time: people will be using "Full Self-Driving" at higher speeds and in complex situations to see what it's capable of, and the risk of injury or death, and of an owner being used as a "moral crumple zone," is correspondingly higher.
But that's the situation that's been created here: since Musk and his chorus of enablers won't engage with substantive criticisms of Tesla's approach (which is a dramatic outlier in the AV industry), the only way to prove this is a bad idea is to let them endanger the public.
Over the past 4 years, I've heard far more concerns from AV developers about the possibility that Tesla's approach could prompt a regulatory crackdown than I've heard concerns that Tesla will put them out of business by proving it can do real L4/L5 autonomy with ADAS hardware.
Make no mistake: if Tesla can prove it's created a safe, generalizable autonomous drive system with some non-HD cameras and one ADAS-grade radar, every other AV company disappears overnight. If you don't need geofences, HD maps and six-figure sensor suites, the AV sector goes poof.
Go ask the people working on the problem why they bet on those things. Ask them why they didn't pursue a camera-only approach. Heck, ask Mobileye why they won't deploy a camera-only system even though they've proven a level of camera-only performance that Tesla can only dream of.
It's not because they are dumb, or because it never occurred to them. It's because they are so invested in the technology that they won't risk its long-term prospects by playing fast and loose with safety.
That's why they worry about Tesla: this FSD thing could poison the well.
Imagine this: in the midst of the space race, with the US and USSR pumping billions into moon landing missions, Boeing suddenly claims it can land people on the moon in a modified 707.
That's roughly how Tesla's Full Self-Driving claims come across to AV developers.
AVs are taking longer to reach maturity than a lot of people expected, and that's using a ton of super high-end cameras (>20 in the new Waymo stack) plus 360-degree coverage from short- and long-range lidar and high-performance radar. Typical sensor suites cost 6 figures per car!
To see Tesla's claims as credible, we're forced to believe that the companies who have been leading in this technology could be using hardware that costs at least an order of magnitude less... but either don't realize it, or aren't smart enough to make it work.
Adversarial attacks are not likely to be common, but the vulnerability shows how important it is to have diverse and redundant sensor modalities in autonomous vehicles. If you can create a safety risk by fooling one camera, you were in trouble long before the attack.
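To make the redundancy point concrete, here's a minimal sketch of the kind of cross-check that diverse modalities enable: a single fooled or blinded camera can't declare the path clear on its own. The names, structure, and voting rule are hypothetical, not drawn from any real AV stack.

```python
# Illustrative sketch only: fail-safe cross-checking across independent sensor
# modalities. The names and logic here are hypothetical, not any real AV stack.
from dataclasses import dataclass


@dataclass
class Detection:
    obstacle_ahead: bool  # did this modality report an obstacle in the path?
    healthy: bool         # did the sensor pass its own self-check?


def path_is_clear(camera: Detection, radar: Detection, lidar: Detection) -> bool:
    """Declare the path clear only if every healthy modality agrees it's clear."""
    healthy = [m for m in (camera, radar, lidar) if m.healthy]
    if len(healthy) < 2:
        # Too few independent sources of evidence: fail safe, assume not clear.
        return False
    return all(not m.obstacle_ahead for m in healthy)


# A fooled camera can't clear the path on its own if radar and lidar disagree.
print(path_is_clear(
    camera=Detection(obstacle_ahead=False, healthy=True),
    radar=Detection(obstacle_ahead=True, healthy=True),
    lidar=Detection(obstacle_ahead=True, healthy=True),
))  # -> False
```

The point isn't the code, it's the architecture: with only one modality there is nothing to cross-check against, so fooling that one sensor is fooling the whole system.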
Tesla's approach to full autonomy is seen as plausible by people who know a little bit about machine learning and absolutely nothing about safety-critical systems architecture and functional safety... which is a lot of people these days!
Try doing some basic functional safety thinking yourself: imagine a totally driverless Tesla driving down the road, when something goes wrong. For example, a camera goes out due to humidity or mud. What might happen? Play it out.
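As a toy illustration of how that playing-it-out goes (the capabilities and responses below are mine, for reasoning purposes, not any company's actual fallback policy), compare what a vehicle can still do after losing a camera depending on whether it has redundant modalities and cleaning hardware:

```python
# Illustrative sketch only: playing out a single failure mode ("camera blinded
# by mud or humidity") the way a basic failure-mode analysis would.
# The capabilities and responses are hypothetical, for reasoning purposes.
def response_to_camera_loss(has_redundant_modality: bool, has_cleaning: bool) -> str:
    """What can the vehicle still do once a camera is blinded?"""
    if has_cleaning:
        return "clean the lens, continue in a degraded mode, keep monitoring"
    if has_redundant_modality:
        return "lean on lidar/radar coverage and execute a minimal-risk maneuver"
    return "no independent view of that sector: even stopping safely is a guess"


# A purpose-built L4 stack vs. a camera-reliant consumer car, per the argument above:
print(response_to_camera_loss(has_redundant_modality=True, has_cleaning=True))
print(response_to_camera_loss(has_redundant_modality=False, has_cleaning=False))
```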
Does it matter what you call a Level 2 driver assistance system? A novel study from the AAA Foundation for Traffic Safety shows that it definitely does, further validating the concerns @lizadixon voiced in her influential #Autonowashing paper aaafoundation.org/impact-of-info…
Basically, AAA looked at user mental models and behavior when two groups used the same Level 2 system... with one group they called it DriveAssist and the other they called it AutonoDrive (it was actually SuperCruise lol). The findings were pretty conclusive: names drive behavior.
Folks... this is not good. Basically, branding is more powerful than even our own experience using a system. Everyone is going to say "yeah, but I'm not THAT dumb" but scientifically speaking you almost certainly are.
You know how I know Teslas will never be "Full Self-Driving"?
Because the cameras are easily blinded by sun, rain, fog, mud and snow. Even humidity and temperature changes take them out. Also, the radar unit isn't heated, so snow and ice can take it out.
This is just scratching the surface; there's an almost endless supply of these reports. Daytime, nighttime, good weather, bad weather. Tesla's hardware suite doesn't have sufficient sensor redundancy/diversity, let alone the automated cleaning/heating solutions that real AVs have.
It's kind of adorable when people who subordinate 100% of their critical faculties to blind faith in Elon Musk think they can be effective at persuasion. Like, if I were going to be convinced by his arguments that would have happened when he made them in the first place!
It's also adorable when the fanboys have no idea that their faith puts them at odds with the scientific consensus around autonomous drive technology, to no less of a degree than climate deniers are with climate science. Maybe slightly less so than flat earthers, but not much.
This is the fascinating contradiction at the heart of Musk's appeal: being a fan of his makes people feel smart in the "I effing love science" way, but the relationship he demands (or his community cultivates) is rooted in faith, not critical thought or independent learning.
Wow, this is huge: the safety driver who was behind the wheel the night Elaine Herzberg was hit and killed by an Uber self-driving test vehicle is being charged with negligent homicide. Whichever way this case goes, it's going to set an important precedent.
What makes this case so tough: on the one hand, this safety driver was hired to monitor a development vehicle during testing and ensure its safety... but on the other hand we know that this is an almost impossible task to sustain, and distraction was inevitable.
To flesh out the second half of that: Uber had this safety driver working 10 hours per shift, at night, all alone, with no driver monitoring. There's a good deal of scientific research that suggests this set her up to inevitably fail. More on that here👇