Bold take from a guy whose "Autopilot" has been identified by the @NTSB as a contributing factor in multiple fatal crashes!
In fact, the Ford system that Elon is bashing here has the very two features that the NTSB determined could have prevented those fatal Autopilot-involved crashes: a camera-based driver monitoring system, and operational design domain limits. Autopilot still doesn't have these!
Here are the NTSB investigations of three fatal Autopilot crashes:

Josh Brown: ntsb.gov/investigations…

Jeremy Banner: ntsb.gov/investigations…

Walter Huang: ntsb.gov/investigations…

Watch the last NTSB hearing on Autopilot safety issues (Feb 2020) here:
I really hope this wasn't intentional. I really, really hope Musk just picked random footage for this "dunk"...

...but at this point, I doubt it.

So gross, on so many levels. That's one hell of a "hero" some of you have chosen...


More from @Tweetermeyer

18 Apr
Two people dead, nobody in the driver's seat. 6,000 lb experiments in half-baked, camera-only autonomy, capable of doing 0-60 in under 3 seconds, just roaming the streets. I love living in a sci-fi dystopia. click2houston.com/news/local/202…
We don't know what happened here, but we shouldn't be at all surprised that it has happened.

It's not just that a life-and-death experiment is allowed to play out, in the hands of amateurs, on public streets. It's that a chorus of ghouls cheers it to this inevitable conclusion.
Just watch: today the ghouls will all be singing from the second stanza of their hymnbook. Today will be filled with cries of "it's just driver assistance" and "the driver is always responsible."

In a week they will all be back to "it basically drives itself! The future is now!"
17 Apr
Can someone explain how humans might make this uniquely hospitable planet uninhabitable, but somehow not do the exact same thing to far, far less hospitable planets?

Please, game that out for me.
Saw a tweet yesterday comparing SpaceX to the Dutch East India Company... as a good thing.

I had to literally put my phone down. The sheer inaccuracy of the comparison (um, what resources are there to plunder on Mars?) was exceeded only by its historically illiterate amorality.
There's a real kernel of truth there though: claiming, occupying and despoiling any perceived vacuum within our grasp is a consistent human drive, as is the need to dress it up in the latest flavors. Space gives this drive new scope, while "saving" us from its consequences here.
10 Apr
Critical as I am, I generally give Tesla credit for their powertrains and Superchargers... but this 739-page thread about loss of range and/or charging speed and/or increased "vampire drain" makes me wonder. Some (understandably) angry owners. teslamotorsclub.com/tmc/threads/su…
The guy who has done more to help owners understand these problems than anyone, @wk057, is now being pushed out of the community by TMC mods. I saw the owner-investor war for TMC's soul coming years ago, and unsurprisingly the investors have won.
Here's what the conflict comes down to: owners want an open exchange of information about the products they have spent a lot of money on, while investors want to suppress or spin any information they perceive as being negative for the company. This is a big issue for a forum!
10 Apr
Reminder: radar used to be the "future of Tesla's self-driving Autopilot." If you read the afterword to the paperback edition of Ludicrous: The Unvarnished Story of Tesla Motors, you know that Tesla even spent years developing radar in-house: inverse.com/article/20833-…
If radar is so useless, why do the early Model Xs have brackets for corner radar? ebay.com/p/700484339

The TMC guys even stumbled onto the fact that Model X was supposed to have corner radar: teslamotorsclub.com/tmc/posts/3223…

Even Elon knows sensor diversity/redundancy matters.
The radar story matters because it reveals that Musk has been fumbling for a strategy even after he started taking customer cash for FSD, that he was naive enough to think Tesla could beat Bosch/Conti, and that the north star for this safety-critical tech is cost, not safety.
21 Feb
In 2015, I stumbled onto my first real Tesla story when I found they'd rather hook up Superchargers to diesel generators than make their battery swap station available. I asked their comms department about their emissions claims and the answer shocked me. bloomberg.com/opinion/articl…
Tesla keeps a running count of its "carbon impact" at tesla.com/carbonimpact. How, I asked, did they calculate this number? The answer: they simply assumed that every vehicle had zero direct or indirect emissions. But did they buy zero-emission power for all Superchargers? No.
Go back and look at the claims Tesla has made over the years and you'll find that it has always said/implied that Superchargers were zero-emission. They have even claimed, repeatedly, that they would be 100% solar, off-grid and "zombie apocalypse-proof."

dailykanban.com/2015/05/27/tes…
21 Feb
These three paragraphs, about IBM's failure to deliver on Watson Health's soaring ambitions, hold several important lessons relevant to AI and AVs. Though very different, health care and driving are both promising but deeply challenging areas for AI. wsj.com/articles/ibms-…
First: lots of data doesn't solve every problem. Particularly when the costs of failure are high, as they are in health care and driving, achieving the necessary level of consistent accuracy is very difficult. As the level of accuracy/reliability rises, the challenge deepens.
Second: The "customization problem" that Ng refers to here is one reason that most serious AV developers are pursuing SAE Level 4 (geofenced) autonomy. Limiting the domain and tailoring systems to it is key to achieving and validating safety-critical levels of AI performance.
