Autonomous vehicles were always nonsense. I wrote about them a year ago, when Uber declared massive losses. Uber's profitability story was always, "Sure, we're losing big *now*, but once we create AVs, we can fire our drivers and make a bundle."
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
But Uber never came *close* to building an AV. After blowing $2.5b, the company invented a car whose mean-distance-to-fatal-crash was *half a mile*. Uber had to *pay* another company - *$400 million!* - to take the self-driving unit off its hands. 3/
It's tempting to say that Uber just deluded itself into thinking that AVs were a viable, near-term technology. 4/
But $2.5b was a *bargain*, because it allowed the company's original investors (notably the Saudi royals) to offload their Uber shares on credulous suckers when the company IPOed. 5/
Likewise Tesla, a company that has promised fully self-driving autonomous vehicles "within two years" for more than a decade. The story that Teslas will someday drive themselves is key to attracting retail investors to the company. 6/
Tesla's overvaluation isn't solely a product of the cult of personality around Musk, nor is it just that investors can't read a balance-sheet and so miss the fact that Tesla is reliant upon selling the carbon-credits that allow gas-guzzling SUVs to fill America's streets. 7/
Key to Tesla's claims to eventual profitability was that AVs would overcome geometry itself, ending the Red Queen's Race whereby adding more cars to the road means more roads, which means everything gets farther apart, which means you need more cars - lather, rinse, repeat. 8/
Geometry hates cars, but Elon Musk hates public transit (he says you might end up seated next to "a serial killer"). So Musk spun this story where tightly orchestrated AVs would best geometry and create big cities served by speedy, individualized private vehicles. 9/
You could even make passive income from your Tesla, turning it over to drive strangers (including, presumably, serial killers?) around as a taxicab.
But Teslas are no closer to full self-driving than Ubers. In fact, *no one* has come *close* to making an AV. 10/
In a characteristically brilliant and scorching article for @business, Max @Chafkin takes stock of the failed AV project:
Chafkin calculates that the global R&D budget for AVs has now exceeded *$100 billion*, and demonstrates that we have next to nothing to show for it, and that whatever you think you know about AV success is just spin, hype and bullshit. 12/
Take the much-vaunted terribleness of human drivers, which the AV industry likes to tout. It's true that the other dumdums on the road cutting you off and changing lanes without their turn-signals are pretty bad drivers, but actual, professional drivers are *amazing*. 13/
The average school-bus driver clocks up *500 million miles* without a fatal crash (but of course, bus drivers are part of the public transit system). 14/
Even dopes like you and me are better than you may think - while cars *do* kill the shit out of Americans, it's because Americans drive *so goddamned much*. 15/
US traffic deaths are a mere one per 100 million miles driven, and most of those deaths are due to recklessness, not inability. Drunks, speeders, texters and sleepy drivers cause traffic fatalities - they may be skilled drivers, but they are also reckless. 16/
But even the most reckless driver is safer than a driverless car, which "lasts a few seconds before crapping out." The best robot drivers are Waymos, which mostly operate in the sunbelt, "because they still can’t handle weather patterns trickier than Partly Cloudy." 17/
Waymo claims to have driven 20m miles - that is, 4% of the distance we'd expect a human school-bus driver to go before having a fatal wreck. Tesla, meanwhile, has stopped even reporting how many miles its autopilot has managed on public roads. 18/
The last time it disclosed, in 2019, the total was *zero*.
Using "deep learning" to solve the problems of self-driving cars is a dead-end. 19/
As NYU psych prof Gary Marcus told Chafkin, "deep learning is something similar to memorization...It only works if the situations are sufficiently akin." 20/
Which is why self-driving cars are so useless when they come up against something unexpected - human drivers weaving through traffic, cyclists, an eagle, a drone, a low-flying plane, a deer, even some pigeons on the road. 21/
Self-driving car hucksters call this "the pogo-stick problem" - as in "you never can tell when someone will try to cross the road on a pogo-stick." They propose coming up with strict rules for *humans* to make life easier for *robots*. 22/
But as stupid as this is, it's even stupider than it appears at first blush. It's not that AVs are confused by pogo sticks - they're confused by *shadows* and *clouds* and *squirrels*. 23/
They're confused by left turns that are a little different than the last left turn they tried.
If you've been thinking that AVs were right around the corner, don't feel too foolish. The AV companies have certainly acted like they believed their own bullshit. 24/
Chafkin reminds us of the high-stakes litigation when AV engineer Anthony Levandowski left Google for Uber and was sued for stealing trade secrets. 25/
The result was millions in fines (Levandowski declared bankruptcy) and even a prison sentence for Levandowski (Trump pardoned him, seemingly at the behest of Peter Thiel and other Trumpist tech cronies). 26/
Why would companies go to all that trouble if they weren't serious about their own claims?
It's possible that they are, but that doesn't mean we have to take those claims at face-value ourselves. Companies often get high on their own supply. 27/
The litigation over Levandowski can be thought of as a species of #CritiHype, @STS_News's extraordinarily useful term for criticism that serves to bolster the claims of its target:
Today, Levandowski has scaled back his plans to build autonomous vehicles. Instead, he's built autonomous dump-trucks that never leave a literal sandbox, and trundle back and forth on the same road all day, moving rocks from a pit to a crusher. 30/
$100 billion later, that's what the AV market has produced. 31/
Shelter is a human necessity and a human right. The decision to turn housing into the major speculative asset class for retail investors and Wall Street has made housing a disaster for people *with* houses - and a catastrophe for those without. 1/
America has a terrible, accelerating homelessness problem. Many of us share this problem - obviously, people without houses have the worst of it. But no one benefits from mass homelessness - it is a stain on the human soul to live among people who are unsheltered. 3/
Machine learning's promise is decisions at scale: using software to classify inputs (and, often, act on them) at a speed and scale that would be prohibitively expensive or even impossible using flesh-and-blood humans. 1/
There aren't enough idle people to train half of them to read all the tweets in the other half's timeline and put them in ranked order based on their predictions about the ones you'll like best. ML promises to do a good-enough job that you won't mind. 2/
Turning half the people in the world into chauffeurs for the other half would precipitate civilizational collapse, but ML promises self-driving cars for everyone affluent and misanthropic enough that they don't want to and don't have to take the bus. 3/