To encourage one another to stay positive, they cite exciting corporate partnerships such as... @Goodyear Tire & Rubber.
Maybe I can help them perform a sanity check before they pin their hopes on this promising "customer".
Believers of the Goodyear/Helium partnership envision a future where your vehicle connects to the internet... through its tires.
Inspiring.
I'd hate to burst their bubble, but a Goodyear Ventures investment made to "learn about new mobility" isn't proof of real demand.
To learn more, I watched this presentation by @AbhijitCVC of Goodyear Ventures:
Does Goodyear have a plan for giving tire sensor devices their own internet connection?
Not really, says Abhijit: "Assume we have the right sensors, and we don't yet..."
This slide presents Goodyear's next steps. They're going to "explore use cases", i.e. they don't yet have a clear one.
When you're pinning your hopes on a vague, futuristic-sounding slide from a corporate VC, you're on track to be a #BloatedMVP of Goodyear Blimp proportions.
I'm not spreading gratuitous FUD here. I've just heard enough startup pitches to call out the #HollowAbstractions.
The reality on the ground is that Goodyear has no compelling use case. Neither does Helium's LoRaWAN network to date.
.@AbhijitCVC thanks for the pic of your team gluing a sensor to a tire, but the key question that remains unvalidated is whether a car's tires should be architected to connect directly to the internet using a decentralized network of LoRa hotspots.
He spends much of his time labeling and psychoanalyzing the people who disagree with him, instead of engaging with the substance: why their object-level claims are wrong and his are right.
en.wikipedia.org/wiki/Bulverism
He accuses AI doomers of being “bootleggers”, which he explains means “self-interested opportunists who stand to financially profit” from claiming AI x-risk is a serious worry:
“If you are paid a salary or receive grants to foster AI panic… you are probably a Bootlegger.”
Thread of @pmarca's logically flimsy AGI survivability claims 🧵
Claim 1:
Marc claims it’s a “category error” to argue that a math-based system will have human-like properties — that rogue AI is a 𝘭𝘰𝘨𝘪𝘤𝘢𝘭𝘭𝘺 𝘪𝘯𝘤𝘰𝘩𝘦𝘳𝘦𝘯𝘵 concept.
Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent; which one we get is an empirical question, not a category error.
Claim 2:
Marc claims rogue unaligned superintelligent AI is unlikely because AIs can "engage in moral thinking".
But the capacity for moral reasoning isn't the same as being optimized for moral ends. What happens when a superintelligent goal-optimizing AI is run with anything less than perfect morality?
That's when we risk permanently disempowering humanity.