Liron Shapira
Aug 2, 2022
.@Helium supporters have been accusing me of FUD.

To encourage one another to stay positive, they cite exciting corporate partnerships such as... @Goodyear Tire & Rubber.

Maybe I can help them perform a sanity check before they pin their hopes on this promising "customer".
Believers in the Goodyear/Helium partnership envision a future where your vehicle connects to the internet... through its tires.

Inspiring.

I'd hate to burst their bubble by pointing out that a Goodyear Ventures investment made to "learn about new mobility" isn't proof of real demand.
To learn more, I watched this presentation by @AbhijitCVC of Goodyear Ventures:

Does Goodyear have a plan for giving tire sensor devices their own internet connection?

Not really, says Abhijit: "Assume we have the right sensors, and we don't yet..."
This slide presents Goodyear's next steps. They're going to "explore use cases", i.e. they don't yet have a clear one.

When you're pinning your hopes on a vague, futuristic-sounding slide from a corporate VC, you're on track to be a #BloatedMVP of Goodyear Blimp proportions.
I'm not spreading gratuitous FUD here. I've just heard enough startup pitches to call out the #HollowAbstractions.

The reality on the ground is that Goodyear has no compelling use case. Neither does Helium's LoRaWAN network to date.

I fixed that tweet for you, @AbhijitCVC.
.@AbhijitCVC thanks for the pic of your team gluing a sensor to a tire, but the key question that remains unvalidated is whether a car's tires should be architected to connect directly to the internet using a decentralized network of LoRa hotspots.

More from @liron

Oct 29
Today's Extropic launch raises some new red flags.

I started following this company when they refused to explain the input/output spec of what they're building, leaving us waiting for clarification.

Here are 3 red flags from today:

1. From extropic.ai/writing/inside…
"Generative AI is Sampling. All generative AI algorithms are essentially procedures for sampling from probability distributions. Training a generative AI model corresponds to inferring the probability distribution that underlies some training data, and running inference corresponds to generating samples from the learned distribution. Because TSUs sample, they can run generative AI algorithms natively."

This is a highly misleading claim about the algorithms that power the most useful modern AIs, on the same level of gaslighting as calling the human brain a thermodynamic computer. IIUC, as far as anyone knows, the majority of AI computation work doesn't match the kind of input/output that you can feed into Extropic's chip.
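
To make the quote's framing concrete, here's a toy version of "training infers a distribution, inference samples from it". A minimal Python sketch of my own, nothing to do with Extropic's actual stack:

```python
# Toy version of the quoted framing: "training" = infer the distribution
# underlying the data; "inference" = sample from the learned distribution.
# Illustrative only -- not Extropic's API or algorithms.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.5, size=10_000)  # stand-in "training data"

# "Training": fit a Gaussian to the data.
mu, sigma = data.mean(), data.std()

# "Inference": draw fresh samples from the fitted distribution.
samples = rng.normal(mu, sigma, size=5)
print(f"learned mu={mu:.2f}, sigma={sigma:.2f}")
print("samples:", samples.round(2))
```

The framing holds in this toy sense, but it elides where the FLOPs go in a modern LLM: almost all of the compute is a deterministic forward pass of matrix multiplications, with sampling reduced to a cheap final step over the output logits. Hardware that natively accelerates sampling doesn't obviously touch the expensive part.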

The page says:
"The next challenge is to figure out how to combine these primitives in a way that allows for capabilities to be scaled up to something comparable to today’s LLMs. To do this, we will need to build very large TSUs, and invent new algorithms that can consume an arbitrary amount of probabilistic computing resources."

Do you really need to build large TSUs to research whether LLM-like applications could benefit from this hardware? I would've thought it'd be worth spending a couple million dollars investigating that question via a combination of theory and modern cloud supercomputing hardware, instead of spending over $30M on building hardware that might be a bridge to nowhere.

Their own documentation for THRML, their open-source library, says:
"THRML provides GPU‑accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware."

So you're saying you don't yet know whether your hardware primitives could *in principle* be applied toward useful applications of some kind, and you created this library to help do that kind of research using today's GPUs…

Why not release THRML earlier, do the bottleneck research you say needs doing earlier, and engage the community to help answer this key question by now? Why wait until the launch of this extremely niche, tiny-scale hardware prototype to come forward about this make-or-break bottleneck, and only now publicize your search for potential partners with relevant "probabilistic workloads", when the cost of waiting was $30M and 18 months?

2. From extropic.ai/writing/tsu-10…:
"We developed a model of our TSU architecture and used it to estimate how much energy it would take to run the denoising process shown in the above animation. What we found is that DTMs running on TSUs can be about 10,000x more energy efficient than standard image generation algorithms on GPUs."

I'm already seeing people on Twitter hyping the 10,000x claim. But anyone who's followed the decades-long saga of quantum computing companies claiming "quantum supremacy" with similar hype figures knows how much care needs to go into defining that kind of benchmark.

In practice, it tends to be extremely hard to point to situations where a classical computing approach *isn't* much faster than the claimed "10,000x faster thermodynamic computing" approach. The Extropic team knows this, but opted not to elaborate on the kind of conditions that could reproduce this hype benchmark that they wanted to see go viral.

3. The terminology they're using has been switched to "probabilistic computer": "We designed the world’s first scalable probabilistic computer." Until today, they were using "thermodynamic computer" as their term, and claimed in writing that "the brain is a thermodynamic computer".

One could give them the benefit of the doubt for pivoting their terminology. It's just that they were always talking nonsense about the brain being a "thermodynamic computer" (in my view the brain is neither that nor a "quantum computer"; it's very much a neural net algorithm running on a classical computer architecture). And this sudden terminology pivot is consistent with them having been talking nonsense on that front.

Now for the positives:

* Some hardware actually got built!
* They explain how its input/output potentially has an application in denoising, though, as mentioned, they're vague on the details of the supposed "10,000x thermodynamic supremacy" they achieved on this front.

Overall:

This is about what I expected when I first started asking for the input/output spec 18 months ago.

They had a legitimately cool idea for a piece of hardware, but no plan for making it useful, just some vague beginnings of theoretical research that had a chance to make it useful.

They seem to have made respectable progress getting the hardware into production (the amount that $30M buys you), but less progress finding reasons why this particular hardware, even after 10 generations of successor refinements, is going to be of use to anyone.

Going forward, instead of responding to questions about your device's input/output by "mogging" people and saying it's a company secret, and tweeting hyperstitions about your thermodynamic god, I'd recommend being more open about the seemingly giant life-or-death question that the tech community might actually be interested in helping you answer: whether someone can write a Python program in your simulator giving stronger evidence that some kind of useful "thermodynamic supremacy" with your hardware concept can ever be a thing.


Remember "Come work for us if you want to rebuild the web on top of blockchain"?

It's like that: The thing they're asking you to do for them is likely incoherent.

More importantly, they don't need to build hardware to settle it one way or the other IMO.
Helpful reply thread from David, a probabilistic machine learning expert:

Sep 27
Eliezer Yudkowsky can warn humankind that 𝘐𝘧 𝘈𝘯𝘺𝘰𝘯𝘦 𝘉𝘶𝘪𝘭𝘥𝘴 𝘐𝘵, 𝘌𝘷𝘦𝘳𝘺𝘰𝘯𝘦 𝘋𝘪𝘦𝘴 and hit the NYTimes bestseller list, but he won’t get upvoted to the top of LessWrong.

That’s intentional. The rationalist community thinks aggregating community support for important claims is “political fighting”.

Unfortunately, it's unrealistic to expect some other community to strongly rally behind @ESYudkowsky's message while LessWrong "stays out of the fray" and purposely prevents mutual knowledge of support from being displayed.

Our refusal to aggregate the rationalist community's beliefs into signals and actions is why we live in a world where rationalists with double-digit P(Doom)s join AI race companies instead of AI pause movements.

We let our community become a circular firing squad. What did we expect?

Please watch my new interview with Holly Elmore (@ilex_ulmus), Executive Director of @PauseAIUS, on “the circular firing squad” a.k.a. “the crab bucket”:

◻️ On the “If Anyone Builds It, Everyone Dies” launch
◻️ What's Your P(Doom)™
◻️ Liron's Review of IABIED
◻️ Encouraging early joiners to a movement
◻️ MIRI's communication issues
◻️ Government officials' review of IABIED
◻️ Emmett Shear's review of IABIED
◻️ Michael Nielsen's review of IABIED
◻️ New York Times's Review of IABIED
◻️ Will MacAskill's Review of IABIED
◻️ Clara Collier's Review of IABIED
◻️ Vox's Review of IABIED
◻️ The circular firing squad
◻️ Why our kind can't cooperate
◻️ LessWrong's lukewarm show of support
◻️ The “missing mood” of support
◻️ Liron's “Statement of Support for IABIED”
◻️ LessWrong community's reactions to the Statement
◻️ Liron & Holly's hopes for the community
Search “Doom Debates” in your podcast player or watch on YouTube:
Also featuring a vintage LW comment by @ciphergoth
May 23, 2024
🤔 How did Farcaster, a small crypto/Web3 version of Twitter, just raise $150M at a $1B valuation?

Dune Analytics says they have 45k daily active users, which is microscopic.

But even that number is MASSIVELY inflated by spambots.

How & why I think @a16z is siphoning money 🧵
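
For scale, here's the back-of-envelope math on those numbers, taking the 45k DAU figure at face value (i.e. before discounting spambots):

```python
# Implied valuation per daily active user, using the figures above.
valuation = 1_000_000_000   # $1B valuation
dau = 45_000                # Dune Analytics DAU figure (bot-inflated)
print(f"${valuation / dau:,.0f} per DAU")   # -> $22,222 per DAU
```

Over $22k of valuation per (bot-inflated) daily active user.
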
What kind of user-generated content is being posted to Farcaster?

Basically imagine reading through a crypto-themed Discord server, but reskinning the interface so it's like you're reading Twitter.

I saw a trickle of content from real users, and spambots using generative AI 👇
Why did VCs like @a16z think $1B is an appealing valuation for a startup whose active user base is comparable to that of a niche Discord server?

Are we suddenly in another crypto bubble, where everyone is a paper unicorn again?

I think there are 3 explanations…
Oct 7, 2023
Dario Amodei's P(doom) is 10–25%.

CEO and Co-Founder of @AnthropicAI.
“I often try to focus on the 75–90% chance where things will go right.”
From today's @loganbartshow, worth a watch:
Jul 14, 2023
Marc Andreessen (@pmarca)'s recent essay, “Why AI Will Save the World”, didn't meet the standards of discourse. ♦️

Claiming AI will be safe & net positive is his right, but the way he’s gone about making that claim has been undermining conversation quality.

🧵 Here's the proof: https://t.co/2o3gUgmuqX twitter.com/i/web/status/1…

1. BULVERISM

Marc indulges in constant Bulverism:

He spends much time labeling and psychoanalyzing the people who disagree with him, instead of focusing on the substance of why he thinks their object-level claims are wrong and his are right.

en.wikipedia.org/wiki/Bulverism
He accuses AI doomers of being “bootleggers”, which he explains means “self-interested opportunists who stand to financially profit” from claiming AI x-risk is a serious worry:

“If you are paid a salary or receive grants to foster AI panic… you are probably a Bootlegger.”
Jun 22, 2023
Thread of @pmarca's logically-flimsy AGI survivability claims 🧵
Claim 1:

Marc claims it’s a “category error” to argue that a math-based system will have human-like properties — that rogue AI is a 𝘭𝘰𝘨𝘪𝘤𝘢𝘭𝘭𝘺 𝘪𝘯𝘤𝘰𝘩𝘦𝘳𝘦𝘯𝘵 concept.

Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent.
Claim 2:

Marc claims rogue unaligned superintelligent AI is unlikely because AIs can "engage in moral thinking".

But what happens when a superintelligent goal-optimizing AI is run with anything less than perfect morality?

That's when we risk permanently disempowering humanity.