Liron Shapira
Host of Doom Debates — disagreements that must be resolved before the world ends.
Oct 29 4 tweets 5 min read
Today's Extropic launch raises some new red flags.

I started following this company when they refused to explain the input/output spec of what they're building, leaving us waiting for clarification.

Here are 3 red flags from today:

1. From extropic.ai/writing/inside…
"Generative AI is Sampling. All generative AI algorithms are essentially procedures for sampling from probability distributions. Training a generative AI model corresponds to inferring the probability distribution that underlies some training data, and running inference corresponds to generating samples from the learned distribution. Because TSUs sample, they can run generative AI algorithms natively."

This is a highly misleading claim about the algorithms that power the most useful modern AIs, on the same level of gaslighting as calling the human brain a thermodynamic computer. As far as anyone knows, the majority of AI computation doesn't match the kind of input/output that you can feed into Extropic's chip.
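The "Generative AI is Sampling" slogan glosses over where the compute actually goes: in LLM inference, essentially all of the FLOPs are deterministic matrix multiplies, and the stochastic part is a single categorical draw per token at the very end. A toy numpy sketch of that final step (made-up logits; nothing here is Extropic's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def next_token(logits, temperature=1.0):
    """Sample one token id from an LLM's output logits.

    Everything upstream of this call (attention, MLPs) is
    deterministic dense linear algebra; the "sampling" in LLM
    inference is just this one categorical draw per token.
    """
    z = logits / temperature
    z = z - z.max()                      # numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax
    return rng.choice(len(probs), p=probs)

# Toy "vocabulary" of 5 tokens with made-up logits
logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
counts = np.bincount([next_token(logits) for _ in range(10_000)],
                     minlength=5)
print(counts)  # token 0 dominates, matching softmax(logits)
```

The point: speeding up this draw does nothing for the matrix multiplies that dominate the workload.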

The page says:
"The next challenge is to figure out how to combine these primitives in a way that allows for capabilities to be scaled up to something comparable to today’s LLMs. To do this, we will need to build very large TSUs, and invent new algorithms that can consume an arbitrary amount of probabilistic computing resources."

Do you really need to build large TSUs to research whether LLM-like applications can benefit from this hardware? I would've thought it'd be worth spending a couple million dollars investigating that question via a combination of theory and modern cloud supercomputing hardware, instead of spending over $30M on building hardware that might be a bridge to nowhere.

The documentation for THRML, their open-source library, says:
"THRML provides GPU‑accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware."

In other words: you don't yet know how your hardware primitives could *in principle* be applied toward useful applications of some kind, and you created this library to help do that kind of research using today's GPUs…
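For reference, the kind of "block sampling on sparse, heterogeneous graphs" that THRML's docs describe can be prototyped on ordinary hardware in a few lines. Below is a generic checkerboard block-Gibbs sampler for a 1-D Ising chain in plain numpy; this is an illustrative stand-in I wrote, not THRML's actual API:

```python
import numpy as np

rng = np.random.default_rng(1)

def block_gibbs_ising_chain(n=100, coupling=0.5, sweeps=500):
    """Checkerboard block-Gibbs sampling on a 1-D Ising ring.

    Even-indexed spins are conditionally independent given the odd
    ones (and vice versa), so each half can be resampled in parallel.
    This is the sparse-graph "block sampling" pattern that maps
    naturally onto parallel hardware.
    """
    s = rng.choice([-1, 1], size=n)
    for _ in range(sweeps):
        for parity in (0, 1):
            idx = np.arange(parity, n, 2)
            left = s[(idx - 1) % n]
            right = s[(idx + 1) % n]
            field = coupling * (left + right)
            # P(s_i = +1 | neighbors) for the Ising conditional
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[idx] = np.where(rng.random(idx.size) < p_up, 1, -1)
    return s

spins = block_gibbs_ising_chain()
print(spins[:10])
```

Nothing about this research program requires custom silicon to get started; a GPU-sized version of the same loop is exactly what THRML now ships.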

Why not release the Python library (THRML) earlier, do the bottleneck research you say needs to be done earlier, and engage the community to help answer this key question by now? Why wait all this time, launching an extremely niche, tiny-scale hardware prototype before explaining this make-or-break bottleneck, and only now publicizing your search for potential partners with relevant "probabilistic workloads", when the cost of not doing so was $30M and 18 months?

2. From extropic.ai/writing/tsu-10…:
"We developed a model of our TSU architecture and used it to estimate how much energy it would take to run the denoising process shown in the above animation. What we found is that DTMs running on TSUs can be about 10,000x more energy efficient than standard image generation algorithms on GPUs."

I'm already seeing people on Twitter hyping the 10,000x claim. But anyone who's followed the decades-long saga of quantum computing companies claiming "quantum supremacy" with similar hype figures knows how much care needs to go into defining that kind of benchmark.

In practice, it tends to be extremely hard to point to situations where a classical computing approach *isn't* much faster than the claimed "10,000x faster thermodynamic computing" approach. The Extropic team knows this, but opted not to elaborate on the conditions needed to reproduce the hype benchmark they wanted to see go viral.

3. They've switched their terminology to "probabilistic computer": "We designed the world’s first scalable probabilistic computer." Until today, they were using "thermodynamic computer" as their term, and claimed in writing that "the brain is a thermodynamic computer".

One could give them the benefit of the doubt for pivoting their terminology. But they were always talking nonsense about the brain being a "thermodynamic computer" (in my view the brain is neither that nor a "quantum computer"; it's very much a neural-net algorithm running on a classical computer architecture), and this sudden terminology pivot is consistent with that.

Now for the positives:

* Some hardware actually got built!
* They explain how its input/output potentially has an application in denoising, though, as mentioned, they're vague on the details of the supposed "10,000x thermodynamic supremacy" they achieved on this front.

Overall:

This is about what I expected when I first started asking about the input/output spec 18 months ago.

They had a legitimately cool idea for a piece of hardware, but no plan for making it useful, only the vague beginnings of some theoretical research that had a chance to make it useful.

They seem to have made respectable progress getting the hardware into production (the amount that $30M buys you), and seemingly less progress finding reasons why this particular hardware, even after 10 generations of successor refinements, is going to be of use to anyone.

Going forward, instead of responding to questions about your device's input/output by "mogging" people and calling it a company secret, and tweeting hyperstitions about your thermodynamic god, I'd recommend being more open about the giant life-or-death question the tech community might actually be interested in helping you answer: whether someone can write a Python program in your simulator that gives stronger evidence that some kind of useful "thermodynamic supremacy" with your hardware concept can ever be a thing.

Remember "Come work for us if you want to rebuild the web on top of blockchain"?

It's like that: The thing they're asking you to do for them is likely incoherent.

More importantly, they don't need to build hardware to settle it one way or the other IMO.
Sep 27 5 tweets 2 min read
Eliezer Yudkowsky can warn humankind that 𝘐𝘧 𝘈𝘯𝘺𝘰𝘯𝘦 𝘉𝘶𝘪𝘭𝘥𝘴 𝘐𝘵, 𝘌𝘷𝘦𝘳𝘺𝘰𝘯𝘦 𝘋𝘪𝘦𝘴 and hit the NYTimes bestseller list, but he won’t get upvoted to the top of LessWrong.

That’s intentional. The rationalist community thinks aggregating community support for important claims is “political fighting”.

Unfortunately, it's unrealistic to expect some other community to strongly rally behind @ESYudkowsky's message while LessWrong "stays out of the fray" and purposely prevents mutual knowledge of support from being displayed.

Our refusal to aggregate the rationalist community's beliefs into signals and actions is why we live in a world where rationalists with double-digit P(Doom)s join AI race companies instead of AI pause movements.

We let our community become a circular firing squad. What did we expect?

Please watch my new interview with Holly Elmore (@ilex_ulmus), Executive Director of @PauseAIUS, on “the circular firing squad” a.k.a. “the crab bucket”:

◻️ On the “If Anyone Builds It, Everyone Dies” launch
◻️ What's Your P(Doom)™
◻️ Liron's Review of IABIED
◻️ Encouraging early joiners to a movement
◻️ MIRI's communication issues
◻️ Government officials' review of IABIED
◻️ Emmett Shear's review of IABIED
◻️ Michael Nielsen's review of IABIED
◻️ New York Times's Review of IABIED
◻️ Will MacAskill's Review of IABIED
◻️ Clara Collier's Review of IABIED
◻️ Vox's Review of IABIED
◻️ The circular firing squad
◻️ Why our kind can't cooperate
◻️ LessWrong's lukewarm show of support
◻️ The “missing mood” of support
◻️ Liron's “Statement of Support for IABIED”
◻️ LessWrong community's reactions to the Statement
◻️ Liron & Holly's hopes for the community

Search "Doom Debates" in your podcast player or watch on YouTube:
May 23, 2024 18 tweets 7 min read
🤔 How did Farcaster, a small crypto/Web3 version of Twitter, just raise $150M at a $1B valuation?

Dune Analytics says they have 45k daily active users, which is microscopic.

But even that number is MASSIVELY inflated by spambots.

How & why I think @a16z is siphoning money 🧵
What kind of user-generated content is being posted to Farcaster?

Basically imagine reading through a crypto-themed Discord server, but reskinning the interface so it's like you're reading Twitter.

I saw a trickle of content from real users, and spambots using generative AI 👇
Oct 7, 2023 5 tweets 2 min read
Dario Amodei's P(doom) is 10–25%.

CEO and Co-Founder of @AnthropicAI.
“I often try to focus on the 75–90% chance where things will go right.”
Jul 14, 2023 36 tweets 9 min read
Marc Andreessen (@pmarca)'s recent essay, “Why AI Will Save the World”, didn't meet the standards of discourse. ♦️

Claiming AI will be safe & net positive is his right, but the way he’s gone about making that claim has been undermining conversation quality.

🧵 Here's the proof: https://t.co/2o3gUgmuqX

1. BULVERISM

Marc indulges in constant Bulverism:

He spends much time labeling and psychoanalyzing the people who disagree with him, instead of focusing on the substance of why he thinks their object-level claims are wrong and his are right.

en.wikipedia.org/wiki/Bulverism
Jun 22, 2023 10 tweets 4 min read
Thread of @pmarca's logically-flimsy AGI survivability claims 🧵

Claim 1:

Marc claims it’s a “category error” to argue that a math-based system will have human-like properties — that rogue AI is a 𝘭𝘰𝘨𝘪𝘤𝘢𝘭𝘭𝘺 𝘪𝘯𝘤𝘰𝘩𝘦𝘳𝘦𝘯𝘵 concept.

Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent.
May 18, 2023 5 tweets 2 min read
Incredibly high-stakes claim from OpenAI’s alignment team lead.

If he’s wrong, he’s a killer.

The former safety lead at OpenAI isn’t confident in the tractability of the problem.
May 18, 2023 4 tweets 2 min read
Important debate happening between @sama and @ESYudkowsky via their respective podcast interviews:

Sam's interview with Bari Weiss: podcasts.apple.com/us/podcast/ai-…
May 12, 2023 4 tweets 2 min read
Is there really a normal-looking guy on CNBC right now discussing AI doom via instrumental convergence?

Clipped from

H/t @jrichlive
May 10, 2023 6 tweets 2 min read
Seeing above the clouds

Today, AGI is "in the clouds" where it's foggy to predict exact traits.

Soon, it'll be above the clouds where the sky is clear and we can predict an important property:

It'll behave like a general-purpose planning engine, plus some goal spec driving it.

A general-purpose planning engine plus some goal spec driving it is the convergent place to end up.

Capabilities required to achieve one goal effectively, generalize to capabilities to achieve any other goal effectively.

That's a logical property of goal-maximization.
May 9, 2023 4 tweets 2 min read
But how will the AGI physically kill us??? 🤖💣

@ESYudkowsky names a couple specific methods:
* Pathogen-aided mind control
* Artificial life forms that reproduce in our atmosphere

These are just human-understandable lower bounds, to help you gain respect for superintelligence.

Clipped from this week's incredible episode of @loganbartshow:
Apr 18, 2023 5 tweets 3 min read
Max @tegmark's plan for AI safety "did not pan out".

He is therefore calling for an immediate slowdown on AI capabilities.

"The most dangerous things you can do with an AI… teach it to write code… connect it to the internet."
Apr 11, 2023 13 tweets 5 min read
I expected gaslighting in 2022 from a naked emperor

I didn't expect gaslighting in 2023 from a naked homeless guy

@molly0xFFF has a good rundown 👇

When you fail Finance 101 because you're one of those people who think you're smarter when you're stoned
Apr 2, 2023 6 tweets 3 min read
1/ @OpenAI is gaslighting us about alignment.

When they say GPT is "aligned", they just mean typical users don't get immoral responses.

But... anyone can bypass this so-called alignment! Anyone can access the full intelligence of the powerful AGI system under the hood!

2/ @labenz, a member of OpenAI's GPT-4 red team, says they shipped a production AI without addressing his outstanding reports of unalignment:

Was it safe to launch this unaligned AI?

Yes… because it isn't superintelligent yet.

That's the *only* reason!
Mar 29, 2023 5 tweets 2 min read
Wow, great to see this!

And pausing at GPT-4 isn’t exactly a Luddite move. It still means we’re getting multiple years of stunning insights and applications by digesting this already insane breakthrough.

It feels like the folks who are most unhappy about this idea are coming from a good place of techno-optimism. I get it, I’m a transhumanist.

For AGI though, it’s nuclear-level dangerous. A wrong move can be permanent game over.

And I’m just saying the median AI researcher view!
Mar 28, 2023 5 tweets 2 min read
I'm getting too old for this shit

blockworks.co/news/ticketmas…

Such use case, very utility
Mar 28, 2023 5 tweets 3 min read
.@Helium is completely fucked.

Also, Amazon just announced their low-power IoT network that covers 90% of the US is open to developers: theverge.com/2023/3/28/2365…

This generation will grow up never knowing what it's like for a dog collar to not have both long range and long battery life.
Mar 13, 2023 8 tweets 3 min read
Instagram and Facebook are officially done with NFTs! 🍾

Could blockchains help shift power on social media away from platforms and into the hands of creators?

NO THEY FUCKING CAN'T!

Feb 21, 2023 20 tweets 10 min read
Hey what if AI is going to literally slaughter every living creature on this planet in the next 3 years?

Watch @ESYudkowsky’s new interview on @BanklessHQ and see why that's not even a joke 🤯😵



🧵 Here are my notes and abridged clips:

To set the stage:

Eliezer doesn't think the current generation of Large Language Model AIs can end the world.

So hopefully AI progress now gets stuck for 10 years.

But that's probably too optimistic.
Nov 17, 2022 14 tweets 6 min read
Why do so many people in tech still worship @balajis?

The man is a charlatan, a mockery of tech discourse.

He has no etiquette in interviews, dodging every question with a rambling GPT-3 smokescreen.

Need proof? Just watch his latest interview 👇

.@balajis defines a supposedly key term, "Network State", and gives it four properties:

1. Aligned online community
2. Capacity for collective action
3. Crowdfunded territory
4. Diplomatic recognition

But, as you'll see, Balaji has no idea what his own term is supposed to mean.
Sep 9, 2022 7 tweets 4 min read
Today @a16z crypto partner @AriannaSimpson said this about Axie Infinity:

"Duh, it worked." Recall that millions of ordinary folks paid $500-1000+ for admission into this "play-to-earn game", hoping to make a living earning SLP tokens.

Then the token price crashed to nothing, leaving all but the earliest players with a financial loss.

Because it was a Ponzi by design.