Liron Shapira
Consistently candid AI doom pointer-outer
Oct 7, 2023 5 tweets 2 min read
Dario Amodei's P(doom) is 10–25%.

CEO and Co-Founder of @AnthropicAI.
“I often try to focus on the 75–90% chance where things will go right.”
Jul 14, 2023 36 tweets 9 min read
Marc Andreessen (@pmarca)'s recent essay, “Why AI Will Save the World”, didn't meet the standards of discourse. ♦️

Claiming AI will be safe & net positive is his right, but the way he’s gone about making that claim has been undermining conversation quality.

🧵 Here's the proof: https://t.co/2o3gUgmuqX

1. BULVERISM

Marc indulges in constant Bulverism:

He spends much time labeling and psychoanalyzing the people who disagree with him, instead of focusing on the substance of why he thinks their object-level claims are wrong and his are right.

en.wikipedia.org/wiki/Bulverism
Jun 22, 2023 10 tweets 4 min read
Thread of @pmarca's logically-flimsy AGI survivability claims 🧵

Claim 1:

Marc claims it’s a “category error” to argue that a math-based system will have human-like properties — that rogue AI is a 𝘭𝘰𝘨𝘪𝘤𝘢𝘭𝘭𝘺 𝘪𝘯𝘤𝘰𝘩𝘦𝘳𝘦𝘯𝘵 concept.

Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent.
May 18, 2023 5 tweets 2 min read
Incredibly high-stakes claim from OpenAI’s alignment team lead.

If he’s wrong, he’s a killer.

The former safety lead at OpenAI isn’t confident in the tractability of the problem.
May 18, 2023 4 tweets 2 min read
Important debate happening between @sama and @ESYudkowsky via their respective podcast interviews:

Sam's interview with Bari Weiss: podcasts.apple.com/us/podcast/ai-…
May 12, 2023 4 tweets 2 min read
Is there really a normal-looking guy on CNBC right now discussing AI doom via instrumental convergence?

H/t @jrichlive
May 10, 2023 6 tweets 2 min read
Seeing above the clouds

Today, AGI is "in the clouds" where it's foggy to predict exact traits.

Soon, it'll be above the clouds where the sky is clear and we can predict an important property:

It'll behave like a general-purpose planning engine, plus some goal spec driving it.

A general-purpose planning engine plus some goal spec driving it is the convergent place to end up.

The capabilities required to achieve one goal effectively generalize into the capabilities to achieve any other goal effectively.

That's a logical property of goal-maximization.
May 9, 2023 4 tweets 2 min read
But how will the AGI physically kill us??? 🤖💣

@ESYudkowsky names a couple specific methods:
* Pathogen-aided mind control
* Artificial life forms that reproduce in our atmosphere

These are just human-understandable lower bounds, to help you gain respect for superintelligence.

Clipped from this week's incredible episode of @loganbartshow:
Apr 18, 2023 5 tweets 3 min read
Max @tegmark's plan for AI safety "did not pan out".

He is therefore calling for an immediate slowdown on AI capabilities.

"The most dangerous things you can do with an AI… teach it to write code… connect it to the internet."
@tegmark
Apr 11, 2023 13 tweets 5 min read
I expected gaslighting in 2022 from a naked emperor

I didn't expect gaslighting in 2023 from a naked homeless guy

@molly0xFFF has a good rundown 👇

When you fail Finance 101 because you're one of those people who think you're smarter when you're stoned
Apr 2, 2023 6 tweets 3 min read
1/ @OpenAI is gaslighting us about alignment.

When they say GPT is "aligned", they just mean typical users don't get immoral responses.

But... anyone can bypass this so-called alignment! Anyone can access the full intelligence of the powerful AGI system under the hood!

2/ @labenz, a member of OpenAI's GPT-4 red team, says they shipped a production AI without addressing his outstanding reports of unalignment:

Was it safe to launch this unaligned AI?

Yes… because it isn't superintelligent yet.

That's the *only* reason!
Mar 29, 2023 5 tweets 2 min read
Wow, great to see this!

And pausing at GPT-4 isn’t exactly a Luddite move. It still means we’re getting multiple years of stunning insights and applications by digesting this already insane breakthrough.

It feels like the folks who are most unhappy about this idea are coming from a good place of techno-optimism. I get it, I’m a transhumanist.

For AGI though, it’s nuclear-level dangerous. A wrong move can be permanent game over.

And I’m just stating the median AI researcher view!
Mar 28, 2023 5 tweets 2 min read
I'm getting too old for this shit

blockworks.co/news/ticketmas…

Such use case, very utility
Mar 28, 2023 5 tweets 3 min read
.@Helium is completely fucked.

Also, Amazon just announced their low-power IOT network that covers 90% of the US is open to developers: theverge.com/2023/3/28/2365…

This generation will grow up never knowing what it's like for a dog collar to not have both long range and long battery life.
Mar 13, 2023 8 tweets 3 min read
Instagram and Facebook are officially done with NFTs! 🍾

Could blockchains help shift power on social media away from platforms and into the hands of creators?

NO THEY FUCKING CAN'T!

Feb 21, 2023 20 tweets 10 min read
Hey what if AI is going to literally slaughter every living creature on this planet in the next 3 years?

Watch @ESYudkowsky’s new interview on @BanklessHQ and see why that's not even a joke 🤯😵



🧵 Here are my notes and abridged clips:

To set the stage:

Eliezer doesn't think the current generation of Large Language Model AIs can end the world.

So hopefully AI progress now gets stuck for 10 years.

But that's probably too optimistic.
Nov 17, 2022 14 tweets 6 min read
Why do so many people in tech still worship @balajis?

The man is a charlatan, a mockery of tech discourse.

He has no etiquette in interviews, dodging every question with a rambling GPT-3 smokescreen.

Need proof? Just watch his latest interview 👇

.@balajis defines a supposedly key term, "Network State", and gives it four properties:

1. Aligned online community
2. Capacity for collective action
3. Crowdfunded territory
4. Diplomatic recognition

But, as you'll see, Balaji has no idea what his own term is supposed to mean.
Sep 9, 2022 7 tweets 4 min read
Today @a16z crypto partner @AriannaSimpson said this about Axie Infinity:

"Duh, it worked."

Recall that millions of ordinary folks paid $500–$1,000+ for admission to this "play-to-earn game", hoping to make a living earning SLP tokens.

Then the token price crashed to nothing, leaving all but the earliest players with a financial loss.

Because it was a Ponzi by design.
Aug 29, 2022 8 tweets 2 min read
Notes from the recent 3-hour Mark Zuckerberg / Joe Rogan podcast:

Zuck has been a sporty guy ever since his parents made him play three varsity sports in high school.

He does a high-concentration sport first thing in the morning before dealing with the onslaught of messages.

The latest sports he does are surfing, hydrofoiling, and jiu jitsu.
Aug 23, 2022 17 tweets 7 min read
Coinbase says they don’t list any securities, end of story.

Except Braintrust's token, BTRST, is a security.

And BTRST is listed on Coinbase.

Why Coinbase is gaslighting about securities 🧵

The Howey Test defines what it means to buy a security:

“An investment of money in a common enterprise with a reasonable expectation of profits to be derived from the efforts of others.”

In the non-crypto world, it's established that every company's stock is a security.
Aug 21, 2022 26 tweets 15 min read
Braintrust (@usebraintrust) is one of the highest-profile attempts at a Web3 use case.

With $123M raised, investors say it's a decentralized network disrupting Upwork.

My analysis shows it's a centralized staffing agency juicing up growth metrics.

Who does your brain trust? 🧵

Braintrust presents itself as a manifestation of @cdixon’s “insight” that Web2’s take rate is Web3’s opportunity.

So how exactly is it able to operate with a lower take rate? The answer will shock you…