Liron Shapira
Nov 17, 2022 · 14 tweets · 6 min read
Why do so many people in tech still worship @balajis?

The man is a charlatan, a mockery of tech discourse.

He has no etiquette in interviews, dodging every question with a rambling GPT-3 smokescreen.

Need proof? Just watch his latest interview 👇

.@balajis defines a supposedly key term, "Network State", and gives it four properties:

1. Aligned online community
2. Capacity for collective action
3. Crowdfunded territory
4. Diplomatic recognition

But, as you'll see, Balaji has no idea what his own term is supposed to mean.
Let's see if @balajis can answer a single easy question from @stephsmithio about what a Network State is.

Here's the question...

Q: Why does a Network State need to have the 4th property, diplomatic recognition?

Try to listen for a coherent answer from @balajis. Good luck.
Balaji rambles for 27 minutes.

Then, in a brief moment of lucidity, he says something coherent about the topic at hand (diplomatic recognition):

He says you might want to create a sanctuary city where federal laws don’t get enforced.
So Steph asks a dead-simple followup: Is federal law not enforced in one plane, or are we replacing it from scratch?

Balaji begins: “It’s both… the unelected bureaucrats… no longer have power...”

Followed by a 12-minute ramble that once again doesn’t answer the question.
Keep listening to this ramble, and don’t forget how simple Steph’s question to Balaji is:

Q: Is the idea about replacing federal laws in one plane (e.g. vehicle regulation), or replacing federal laws entirely?
Why does everyone let Balaji get away with this behavior?

Publishing this kind of ramble degrades the quality of discourse. It’s impossible for listeners to follow the thread.

Either keep the guest on track during the interview, or edit in post so it makes some kind of sense.
Balaji goes on to describe communities that:
* Keep a “digital sabbath”: go offline 12 hrs/day
* Keep a “keto-kosher” diet

But how do these fall under Balaji’s 4-point definition of Network State? Why do they need to crowdfund territory? Why do they need diplomatic recognition?
Here Balaji seems to be saying that The Network State has to do with the difficulty of innovating in atoms.

If we could coordinate people together virtually, then we could manipulate atoms better…?

Again, he doesn't support his claim with either logical reasoning or an example.
Steph asks *again*: Does Balaji envision a Network State that rewrites the entire legal framework, or just one area?

This time he rambles about fixing “one moral failing”.

Still doesn’t address the question of whether a Network State in the US would obey any federal law or not!
Balaji gives one final example of a Network State:

A Christian community.

Wow, what a techno-optimistic concept! Only in the 21st century do we have the terminology to describe a concept like that.
Unlike some folks, I don't see Balaji’s performances as works of genius.

I think a guy who neglects to give coherent answers to questions during his 3 hours of freewheeling improv is crapping on standards of discourse.

I'm begging all podcasters to stop letting this crap slide.
@balajis I'm glad I helped some readers break the Balaji spell.

This thread also serves as a reference you can point friends to any time.
