Why do so many people in tech still worship @balajis?
The man is a charlatan who makes a mockery of tech discourse.
He has no interview etiquette, dodging every question with a rambling GPT-3 smokescreen.
Need proof? Just watch his latest interview 👇
.@balajis defines a supposedly key term, "Network State", and gives it four properties:
1. Aligned online community
2. Capacity for collective action
3. Crowdfunded territory
4. Diplomatic recognition
But, as you'll see, Balaji has no idea what his own term is supposed to mean.
Let's see if @balajis can answer a single easy question from @stephsmithio about what a Network State is.
Here's the question...
Q: Why does a Network State need to have the 4th property, diplomatic recognition?
Try to listen for a coherent answer from @balajis. Good luck.
Balaji rambles for 27 minutes.
Then, in a brief moment of lucidity, he suddenly says something coherent about the topic at hand (diplomatic recognition):
He says you might want to create a sanctuary city where federal laws don’t get enforced.
So Steph asks a dead-simple followup: Is federal law not enforced in one plane, or are we replacing it from scratch?
Balaji begins: “It’s both… the unelected bureaucrats… no longer have power...”
Followed by a 12-minute ramble that once again doesn’t answer the question.
Keep listening to this ramble, and don’t forget how simple Steph’s question to Balaji is:
Q: Is the idea about replacing federal laws in one plane (e.g. vehicle regulation), or replacing federal laws entirely?
Why does everyone let Balaji get away with this behavior?
Publishing this kind of ramble degrades the quality of discourse. It’s impossible for listeners to follow the thread.
Either keep the guest on track during the interview, or edit in post so it makes some kind of sense.
Balaji goes on to describe communities that:
* Keep a “digital sabbath”: go offline 12 hrs/day
* Keep a “keto-kosher” diet
But how do these fall under Balaji’s 4-point definition of Network State? Why do they need to crowdfund territory? Why do they need diplomatic recognition?
Here Balaji seems to be saying that the Network State has to do with the difficulty of innovating in atoms.
If we could coordinate people together virtually, then we could manipulate atoms better…?
Again, he supports his claim with neither logical reasoning nor an example.
Steph asks *again*: Does Balaji envision a Network State that rewrites the entire legal framework, or just one area?
This time he rambles about fixing “one moral failing”.
Still doesn’t address the question of whether a Network State in the US would obey any federal law or not!
Balaji gives one final example of a Network State:
A Christian community.
Wow, what a techno-optimistic concept! Only in the 21st century do we have the terminology to describe a concept like that.
Unlike some folks, I don't see Balaji’s performances as works of genius.
I think a guy who neglects to give coherent answers to questions during his 3 hours of freewheeling improv is crapping on standards of discourse.
I'm begging all podcasters to stop letting this crap slide.
Eliezer Yudkowsky can warn humankind that 𝘐𝘧 𝘈𝘯𝘺𝘰𝘯𝘦 𝘉𝘶𝘪𝘭𝘥𝘴 𝘐𝘵, 𝘌𝘷𝘦𝘳𝘺𝘰𝘯𝘦 𝘋𝘪𝘦𝘴 and hit the NYTimes bestseller list, but he won’t get upvoted to the top of LessWrong.
That’s intentional. The rationalist community thinks aggregating community support for important claims is “political fighting”.
Unfortunately, it’s unrealistic to expect some other community to strongly rally behind @ESYudkowsky's message while LessWrong “stays out of the fray” and purposely prevents mutual knowledge of support from being displayed.
Our refusal to aggregate the rationalist community's beliefs into signals and actions is why we live in a world where rationalists with double-digit P(Doom)s join AI race companies instead of AI pause movements.
We let our community become a circular firing squad. What did we expect?
Please watch my new interview with Holly Elmore (@ilex_ulmus), Executive Director of @PauseAIUS, on “the circular firing squad” a.k.a. “the crab bucket”:
◻️ On the “If Anyone Builds It, Everyone Dies” launch
◻️ What's Your P(Doom)™
◻️ Liron's Review of IABIED
◻️ Encouraging early joiners to a movement
◻️ MIRI's communication issues
◻️ Government Officials' Review of IABIED
◻️ Emmett Shear's Review of IABIED
◻️ Michael Nielsen's Review of IABIED
◻️ New York Times's Review of IABIED
◻️ Will MacAskill's Review of IABIED
◻️ Clara Collier's Review of IABIED
◻️ Vox's Review of IABIED
◻️ The circular firing squad
◻️ Why our kind can't cooperate
◻️ LessWrong's lukewarm show of support
◻️ The “missing mood” of support
◻️ Liron's “Statement of Support for IABIED”
◻️ LessWrong community's reactions to the Statement
◻️ Liron & Holly's hopes for the community
Search “Doom Debates” in your podcast player or watch on YouTube:
Also featuring a vintage LW comment by @ciphergoth
He spends much time labeling and psychoanalyzing the people who disagree with him, instead of focusing on the substance of why he thinks their object-level claims are wrong and his are right.

en.wikipedia.org/wiki/Bulverism
He accuses AI doomers of being “bootleggers”, which he explains means “self-interested opportunists who stand to financially profit” from claiming AI x-risk is a serious worry:
“If you are paid a salary or receive grants to foster AI panic… you are probably a Bootlegger.”
Thread of @pmarca's logically-flimsy AGI survivability claims 🧵
Claim 1:
Marc claims it’s a “category error” to argue that a math-based system will have human-like properties — that rogue AI is a 𝘭𝘰𝘨𝘪𝘤𝘢𝘭𝘭𝘺 𝘪𝘯𝘤𝘰𝘩𝘦𝘳𝘦𝘯𝘵 concept.
Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent.
Claim 2:
Marc claims rogue unaligned superintelligent AI is unlikely because AIs can “engage in moral thinking”.
But what happens when a superintelligent goal-optimizing AI is run with anything less than perfect morality?
That's when we risk permanently disempowering humanity.