.@Helium, often cited as one of the best examples of a Web3 use case, has received $365M of investment led by @a16z.
Regular folks have also been convinced to spend $250M buying hotspot nodes, in hopes of earning passive income.
The result? Helium's total revenue is $6.5k/month.
Members of the r/helium subreddit have been increasingly vocal about seeing poor Helium returns.
On average, they spent $400-800 to buy a hotspot. They were expecting $100/month, enough to recoup their costs and enjoy passive income.
Then their earnings dropped to only $20/mo.
These folks maintain false hope of positive ROI. They still don’t realize their share of data-usage revenue isn’t actually $20/month; it’s $0.01/month.
The other $19.99 is a temporary subsidy from investment in growing the network, and speculation on the value of the $HNT token.
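(Back-of-the-envelope, using the figures above: ~$6.5k/month of network revenue spread across ~500k hotspots works out to roughly $6,500 ÷ 500,000 ≈ $0.013 per hotspot per month.)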
Meanwhile, according to Helium network rules, $300M (30M $HNT) per year gets siphoned off by @novalabs_, the corporation behind Helium.
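(That dollar figure implies an $HNT price of roughly $10: 30M $HNT × ~$10 ≈ $300M/year. The actual dollar value moves with the token's price.)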
This "revenue" on the books, which comes mainly from retail speculators, is presumably what justified such an aggressive investment by @a16z.
.@cdixon's "mental model" thread on Helium claims that this kind of network can't be built in Web2 because it requires token incentives.
But the facts indicate Web2 *won’t* build a Helium-like network for a simpler reason: demand is low. Even with a network of 500k hotspots, revenue is practically nonexistent.
The complete lack of end-user demand for Helium should not have come as a surprise.
A basic LoRaWAN market analysis would have revealed that this was a speculation bubble around a fake, overblown use case.
The ongoing Axie Infinity debacle is a similar case of @a16z's documented thought process being shockingly disconnected from reality. Skeptics get vindicated within a matter of months, at the expense of unsophisticated end users turned investors.
Eliezer Yudkowsky can warn humankind that *If Anyone Builds It, Everyone Dies* and hit the NYTimes bestseller list, but he won’t get upvoted to the top of LessWrong.
That’s intentional. The rationalist community thinks aggregating community support for important claims is “political fighting”.
Unfortunately, it’s unrealistic to expect that some other community will strongly rally behind @ESYudkowsky's message while LessWrong “stays out of the fray” and purposely prevents mutual knowledge of support from being displayed.
Our refusal to aggregate the rationalist community’s beliefs into signals and actions is why we live in a world where rationalists with double-digit P(Doom)s join AI race companies instead of AI pause movements.
We let our community become a circular firing squad. What did we expect?
Please watch my new interview with Holly Elmore (@ilex_ulmus), Executive Director of @PauseAIUS, on “the circular firing squad” a.k.a. “the crab bucket”:
◻️ On the “If Anyone Builds It, Everyone Dies” launch
◻️ What's Your P(Doom)™
◻️ Liron's Review of IABIED
◻️ Encouraging early joiners to a movement
◻️ MIRI's communication issues
◻️ Government officials' review of IABIED
◻️ Emmett Shear's review of IABIED
◻️ Michael Nielsen's review of IABIED
◻️ New York Times's Review of IABIED
◻️ Will MacAskill's Review of IABIED
◻️ Clara Collier's Review of IABIED
◻️ Vox's Review of IABIED
◻️ The circular firing squad
◻️ Why our kind can't cooperate
◻️ LessWrong's lukewarm show of support
◻️ The “missing mood” of support
◻️ Liron's “Statement of Support for IABIED”
◻️ LessWrong community's reactions to the Statement
◻️ Liron & Holly's hopes for the community
Search “Doom Debates” in your podcast player or watch on YouTube:
Also featuring a vintage LW comment by @ciphergoth
He spends much time labeling and psychoanalyzing the people who disagree with him, instead of focusing on the substance of why he thinks their object-level claims are wrong and his are right.
en.wikipedia.org/wiki/Bulverism
He accuses AI doomers of being “bootleggers”, which he explains means “self-interested opportunists who stand to financially profit” from claiming AI x-risk is a serious worry:
“If you are paid a salary or receive grants to foster AI panic… you are probably a Bootlegger.”
Thread of @pmarca's logically flimsy AGI survivability claims 🧵
Claim 1:
Marc claims it’s a “category error” to argue that a math-based system will have human-like properties, and thus that rogue AI is a *logically incoherent* concept.
Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent.
Claim 2:
Marc claims rogue unaligned superintelligent AI is unlikely because AIs can "engage in moral thinking".
But what happens when a superintelligent goal-optimizing AI is run with anything less than perfect morality?
That's when we risk permanently disempowering humanity.