.@Helium, often cited as one of the best examples of a Web3 use case, has received $365M of investment led by @a16z.
Regular folks have also been convinced to spend $250M buying hotspot nodes, in hopes of earning passive income.
The result? Helium's total revenue is $6.5k/month.
Members of the r/helium subreddit have been increasingly vocal about seeing poor Helium returns.
Typically, they spent $400-800 to buy a hotspot. They expected $100/month, enough to recoup their costs and enjoy passive income.
Then their earnings dropped to only $20/mo.
These folks maintain false hope of positive ROI. They still don’t realize their share of data-usage revenue isn’t actually $20/month; it’s $0.01/month.
The other $19.99 is a temporary subsidy from investment in growing the network, and speculation on the value of the $HNT token.
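To make that concrete, here's a back-of-the-envelope check in Python (a minimal sketch; the ~500k hotspot count, cited later in this thread, is an approximation):

```python
# Divide total network data-usage revenue across all hotspots.
network_data_revenue = 6_500   # total data-usage revenue, USD/month
hotspots = 500_000             # approximate number of active hotspots

per_hotspot = network_data_revenue / hotspots
print(f"data revenue per hotspot: ${per_hotspot:.2f}/month")  # -> ~$0.01/month
```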
Meanwhile, according to Helium network rules, $300M (30M $HNT) per year gets siphoned off by @novalabs_, the corporation behind Helium.
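A quick sanity check on that figure (a sketch; the ~$10 $HNT price is implied by the thread's numbers, not a quoted market rate):

```python
# 30M HNT/year valued at $300M/year implies a $10/HNT price.
hnt_to_nova_labs = 30_000_000   # HNT emitted to Nova Labs per year
usd_value = 300_000_000         # stated dollar value per year

implied_hnt_price = usd_value / hnt_to_nova_labs
print(f"implied HNT price: ${implied_hnt_price:.2f}")  # -> $10.00
```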
This "revenue" on the books, which comes mainly from retail speculators, is presumably what justified such an aggressive investment by @a16z.
.@cdixon's "mental model" thread on Helium claims that this kind of network can't be built in Web2 because it requires token incentives.
But the facts indicate Web2 *won’t* incentivize Helium because demand is low. Even with a network of 500k hotspots, revenue is nonexistent.
The complete lack of end-user demand for Helium should not have come as a surprise.
A basic LoRaWAN market analysis would have revealed that this was a speculation bubble around a fake, overblown use case.
The ongoing Axie Infinity debacle is a similar case of @a16z's documented thought process being shockingly disconnected from reality: skeptics were vindicated within months, at the expense of unsophisticated end users turned investors.
.@pmarca spends much time labeling and psychoanalyzing the people who disagree with him, instead of focusing on the substance of why he thinks their object-level claims are wrong and his are right.

en.wikipedia.org/wiki/Bulverism
He accuses AI doomers of being “bootleggers”, which he explains means “self-interested opportunists who stand to financially profit” from claiming AI x-risk is a serious worry:
“If you are paid a salary or receive grants to foster AI panic… you are probably a Bootlegger.”
Thread of @pmarca's logically flimsy AGI survivability claims 🧵
Claim 1:
Marc claims it’s a “category error” to argue that a math-based system will have human-like properties, i.e. that rogue AI is a *logically incoherent* concept.
Actually, an AI might overpower humanity, or it might not. Either outcome is logically coherent.
Claim 2:
Marc claims rogue unaligned superintelligent AI is unlikely because AIs can "engage in moral thinking".
But what happens when a superintelligent goal-optimizing AI is run with anything less than perfect morality?
That's when we risk permanently disempowering humanity.