Liron Shapira
Jul 26, 2022 · 11 tweets · 7 min read
.@Helium, often cited as one of the best examples of a Web3 use case, has received $365M of investment led by @a16z.

Regular folks have also been convinced to spend $250M buying hotspot nodes, in hopes of earning passive income.

The result? Helium's total revenue is $6.5k/month
Members of the r/helium subreddit have been increasingly vocal about seeing poor Helium returns.

On average, they spent $400-800 to buy a hotspot. They were expecting $100/month, enough to recoup their costs and enjoy passive income.

Then their earnings dropped to only $20/mo.
These folks maintain false hope of positive ROI. They still don’t realize their share of data-usage revenue isn’t actually $20/month; it’s $0.01/month.

The other $19.99 is a temporary subsidy from investment in growing the network, and speculation on the value of the $HNT token.
Meanwhile, according to Helium network rules, $300M (30M $HNT) per year gets siphoned off by @novalabs_, the corporation behind Helium.

This "revenue" on the books, which comes mainly from retail speculators, is presumably what justified such an aggressive investment by @a16z.
.@cdixon's "mental model" thread on Helium claims that this kind of network can't be built in Web2 because it requires token incentives.

But the facts indicate Web2 *won’t* incentivize Helium because demand is low. Even with a network of 500k hotspots, revenue is negligible.
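The thread's numbers can be checked with back-of-envelope arithmetic. Every figure below (total data revenue, hotspot count, typical payout, and hardware cost) is taken from the tweets above, not independently verified:

```python
# Sanity-check the thread's own figures; none are independently verified.
network_revenue_per_month = 6_500   # total Helium data revenue, $/mo (claimed)
hotspot_count = 500_000             # claimed network size
monthly_payout = 20                 # typical per-hotspot earnings, $/mo (claimed)
hotspot_costs = (400, 800)          # typical hardware price range, $ (claimed)

# Data-usage revenue actually attributable to each hotspot:
revenue_per_hotspot = network_revenue_per_month / hotspot_count
print(f"data revenue per hotspot: ${revenue_per_hotspot:.3f}/mo")

# The remainder of the $20/mo payout is subsidy from investment and
# $HNT speculation, not end-user demand:
subsidy_share = (monthly_payout - revenue_per_hotspot) / monthly_payout
print(f"subsidy share of payout: {subsidy_share:.1%}")

# Payback period on the hardware if payouts reflected only data revenue:
for cost in hotspot_costs:
    years = cost / revenue_per_hotspot / 12
    print(f"${cost} hotspot pays for itself in ~{years:,.0f} years at data-only rates")
```

At $0.013/month of real usage revenue per hotspot, over 99.9% of the $20 payout is subsidy, and the hardware never recoups its cost on data revenue alone.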
The complete lack of end-user demand for Helium should not have come as a surprise.

A basic LoRaWAN market analysis would have revealed that this was a speculation bubble around a fake, overblown use case.
The ongoing Axie Infinity debacle is a similar case of @a16z's documented thought process being shockingly disconnected from reality: skeptics were vindicated within a matter of months, at the expense of unsophisticated end users turned investors.

More generally, I posit the two keys to understanding Web3 are:

1) Beware of easy money schemes
2) Beware of #HollowAbstractions

When proponents like @cdixon promise riches to come via abstract "mental models", we can gently guide them to focus on money flows and use cases.
I've posed the question to the @a16z partner involved with Axie Infinity, "how does money flow into the system?"

He blocked me.

The tech community deserves better.

Let's continue to press for answers to simple questions about Web3's money flows and use cases.
@helium @a16z 🤔
