Liron Shapira
Aug 3, 2022 · 21 tweets · 12 min read
.@a16z, @Accel and @paradigm looked directly at a blatant Ponzi scheme, Axie Infinity.

They called it “play-to-earn” and invested $311M into its parent company.

Then it collapsed.

How Web3 VCs stumbled into funding a Ponzi. 🧵
First, let’s be clear that Axie really is a Ponzi scheme. To quote @matt_levine's newsletter from last month: “Axie Infinity is a Ponzi scheme”.
This viral Substack essay by @packyM, published July 19, 2021, is representative of last year’s peak VC hype around Axie: notboring.co/p/infinity-rev…
For context, VCs are used to measuring companies by their revenue growth.

Exponential growth is taken as a sign that a startup has discovered a lucrative new business model.

Axie’s revenue growth was off the charts.
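For intuition on why a chart like that proves less than it seems, here's a minimal sketch with purely hypothetical numbers (not Axie's actual figures): if “revenue” is mostly a fee skimmed off each new player's buy-in, then exponential signups mechanically produce exponential revenue, no underlying product value required.

```python
# Toy cash-flow model. All parameters are hypothetical, chosen only to
# illustrate the shape of the curve, not to match Axie's real economics.

def monthly_revenue(months, initial_signups=1_000, signup_growth=1.5,
                    avg_buy_in=600.0, fee_rate=0.05):
    """Protocol 'revenue' per month when it is simply a fee on new-player buy-ins."""
    revenue = []
    signups = initial_signups
    for _ in range(months):
        revenue.append(signups * avg_buy_in * fee_rate)
        signups *= signup_growth  # exponential signups -> exponential revenue
    return revenue

print([round(r) for r in monthly_revenue(8)])
# The curve compounds at the signup growth rate, which looks identical to the
# revenue curve of a genuinely productive startup, right up until signups stall.
```

The top-line graph alone can't tell you which of the two you're looking at.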
Packy was fully in that VC mindset when he attempted to explain Axie’s revenue growth to his readers.

He claimed it was a result of a uniquely blockchain-enabled innovation: “letting players keep most of the value created”.
A similar VC-brain analysis was promulgated by @cdixon.

Chris claimed that Axie’s revenue growth came from the success of innovations like “letting users participate in the financial upside of the community” and “lowering take rates”.
Thanks to VCs' misguided enthusiasm for Axie's business model, the term “play-to-earn” (P2E) became a trendy buzzword in the VC community, and more money poured in.
On Oct 5, 2021, @a16z led a $152M funding round in Axie Infinity maker @SkyMavisHQ.

@AriannaSimpson, the partner who joined Sky Mavis’s board, described the game as “a new way for anyone to turn their time into money”.
Packy's Substack post never mentioned or addressed the possibility that Axie might be a Ponzi scheme.

He did, however, like a comment arguing that Ponzis are similar to regular businesses.
Multiple comments dating back to July 2021 correctly identified why Axie is a Ponzi. None received a like or reply from Packy.
Prior to Packy’s post, others had already caught on to the fact that Axie is structurally a Ponzi scheme.

If any VC had searched “Axie Infinity” on YouTube, they could have watched this helpful animated explainer, posted Jul 4, 2021:
One YouTube commenter, who had been hoping for an opportunity to “play to earn”, decided to steer clear of Axie.

He correctly understood that, despite the potential for large sums of money, he was more likely to *lose* money playing the game than to earn it.
Axie Infinity’s revenue peaked in Aug 2021, just one month after Packy’s post.

The truth is, we were never looking at the revenue graph of a promising startup. We were looking at the revenue graph of a Ponzi.
After the scheme's inevitable collapse, news organizations picked up the story that thousands of players had been left financially worse off.

But it shouldn’t have been a surprise to any qualified analyst that this Ponzi scheme played out the way Ponzis always do.
What lesson can we take away from Axie’s rise and fall?

Crypto throws a wrench into the usual analysis of a startup’s growth.

Analysts must learn to distinguish positive-sum demand from demand for easy money. Don’t be fooled by what users are saying; even users can’t tell the difference.
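Here's one way to make that distinction concrete, as a toy accounting sketch (parameters are hypothetical): in a closed player-vs-player economy, players as a group can never cash out more than they collectively paid in, minus whatever the protocol skims in fees. So demand driven by the hope of “earning” is, in aggregate, a claim on future buy-ins.

```python
# Aggregate accounting sketch. The fee rate and buy-in total are made up for
# illustration; the point is the sign of the result, not its magnitude.

def aggregate_player_pnl(total_buy_ins, fee_rate):
    """Combined profit/loss of all players, when payouts come only from other players."""
    total_cash_outs_available = total_buy_ins * (1 - fee_rate)  # what's left to withdraw
    return total_cash_outs_available - total_buy_ins            # always <= 0

# Positive-sum demand: players pay because the game is worth the price as entertainment.
# Easy-money demand: players pay expecting to withdraw more than they put in,
# which is impossible in aggregate; early earners are paid out of later buy-ins.
print(aggregate_player_pnl(total_buy_ins=1_000_000_000, fee_rate=0.05))
# Roughly -50,000,000: the player base as a whole is guaranteed to lose the fee take.
```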
I was hoping some VCs would publicly acknowledge last year's errors in judgement.

They simply didn't realize that a Ponzi scheme could put up the same dazzling growth numbers as a high-performing startup.

Recent commenters on Packy's Substack post hoped for a post-mortem too.
By the way, while this thread has largely focused on Packy, it’s only because he’s been one of Axie’s biggest champions. He also advises @a16z Crypto, the largest fund of its kind.

Rest assured, plenty of other VCs were making the same arguments for Axie and play-to-earn gaming.
I recently put together this video to show how @cdixon and @AriannaSimpson are framing the situation.

I'm not seeing any acknowledgement of, or accountability for, the serious flaws in their 2021 Axie analysis, which is disappointing this late in the game.

The closest we have to a post-mortem is from @DKThomp’s excellent podcast last month.

How is @packyM reflecting on the decision to fund and hype Axie?

Here’s his answer.

I also highly recommend the full interview: theringer.com/2022/7/26/2327…
Packy’s post-mortem is that “[Axie’s] economics weren’t ready for that kind of usage”, which couldn’t possibly have been predicted in 2017-18.

Really?

The graph that stoked his excitement in Jul ‘21 was pure Ponzi.

I’d love to see more accountability from VCs who hyped this.
If you’re still on the fence about whether Web3 is a #HollowAbstraction, consider this question:

If crypto VCs can stumble into funding a Ponzi on the blockchain, where else are they unintentionally misleading everyone?

