If you plot a histogram of latency before and after the introduction of a load balancer, you'll often find that average latency gets a bit worse (as you need to do two hops: load balancer and then server), but worst case latency gets way better.
Often an acceptable tradeoff.
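(To make that concrete, here's a minimal simulation sketch. All numbers are made up for illustration: 50 backends with one degraded box, exponential service times, and a ~2 ms extra hop through the balancer.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical setup: 50 backends, one of which is degraded (5x slower).
# Going direct, clients pick a backend uniformly at random.
# Through the load balancer, every request pays a ~2 ms extra hop, but
# health checks route traffic away from the degraded backend.
FAST_MS, SLOW_MS, HOP_MS, N_SERVERS = 20.0, 100.0, 2.0, 50

picked_slow = rng.integers(N_SERVERS, size=n) == 0
direct = np.where(picked_slow,
                  rng.exponential(SLOW_MS, n),
                  rng.exponential(FAST_MS, n))
balanced = rng.exponential(FAST_MS, n) + HOP_MS

for name, lat in [("direct", direct), ("load-balanced", balanced)]:
    print(f"{name:14s} mean={lat.mean():5.1f} ms   "
          f"p99.9={np.percentile(lat, 99.9):6.1f} ms")
```

In this toy model the direct path has a slightly lower mean but a far longer tail; going through the balancer costs a couple of milliseconds on average and cuts the worst case dramatically.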
Similarly, if you plot a histogram of expected financial profit before & after buying a collar, you'll find that the average profit gets a bit worse (due to the cost of the collar) but worst case profit gets way better.
Bitcoin ushered in the possibility of truly free markets, fully decentralized, high risk & high reward.
Often, however, thesis and antithesis form a synthesis. The success of stablecoins shows how valuable volatility reduction can be in some contexts. stablecoinstats.com
In web2, the financial plumbing[1] that makes things possible is hidden from users. The volatility is hidden, as is the cost in privacy.
In web3, that financial plumbing is made more transparent. The volatility is visible, as is the cost in coins.
[1] adbutler.com/blog/article/w…
Think about ads: how often do you click? And how often do you actually buy? Rarely, right?
That means conversions are rare events. Rarity means high financial variance. Giant web2 companies can buffer this variance, this volatility, so it's not visible.
Oh, but it exists.
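(Back-of-the-envelope sketch of why rarity means variance; the click volume, conversion rate, and payout below are hypothetical.)

```python
import math

# Hypothetical small publisher: 5,000 ad clicks per day,
# 0.2% of clicks convert, $40 payout per conversion.
clicks_per_day = 5_000
p_convert = 0.002
payout = 40.0

expected_conversions = clicks_per_day * p_convert            # ~10 per day
mean_revenue = expected_conversions * payout                 # ~$400 per day
std_revenue = payout * math.sqrt(
    clicks_per_day * p_convert * (1 - p_convert))            # binomial std

print(f"mean  ${mean_revenue:,.0f}/day")
print(f"std   ${std_revenue:,.0f}/day")
print(f"relative swing ~ {std_revenue / mean_revenue:.0%}")
# With only ~10 expected conversions a day, daily revenue swings by
# roughly 1/sqrt(10), about 30%. The rarer the conversion, the bigger
# the relative swing; an aggregator with billions of clicks averages it out.
```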
This thread was prompted in part by @levie's thoughtful comments.
In short, I recognize that we do need tools to control the visible financial volatility of web3's coins.
But I want to note that this is in some ways better than the *invisible* financial volatility of web2's ads.
Both America and China were invested in the illusion that China wasn't already the world's strongest economy.
Psychologically, it suited the incumbent to appear strong. So America downplayed China's numbers.
Strategically, it suited the disruptor to appear weak. So China also sandbagged its own numbers.
But the illusion is becoming harder to maintain.
In retrospect, all the China cope over the last decade or so was really just the stealth on the Chinese stealth bomber.
"Hide your strength and bide your time" was Deng's strategy. Amazingly, denying China's strength somehow also became America's strategy.
For example, all the cope on China's demographics somehow being uniquely bad...when they have 1.4B+ people that crush every international science competition with minimal drug addiction, crime, or fatherlessness...and when their demographic problems have obvious robotic solutions.
Or, for another example, how MAGA sought to mimic China's manufacturing buildout and industrial policy without deeply understanding China's strengths in this area, which is like competing with Google by setting up a website. Vague references to 1945 substituted for understanding the year 2025.
One consequence of the cope is that China knows far more about America's strengths than vice versa. Surprisingly few Americans interested in re-industrialization have ever set foot in Shenzhen. Those who have, like @Molson_Hart, understand what modern China actually is.
Anyway, what @DoggyDog1208 calls the "skull chart" is the same phenomenon @yishan and I commented on months ago. Once China truly enters a vertical, like electric cars or solar, their pace of ascent[1] is so rapid that incumbents often don't even have time to react.
Now apply this at country level. China has flipped America so quickly on so many axes[2], particularly military ones like hypersonics or military-adjacent ones like power, that it can no longer be contained.
A major contributing factor was the dollar illusion. All that money printing made America think it was richer than China. And China was happy to let America persist in the illusion. But an illusion it was. Yet another way in which Keynesianism becomes the epitaph of empire.
The first kind of retard uses AI everywhere, even where it shouldn’t be used.
The second kind of retard sees AI everywhere, even where it isn’t used.
Usually, it’s obvious what threads are and aren’t AI-written.
But some people can’t tell the difference between normal writing and AI writing. And because they can’t tell the difference, they’ll either overuse AI…or accuse others of using AI!
What we may actually need are built-in statistical AI detectors for every public text field. Paste a URL into an archive.is-like interface and get back the probability that any div on the page is AI-generated.
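(A rough sketch of what such an interface might look like. `score_ai_probability` is a hypothetical placeholder for whatever statistical detector you'd plug in; nothing here is an existing product.)

```python
import requests
from bs4 import BeautifulSoup

def score_ai_probability(text: str) -> float:
    """Hypothetical placeholder: a real implementation would call a
    statistical AI-text detector (perplexity-based, a trained classifier,
    watermark checks, etc.) and return P(text is AI-generated)."""
    raise NotImplementedError("plug in a detector model here")

def scan_page(url: str) -> list[tuple[float, str]]:
    """Fetch a URL, pull the text out of each <div>, and score it."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for div in soup.find_all("div"):
        text = div.get_text(" ", strip=True)
        if len(text) > 200:                       # skip tiny fragments
            results.append((score_ai_probability(text), text[:80]))
    return sorted(results, reverse=True)          # most suspicious first

# Usage, once a real detector is plugged in:
#   scan_page("https://example.com") -> [(0.97, "First 80 chars..."), ...]
```

Scoring per div rather than per page is what lets a mixed page, say a human post with AI-generated replies, get flagged at the right granularity.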
In general my view is that AI text shouldn’t be used raw. It’s like a search engine result; it’s lorem ipsum. Useful for research, but not for final results. AI code is different, but even that requires review. AI visuals are different still, and you can sometimes use them directly.
We’re still developing these conventions, as the tech itself is of course a moving target. But it is interesting that even technologists (who see the huge time-savings that AI gives for, say, data analysis or vibe coding) are annoyed by AI slop. Imagine how much the people who don’t see the positive parts of AI may hate AI.
TLDR: slop is the new spam, and we’ll need new tools and conventions to defeat it.
I agree email spammers will keep adapting.
But I don’t know if a typical poster will keep morphing their content in such a way.
AI prompting scales, because prompting is just typing.
But AI verifying doesn’t scale, because verifying AI output involves much more than just typing.
Sometimes you can verify by eye, which is why AI is great for frontend, images, and video. But for anything subtle, you need to read the code or text deeply — and that means knowing the topic well enough to correct the AI.
Researchers are well aware of this, which is why there’s so much work on evals and hallucination.
However, the concept of verification as the bottleneck for AI users is under-discussed. Yes, you can try formal verification, or critic models where one AI checks another, or other techniques. But to even be aware of the issue as a first-class problem is half the battle.
For users: AI verifying is as important as AI prompting.
I love everything @karpathy has done to popularize vibe coding.
But then after you prototype with vibe coding, you need to get to production with right coding.
And that means AI verifying, not just AI prompting. That’s easy when output is visual, much harder when it’s textual.
@karpathy The question when using AI is: how can I inexpensively verify the output of this AI model is correct?
We take for granted the human eye, which is amazing at finding errors in images, videos, and user interfaces.
But we need other kinds of verifiers for other domains.