Baumol's cost disease doesn't happen by accident. Labor productivity can't rise if the state bans innovation, as it does in healthcare, education, and housing.
Try using AI to automate, say, medical imaging — and see how much the state interferes. statnews.com/2020/02/28/ai-…
The cost of medical diagnosis is not simply the cost in dollars, but also the cost in time and convenience. In many studies, AI outperforms all but the very best doctors — and does so inexpensively and quickly.
Only a fraction of biomedical founders who've been obstructed by the rat's nest of red tape ever come forward, out of fear of retaliation, so multiply every story like this by 100. massdevice.com/mobile-mims-lo…
Why is the price of housing so high? Because the Fed printed $1T+ to prop up the price of mostly worthless mortgage-backed securities, and because city governments like SF heavily restrict new construction.
Reducing the cost of housing is not impossible. China can build infrastructure 100-1000X faster than the US because they allow innovation in construction.
Compare hours to years. Then do a financial model. That kind of improvement in build time slashes rent.
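To make that concrete, here's a back-of-the-envelope sketch with made-up numbers (not from any cited study): shorter build times mean less capital tied up in financing, and that carrying cost is part of what rent has to recover.

```python
# Back-of-the-envelope sketch with hypothetical numbers: shorter construction
# means less interest accrues on capital tied up during the build, so the rent
# needed to hit a given yield falls. (This ignores the bigger effect, faster
# supply growth, which pushes market rents down further.)

def required_monthly_rent(build_cost, build_years, interest_rate, target_yield):
    carrying_cost = build_cost * ((1 + interest_rate) ** build_years - 1)
    total_capital = build_cost + carrying_cost
    return total_capital * target_yield / 12  # monthly rent for the target annual yield

slow = required_monthly_rent(300_000, build_years=3.0, interest_rate=0.08, target_yield=0.06)
fast = required_monthly_rent(300_000, build_years=0.1, interest_rate=0.08, target_yield=0.06)
print(f"3-year build needs ~${slow:,.0f}/mo; a weeks-long build needs ~${fast:,.0f}/mo")
```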
How about education? Loan subsidies, state-gated accreditation, inhibition of charter schools...from K-12 to higher ed, that's why kids are *still* getting on yellow school buses and attending in-person lectures.
That was the pre-2019 status quo. But these institutions failed so hard during COVID that they've subsidized the tech alternatives. We're finally unlocking innovation in online ed, healthcare (eg mRNA vaccines), even housing (via remote work).
Automate faster than they inflate.
Love this thread? Want dozens more specific examples of how regulation holds back innovation?
Well, knock yourself out. Here's a lecture I gave in 2013 on the subject...I think it holds up reasonably well today. github.com/ladamalina/cou…
Both America and China were invested in the illusion that China wasn't already the world's strongest economy.
Psychologically, it suited the incumbent to appear strong. So America downplayed China's numbers.
Strategically, it suited the disruptor to appear weak. So China also sandbagged its own numbers.
But the illusion is becoming harder to maintain.
In retrospect, all the China cope over the last decade or so was really just the stealth on the Chinese stealth bomber.
Hide your strength and bide your time was Deng's strategy. Amazingly, denying China's strength somehow also became America's strategy.
For example, all the cope on China's demographics somehow being uniquely bad...when they have 1.4B+ people who crush every international science competition with minimal drug addiction, crime, or fatherlessness...and when their demographic problems have obvious robotic solutions.
Or, for another example, how MAGA sought to mimic China's manufacturing buildout and industrial policy without deeply understanding China's strengths in this area, which is like competing with Google by setting up a website. Vague references to 1945 substituted for understanding the year 2025.
One consequence of the cope is that China knows far more about America's strengths than vice versa. Surprisingly few Americans interested in re-industrialization have ever set foot in Shenzhen. Those who have, like @Molson_Hart, understand what modern China actually is.
Anyway, what @DoggyDog1208 calls the "skull chart" is the same phenomenon @yishan and I commented on months ago. Once China truly enters a vertical, like electric cars or solar, their pace of ascent[1] is so rapid that incumbents often don't even have time to react.
Now apply this at country level. China has flipped America so quickly on so many axes[2], particularly military ones like hypersonics or military-adjacent ones like power, that it can no longer be contained.
A major contributing factor was the dollar illusion. All that money printing made America think it was richer than China. And China was happy to let America persist in the illusion. But an illusion it was. Yet another way in which Keynesianism becomes the epitaph of empire.
The first kind of retard uses AI everywhere, even where it shouldn’t be used.
The second kind of retard sees AI everywhere, even where it isn’t used.
Usually, it’s obvious what threads are and aren’t AI-written.
But some people can’t tell the difference between normal writing and AI writing. And because they can’t tell the difference, they’ll either overuse AI…or accuse others of using AI!
What we may actually need are built-in statistical AI detectors for every public text field. Paste a URL into an archive.is-like interface and get back the probability that any div on the page is AI-generated.
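A minimal, purely illustrative sketch of what such an interface could look like: fetch the page, walk its divs, and score each one with whatever statistical detector you trust. The `score_ai_probability` function below is a dummy placeholder, not a real detector.

```python
# Illustrative sketch of an archive.is-style checker: fetch a page and
# score each div for AI-generation probability.
import requests
from bs4 import BeautifulSoup

def score_ai_probability(text: str) -> float:
    # Dummy stand-in so the sketch runs end to end; swap in a real statistical
    # detector (perplexity/burstiness model, fine-tuned classifier, etc.).
    return 0.5

def scan_page(url: str):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    scored = []
    for div in soup.find_all("div"):
        text = div.get_text(" ", strip=True)
        if len(text) > 200:  # skip navigation chrome and short fragments
            scored.append((score_ai_probability(text), text[:80]))
    return sorted(scored, reverse=True)  # most-suspect divs first
```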
In general my view is that AI text shouldn’t be used raw. It’s like a search engine result; it’s lorem ipsum. Useful for research but not for final results. AI code is different, but even that requires review. AI visuals are different still, and you can sometimes use them directly.
We’re still developing these conventions, as the tech itself is of course a moving target. But it is interesting that even technologists (who see the huge time-savings that AI gives for, say, data analysis or vibe coding) are annoyed by AI slop. Imagine how much the people who don’t see the positive parts of AI must hate it.
TLDR: slop is the new spam, and we’ll need new tools and conventions to defeat it.
I agree email spammers will keep adapting.
But I don’t know if a typical poster will keep morphing their content in such a way.
AI prompting scales, because prompting is just typing.
But AI verifying doesn’t scale, because verifying AI output involves much more than just typing.
Sometimes you can verify by eye, which is why AI is great for frontend, images, and video. But for anything subtle, you need to read the code or text deeply — and that means knowing the topic well enough to correct the AI.
Researchers are well aware of this, which is why there’s so much work on evals and hallucination.
However, the concept of verification as the bottleneck for AI users is under-discussed. Yes, you can try formal verification, or critic models where one AI checks another, or other techniques. But just being aware of the issue as a first-class problem is half the battle.
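For illustration, the critic-model idea looks roughly like this. The `generate` function is a placeholder for whatever model API you actually call, and none of the prompts are prescriptive.

```python
# Rough shape of the critic-model pattern: one call drafts, a second call
# critiques, and the loop stops when the critic signs off. `generate` is a
# placeholder for a real LLM call; this is a pattern sketch, not a library.

def generate(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM API of choice here")

def draft_with_critic(task: str, max_rounds: int = 3) -> str:
    draft = generate(f"Complete this task:\n{task}")
    for _ in range(max_rounds):
        critique = generate(
            "You are a strict reviewer. List factual or logical errors in the "
            f"answer, or reply OK if there are none.\nTask: {task}\nAnswer: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break
        draft = generate(
            f"Revise the answer to address these issues.\nTask: {task}\n"
            f"Answer: {draft}\nIssues: {critique}"
        )
    return draft  # the critic reduces errors but is not a proof; a human still checks
```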
For users: AI verifying is as important as AI prompting.
I love everything @karpathy has done to popularize vibe coding.
But then after you prototype with vibe coding, you need to get to production with right coding.
And that means AI verifying, not just AI prompting. That’s easy when output is visual, much harder when it’s textual.
@karpathy The question when using AI is: how can I inexpensively verify that the output of this AI model is correct?
We take for granted the human eye, which is amazing at finding errors in images, videos, and user interfaces.
But we need other kinds of verifiers for other domains.
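As one concrete example of a non-visual verifier: if the AI writes you a sorting function, you can't eyeball correctness, but a few hundred randomized property checks are cheap. The `ai_sort` function below is a stand-in for whatever the model produced.

```python
# Cheap verifier for a textual/code domain: property-based checks.
# `ai_sort` stands in for a function the model wrote; the properties,
# not the eye, do the verification.
import random

def ai_sort(xs):
    return sorted(xs)  # pretend this body came back from the model

def verify_sort(fn, trials=1000):
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        out = fn(xs)
        assert out == sorted(out), "output is not ordered"
        assert sorted(out) == sorted(xs), "output is not a permutation of the input"
    return True

print(verify_sort(ai_sort))  # automatic, domain-specific, and far cheaper than reading by eye
```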