We really should be in the middle of a golden age of productivity. Within living memory, computers did not exist. Photocopiers did not exist. *Backspace* did not exist. You had to type it all by hand.
It wasn't that long ago that you couldn't search all your documents. Sort them. Back them up. Look things up. Copy/paste things. Email things. Change fonts of things. Undo things.
Instead, you had to type it all on a typewriter!
If you're doing information work, relative to your ancestors who worked with papyrus, paper, or typewriter, you are a golden god surfing on a sea of electrons. You can make things happen in seconds that would have taken them weeks, if they could do them at all.
We should also be super productive in the physical world. After all, our predecessors built railroads, skyscrapers, airplanes, and automobiles without computers or the internet. And built them fast. Using just typewriters, slide rules, & safety margins. patrickcollison.com/fast
This is a corollary to the @tylercowen / Thiel concept of the Great Stagnation. Where has all that extra productivity gone? It doesn't appear to have manifested in the physical world, for sure, though you can argue it *is* there in the internet world. There are a few possible theses...
Theses
1) The Great Distraction. All the productivity we gained has been frittered away on equal-and-opposite distractions like social media, games, etc.
2) The Great Dissipation. The productivity has been dissipated on things like forms, compliance, process, etc.
3) The Great Divergence. The productivity is here, it's just only harnessed by the indistractable few.
4) The Great Dilemma. The productivity has been burned in bizarre ways that require line-by-line "profiling" of everything, like this tunnel study. tunnelingonline.com/why-tunnels-in…
5) The Great Dumbness. The productivity is here, we've just made dumb decisions in the West while others have harnessed it. See for example China building a train station in nine hours vs taking 100-1000X that long to upgrade a Caltrain stop.
Btw when I say 100-1000X, I'm not kidding. November 2017 to Fall 2020 is ~3 years.
Three years vs nine hours is (3 * 365 * 24)/9 = 2920, which means the US needs almost 3000X as long to upgrade a train station as China does to build one from scratch. caltrain.com/projectsplans/…
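For anyone who wants to check the arithmetic, here's the same calculation as a tiny sketch; the figures are just the ones quoted above, nothing new:

```python
# Back-of-the-envelope ratio: ~3 years for the Caltrain upgrade (Nov 2017 to
# Fall 2020) vs the reported ~9 hours for a Chinese station build.
us_hours = 3 * 365 * 24        # 26,280 hours
china_hours = 9
print(us_hours / china_hours)  # 2920.0 -> almost 3000X
```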
Now, yes, I'm sure not every train station in China is built in nine hours, and I wouldn't be surprised if some regions in the US (or the West more broadly) do better than SFBA. But it feels likely that a systematic study would find a qualitative speed gap, 10-100X or more.
Back to main thread. I don't know the answer. But I think the line-by-line profiling approach used on the tunnels is the slow way to find out exactly what went wrong, while the look-at-other-countries-and-time-periods approach is the fast way of figuring out what might be right.
Theory: for things we can do completely on the computer, productivity has measurably accelerated. It is 100X faster to email something than to mail it.
The problem may be at the analog/digital interface, which makes robotics the limiting factor. Can we actuate as fast as we compute?
Essentially, representing a complex project on disk may not be the productivity win we think it is. Humans still need to comprehend all those electronic documents to build the thing in real life.
Perhaps robotics is the true productivity unlock. We haven’t gone full digital yet.
Both America and China were invested in the illusion that China wasn't already the world's strongest economy.
Psychologically, it suited the incumbent to appear strong. So America downplayed China's numbers.
Strategically, it suited the disruptor to appear weak. So China also sandbagged its own numbers.
But the illusion is becoming harder to maintain.
In retrospect, all the China cope over the last decade or so was really just the stealth on the Chinese stealth bomber.
Hide your strength and bide your time was Deng's strategy. Amazingly, denying China's strength somehow also became America's strategy.
For example, all the cope on China's demographics somehow being uniquely bad...when they have 1.4B+ people that crush every international science competition with minimal drug addiction, crime, or fatherlessness...and when their demographic problems have obvious robotic solutions.
Or, for another example, how MAGA sought to mimic China's manufacturing buildout and industrial policy without deeply understanding China's strengths in this area, which is like competing with Google by setting up a website. Vague references to 1945 substituted for understanding the year 2025.
One consequence of the cope is that China knows far more about America's strengths than vice versa. Surprisingly few Americans interested in re-industrialization have ever set foot in Shenzhen. Those who have, like @Molson_Hart, understand what modern China actually is.
Anyway, what @DoggyDog1208 calls the "skull chart" is the same phenomenon @yishan and I commented on months ago. Once China truly enters a vertical, like electric cars or solar, their pace of ascent[1] is so rapid that incumbents often don't even have time to react.
Now apply this at country level. China has flipped America so quickly on so many axes[2], particularly military ones like hypersonics or military-adjacent ones like power, that it can no longer be contained.
A major contributing factor was the dollar illusion. All that money printing made America think it was richer than China. And China was happy to let America persist in the illusion. But an illusion it was. Yet another way in which Keynesianism becomes the epitaph of empire.
The first kind of retard uses AI everywhere, even where it shouldn’t be used.
The second kind of retard sees AI everywhere, even where it isn’t used.
Usually, it’s obvious what threads are and aren’t AI-written.
But some people can’t tell the difference between normal writing and AI writing. And because they can’t tell the difference, they’ll either overuse AI…or accuse others of using AI!
What we may actually need are built-in statistical AI detectors for every public text field: paste a URL into an archive.is-like interface and get back the probability that any div on the page is AI-generated.
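As a rough sketch of what such an interface could look like: the scorer below is a toy placeholder (a real detector would use a trained classifier or token-level perplexity statistics), and nothing here refers to an existing service.

```python
import requests
from bs4 import BeautifulSoup

# Toy stand-in scorer: crude phrase matching, purely illustrative.
STOCK_PHRASES = ("as an ai language model", "delve into", "in today's fast-paced world")

def ai_probability(text: str) -> float:
    hits = sum(phrase in text.lower() for phrase in STOCK_PHRASES)
    return min(1.0, hits / 2)  # 2+ stock phrases -> 1.0

def scan_page(url: str) -> list[tuple[float, str]]:
    """Fetch a page and score each substantial <div> for 'probably AI-generated'."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    scored = []
    for div in soup.find_all("div"):
        text = div.get_text(" ", strip=True)
        if len(text) > 200:  # ignore navigation chrome and short fragments
            scored.append((ai_probability(text), text[:80]))
    return sorted(scored, reverse=True)

# Usage: for prob, snippet in scan_page("https://example.com"): print(prob, snippet)
```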
In general my view is that AI text shouldn't be used raw. It's like a search engine result: lorem ipsum, useful for research but not as final results. AI code is different, but even that requires review. AI visuals are different still, and you can sometimes use them directly.
We’re still developing these conventions, as the tech itself is of course a moving target. But it is interesting that even technologists (who see the huge time-savings that AI gives for, say, data analysis or vibe coding) are annoyed by AI slop. Imagine how much the people who don’t see the positive parts of AI may hate AI.
TLDR: slop is the new spam, and we’ll need new tools and conventions to defeat it.
I agree email spammers will keep adapting.
But I don’t know if a typical poster will keep morphing their content in such a way.
AI prompting scales, because prompting is just typing.
But AI verifying doesn’t scale, because verifying AI output involves much more than just typing.
Sometimes you can verify by eye, which is why AI is great for frontend, images, and video. But for anything subtle, you need to read the code or text deeply — and that means knowing the topic well enough to correct the AI.
Researchers are well aware of this, which is why there’s so much work on evals and hallucination.
However, the concept of verification as the bottleneck for AI users is under-discussed. Yes, you can try formal verification, or critic models where one AI checks another, or other techniques. But even being aware of verification as a first-class problem is half the battle.
For users: AI verifying is as important as AI prompting.
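For the critic-model idea mentioned above, here's a minimal sketch of the pattern; `call_model(model, prompt)` is a hypothetical stand-in for whatever LLM API you actually use, not a real library call:

```python
# Hypothetical helper: fill in with your actual LLM API client.
def call_model(model: str, prompt: str) -> str:
    ...

def generate_and_verify(task: str, max_rounds: int = 3) -> str:
    """Generator/critic loop: one model drafts, a second model checks, and the
    draft is revised until the critic passes it or the rounds run out. The
    critic narrows what a human must review; it doesn't remove the need for review."""
    draft = call_model("generator", task)
    for _ in range(max_rounds):
        verdict = call_model(
            "critic",
            f"Task: {task}\n\nDraft:\n{draft}\n\n"
            "List concrete errors, or reply with exactly OK if there are none.",
        )
        if verdict.strip() == "OK":
            break
        draft = call_model("generator", f"Task: {task}\n\nRevise the draft to fix:\n{verdict}")
    return draft  # still needs human verification for anything subtle
```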
I love everything @karpathy has done to popularize vibe coding.
But then after you prototype with vibe coding, you need to get to production with right coding.
And that means AI verifying, not just AI prompting. That’s easy when output is visual, much harder when it’s textual.
@karpathy The question when using AI is: how can I inexpensively verify that the output of this AI model is correct?
We take for granted the human eye, which is amazing at finding errors in images, videos, and user interfaces.
But we need other kinds of verifiers for other domains.