This proposal is essentially privatized cyberwar on millions of innocent Russians. In my view, better to do targeted positive acts (offering asylum, helping dissidents) or targeted negative acts than untargeted broad attacks.
Using US tech power against millions of Russians in this way isn’t like a typical deplatforming, where it's a consequence-free act by a huge company on a powerless individual.
This is Russia. They may hit back, in nasty ways.
Third, retaliation may also not stop at cyberwar.
We have not yet seen ideologically motivated attacks on tech CEOs, but Russia has signaled its willingness to track, poison, and murder its enemies. Even in the middle of London.
Again, if you do this, go in eyes open.
Fourth, talk to your team.
I don't want to quite say that throwing your firm into the global cyberwar is like picking up a rifle and standing a post.
But it does expose your team & customers to targeted lifelong retaliation by nasty people. They should take that risk knowingly.
Fifth, the US military can't protect you against cyberattack.
After SolarWinds & OPM, it's clear the US is a sitting duck for cyber. It can't protect itself, so it can't protect you. Thus any entity that decides to engage in privatized cyberwar does so at its own risk.
Sixth, the US military won't defray your costs.
If you decide to enter a privatized cyberwar, the US government is not going to pay for any damages that you, your employees, or your customers may suffer as a result.
And this kind of war can get extremely expensive.
Seventh, spiraling may ensue.
At the beginning of WW1, few anticipated how unpredictably things could escalate. And many US tech cos are themselves vulnerable to cutoffs from China, a Russian ally.
This game has more than one move, and the enemy also gets a say.
The age of total cyberwar
I've been apprehensive about this for some time: the involvement of global firms can make a conflict spiral. The potential for this has been clear for years, but perhaps we can still come back from the precipice.
Tech companies have grown accustomed to taking consequence-free actions against individuals. Arbitrary corporate deplatforming of folks across the political spectrum is common.
A state like Russia is a totally different beast.
No one thought WW1 would spiral as it did.
A great way to internationalize the conflict is for transnational tech companies to get involved in a global, privatized cyberwar. This may not play out in a feel-good way.
At a minimum, we should game out the possible consequences.
Broad attacks may be counterproductive.
Mass cyberwar like what is proposed below may actually make Russians rally around the regime, as no distinction is being made between civilian & combatant.
Both America and China were invested in the illusion that China wasn't already the world's strongest economy.
Psychologically, it suited the incumbent to appear strong. So America downplayed China's numbers.
Strategically, it suited the disruptor to appear weak. So China also sandbagged its own numbers.
But the illusion is becoming harder to maintain.
In retrospect, all the China cope over the last decade or so was really just the stealth on the Chinese stealth bomber.
Hide your strength and bide your time was Deng's strategy. Amazingly, denying China's strength somehow also became America's strategy.
For example, all the cope on China's demographics somehow being uniquely bad...when they have 1.4B+ people that crush every international science competition with minimal drug addiction, crime, or fatherlessness...and when their demographic problems have obvious robotic solutions.
Or, for another example, how MAGA sought to mimic China's manufacturing buildout and industrial policy without deeply understanding China's strengths in this area, which is like competing with Google by setting up a website. Vague references to 1945 substituted for understanding the year 2025.
One consequence of the cope is that China knows far more about America's strengths than vice versa. Surprisingly few Americans interested in re-industrialization have ever set foot in Shenzhen. Those who have, like @Molson_Hart, understand what modern China actually is.
Anyway, what @DoggyDog1208 calls the "skull chart" is the same phenomenon @yishan and I commented on months ago. Once China truly enters a vertical, like electric cars or solar, their pace of ascent[1] is so rapid that incumbents often don't even have time to react.
Now apply this at country level. China has flipped America so quickly on so many axes[2], particularly military ones like hypersonics or military-adjacent ones like power, that it can no longer be contained.
A major contributing factor was the dollar illusion. All that money printing made America think it was richer than China. And China was happy to let America persist in the illusion. But an illusion it was. Yet another way in which Keynesianism becomes the epitaph of empire.
The first kind of retard uses AI everywhere, even where it shouldn’t be used.
The second kind of retard sees AI everywhere, even where it isn’t used.
Usually, it’s obvious what threads are and aren’t AI-written.
But some people can’t tell the difference between normal writing and AI writing. And because they can’t tell the difference, they’ll either overuse AI…or accuse others of using AI!
What we actually may need are built-in statistical AI detectors for every public text field. Paste a URL into an archive.is-like interface and get back the probability that any div on the page is AI-generated.
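To make the idea concrete, here is a toy sketch of the kind of statistic such a detector might compute. This is a hypothetical heuristic for illustration only, not a real classifier — production detectors use language-model perplexity and trained models — and the function name and scoring rule are my own inventions. It scores text by how uniform its sentence lengths are, one weak signal sometimes associated with machine-generated prose:

```python
import re
import statistics

def ai_likeness_score(text: str) -> float:
    """Toy heuristic: return a score in [0, 1] where higher means
    'more uniform sentence lengths'. One weak signal, not a verdict —
    a real detector would combine many such features with a trained model."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if len(sentences) < 3:
        return 0.0  # too little data to say anything
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    if mean == 0:
        return 0.0
    # Coefficient of variation: low variation -> high uniformity score.
    cv = statistics.stdev(lengths) / mean
    return max(0.0, 1.0 - cv)
```

An archive.is-like service could run a battery of such features over each div on a page and report a combined probability, though calibrating that probability is the hard part.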
In general my view is that AI text shouldn’t be used raw. It’s like a search engine result: lorem ipsum. Useful for research but not final results. AI code is different, but even that requires review. AI visuals are different still, and you can sometimes use them directly.
We’re still developing these conventions, as the tech itself is of course a moving target. But it is interesting that even technologists (who see the huge time-savings that AI gives for, say, data analysis or vibe coding) are annoyed by AI slop. Imagine how much the people who don’t see the positive parts of AI may hate AI.
TLDR: slop is the new spam, and we’ll need new tools and conventions to defeat it.
I agree email spammers will keep adapting.
But I don’t know if a typical poster will keep morphing their content in such a way.
AI prompting scales, because prompting is just typing.
But AI verifying doesn’t scale, because verifying AI output involves much more than just typing.
Sometimes you can verify by eye, which is why AI is great for frontend, images, and video. But for anything subtle, you need to read the code or text deeply — and that means knowing the topic well enough to correct the AI.
Researchers are well aware of this, which is why there’s so much work on evals and hallucination.
However, the concept of verification as the bottleneck for AI users is under-discussed. Yes, you can try formal verification, or critic models where one AI checks another, or other techniques. But even being aware of the issue as a first-class problem is half the battle.
For users: AI verifying is as important as AI prompting.
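One cheap verification technique worth naming: property-based checks against a trusted oracle. The sketch below is a hypothetical example (the function names are mine, and the bubble sort stands in for whatever code a model actually produced) showing how you can verify AI-generated code programmatically instead of reading it line by line:

```python
import random

def reference_sort(xs):
    """Trusted, obvious implementation used as an oracle."""
    return sorted(xs)

def ai_generated_sort(xs):
    # Stand-in for code an AI model produced; in practice you'd paste
    # the model's output here and run the same checks against it.
    out = list(xs)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

def cheap_verify(candidate, oracle, trials=200):
    """Property-based check: random inputs, compare to the oracle.
    Far cheaper than a deep code read, though it only catches
    behavioral bugs, not style or security issues."""
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if candidate(xs) != oracle(xs):
            return False
    return True
```

This only works when you have an oracle or a checkable property, which is exactly why verification is easy for frontends (the eye is the oracle) and hard for subtle logic.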
I love everything @karpathy has done to popularize vibe coding.
But then after you prototype with vibe coding, you need to get to production with right coding.
And that means AI verifying, not just AI prompting. That’s easy when output is visual, much harder when it’s textual.
@karpathy The question when using AI is: how can I inexpensively verify the output of this AI model is correct?
We take for granted the human eye, which is amazing at finding errors in images, videos, and user interfaces.
But we need other kinds of verifiers for other domains.