Eric S. Raymond
Sep 25
Antifa wants an insurrection leading to a legitimacy collapse and a communist/left-anarchist revolution. But there's a very basic question: where's it going to find the troops, and how's it going to train them?

In one of my previous analysis posts I identified three overlapping groups that make up Antifa's membership: agents of influence, overproduced elites, and damage cases. At present, the agents of influence are relying on the damage cases as troops, but that doesn't scale up well; they're around 3% of the population, which is enough for a terror campaign and a very low-level insurgency, but not enough to contest ground with formed units of loyalist troops.

This is why it's important to keep an eye on what the John Brown gun clubs, Redneck Revolt, and the Socialist Rifle Association are doing. These are the organizational nuclei for building the militarized formations Antifa would need in the later stages of an insurgency ramping up to full-scale armed revolution on the Castroite or Maoist model.

Specifically, keep an eye on who these nuclei are recruiting. That predicts what the later stages of the insurrection will look like, assuming it happens at all.

An important constraint is that the two pools Marxist theory tells them to rely on - workers and peasants - aren't really available. In the US, their nearest equivalents are the social strata most likely to have Trump flags on their trucks.

Broadly speaking, I think there are two possible recruitment strategies. One centers on overproduced elites, the other on non-whites with racial grievances. Probably both will get tried, but both have serious problems.

The problem with recruiting overproduced elites to be your shooters is that pretty much by definition they think they ought to be running things, not taking lethal risks on somebody else's orders. Also, you're in competition for these people with the Gramscians, who are also offering power via Communist revolution, and with a much lower risk of actually getting shot at.

Recruiting non-whites with racial grievances might seem much more promising, but they're going to have a competence problem. The US Army has figured out the hard way that it can't make effective soldiers out of people with an IQ below about 85. Now go look up the mean IQs for major non-white groups in the US population and estimate the percentage below that cutoff. It is not a happy figure for Antifa recruiters to be contemplating.

I don't know how Antifa is going to solve this problem. I don't know if it's solvable at all. I do know that if I were the counterterror guys in their fusion centers I would be keeping a very close eye on these organizations and their membership growth curves.

More from @esrtweet

Feb 26
Time for me to put on my explainer hat again. This time it's "post-rationalist".

But in order to do that I'm going to have to explain some history and at least three other factions.

My warrant for doing this rests on three facts. One: I am a very experienced anthropologist of internet cultures. Two: I am personally acquainted with a number of the leading figures in the cluster of subcultures I'm about to describe. Three: my past writing had some influence on how these subcultures developed - not a lot, but enough that people inside these subcultures notice the connection.

To start with, the rationalists. This is a network of people who collected around the blog "Less Wrong", and the writings of Eliezer Yudkowsky (@ESYudkowsky) beginning around 2005. Another very prominent figure in this group is Scott Alexander.

Modern rationalists are aware that the term "rationalist" has some other meanings in historical and academic philosophy that cause confusion. In particular, in the language of academic philosophy modern rationalists could more accurately be described as "skeptical empiricists" or "predictivists". This is regarded as unfortunate, but we're stuck with the term.

The rationalists have three sets of concerns. One can be described as rationality technique. This is a set of heuristics and language conventions intended to help people think more clearly and make better arguments - better in the sense of converging on truth and correctness faster.

Some rationalist language has percolated out into the wider culture: "virtue signaling" and "steelmanning" are two notable examples. Also, "updating" and "priors" as terms about belief maintenance.

The second and third sets of concerns correspond to two other movements closely connected with modern rationalism and sometimes indistinguishable from it. One is "effective altruism", a network of people who are trying to do charitable giving in the most effective way possible by paying careful and quantitative attention to outcomes.

The third set of concerns is around the destructive potential of AI. Many, though not all, people who call themselves "rationalists" deeply fear that humanity might soon be wiped out by rogue AI and are concerned with attempting to develop measures to constrain it to behave in a recognizably moral way: the shorthand for this is "the alignment problem".

I don't know of a good neutral term for these people - it could be "safetyists" but as far as I'm aware nobody actually uses that. People who disagree with them often call them "doomers".

Now for the unpleasant part. Rationalism has proven very effective at attracting bright, alienated kids - especially bright autists with poor social skills. When they congregate, most notably in the Bay Area around the tech industry, they sometimes start exhibiting cult-like behavior - social isolation, sexual deviance, fanaticism.

Some of the movement's leaders have at least tacitly encouraged this. Others have failed to discourage it as much as they might have. At an extreme, it has led to manifestations like the Zizians - quite literally a transsexual murder cult. Less extremely, there has been a rash of suicides, insanity, and drug deaths associated with rationalist group houses.

There's a hostile reading of the history of the rationalist subculture that considers both rationality technique and effective altruism to have been deliberately constructed as recruitment funnels for the doomer cult.

Now I can describe post-rationalists. These are people who have consciously bailed out of the cultish and doomer aspects of the rationalist subculture while keeping a firm hold of rationality technique.

Often post-rats have friends inside the cult-like parts of rationalist subculture; the boundary between these social networks is somewhat fuzzy.

Related terms:

Grey tribe: Taking a term from one of Scott Alexander's more influential essays, post-rationalists sometimes describe themselves as "Grey Tribe", though that term has political connotations that "rationalist" does not. It's meant to index people who do not consider themselves part of either the blue or red political tribes.

TPOT or tpot: "that part of Twitter" is a group of X posters that overlaps with post-rats a lot, but has more of a techie and tinkerer edge.

I guess I should clarify my own relationship to these subcultures.

I had been trying to develop something like rationality technique myself for decades before the modern systematization, and have cheerfully embraced it.

Post-rats think of me as one of them, even as a sort of tribal elder whom they somewhat ironically salute. I'm okay with that.

I've posted articles on Less Wrong a couple of times.

Because of my prior history, some of the post-rats I hang out with have waggishly suggested that I should describe myself as a "pre-rat" or even an "ur-rat".

I'm opposed to doomerism. I think its premises have been completely falsified by the way AI has actually developed: as systems without agency or goals in the world.

At minimum, I believe AI risk theory needs a fundamental rethink, which it's not getting because doomers have an attachment to it resembling that of an apocalyptic religious cult.

I'm also opposed to effective altruism. The goal is noble, but I believe the movement is compromised by a fundamental misunderstanding of what the evolved impulse to charity is for.

This leads to some pathological overruns that are related to historical problems with utilitarianism as a philosophy. I'll probably write more about this in the future.
Mar 30, 2024
1/ Since the level of public consternation about the xz back door seems to be increasing rather than decreasing, I want to point out something simple and important:
2/ Open source worked the way it's supposed to. Some hacker noticed something that made him curious, poked at it because hackers are like that, and because the code was open and available for inspection, diagnosed the problem before any serious harm was done.
3/ If a PLA cracker had managed to insert that Trojan into some vital closed-source code it is far less likely that it would have been remedied so soon. In fact, what we should really be worrying about is inserted vulns in code we *cannot* inspect.