Eric S. Raymond
Nov 24, 2024
No, there was one other breach that was worse. When a bunch of public health mandarins told us that everybody had to stay locked down because of COVID, except that Black Lives Matter could riot and burn down cities because "racism is a public health issue"? And nobody in the public health establishment pushed back?

That was the moment when my confidence in the public health establishment was irreparably shattered. Including my confidence in anything they told me about vaccines.

I grant you that you've probably pointed out the second-worst betrayal, though. That we know about. Yet.

More from @esrtweet

Feb 26
Time for me to put on my explainer hat again. This time it's "post-rationalist".

But in order to do that I'm going to have to explain some history and at least three other factions.

My warrant for doing this rests on three facts. One: I am a very experienced anthropologist of internet cultures. Two: I am personally acquainted with a number of the leading figures in the cluster of subcultures I'm about to describe. Three: my past writing had some influence on how these subcultures developed - not a lot, but enough that people inside these subcultures notice the connection.

To start with, the rationalists. This is a network of people who collected around the blog "Less Wrong" and the writings of Eliezer Yudkowsky (@ESYudkowsky), beginning around 2005. Another very prominent figure in this group is Scott Alexander.

Modern rationalists are aware that the term "rationalist" has some other meanings in historical and academic philosophy that cause confusion. In particular, in the language of academic philosophy modern rationalists could more accurately be described as "skeptical empiricists" or "predictivists". This is regarded as unfortunate, but we're stuck with the term.

The rationalists have three sets of concerns. One can be described as rationality technique. This is a set of heuristics and language conventions intended to help people think more clearly and make better arguments - better in the sense of converging on truth and correctness faster.

Some rationalist language has percolated out into the wider culture: "virtue signaling" and "steelmanning" are two notable examples. Also, "updating" and "priors" as terms about belief maintenance.
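Since "updating" and "priors" come straight from Bayes' theorem, a toy example may help readers new to the jargon. This is an illustrative sketch with invented numbers, not anything from the thread itself:

    # Toy Bayesian update: how a "prior" becomes a "posterior" once you
    # see evidence. This is what rationalists mean by "updating".
    prior = 0.30            # initial credence in a hypothesis H (invented)
    p_e_given_h = 0.80      # P(evidence | H)
    p_e_given_not_h = 0.20  # P(evidence | not-H)

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e   # Bayes' theorem
    print(f"posterior = {posterior:.2f}")   # 0.63: credence revised upward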

Two other movements are closely connected with modern rationalism and sometimes indistinguishable from it. One is "effective altruism", a network of people who are trying to do charitable giving in the most effective way possible by paying careful and quantitative attention to outcomes.

The third set of concerns is around the destructive potential of AI. Many, though not all, people who call themselves "rationalists" deeply fear that humanity might soon be wiped out by rogue AI and are concerned with attempting to develop measures to constrain it to behave in a recognizably moral way: the shorthand for this is "the alignment problem".

I don't know of a good neutral term for these people - it could be "safetyists" but as far as I'm aware nobody actually uses that. People who disagree with them often call them "doomers".

Now for the unpleasant part. Rationalism has proven very effective at attracting bright, alienated kids - especially bright autists with poor social skills. When they congregate, most notably in the Bay Area around tech industries, they sometimes start exhibiting cult-like behavior - social isolation, sexual deviance, fanaticism.

Some of the movement's leaders have at least tacitly encouraged this. Others have failed to discourage it as much as they might have. At an extreme, it has led to manifestations like the Zizians - quite literally a transsexual murder cult. Less extremely, there has been a rash of suicides, insanity, and drug deaths associated with rationalist group houses.

There's a hostile reading of the history of the rationalist subculture that considers both rationality technique and effective altruism to have been deliberately constructed as recruitment funnels for the doomer cult.

Now I can describe post-rationalists. These are people who have consciously bailed out of the cultish and doomer aspects of the rationalist subculture while keeping a firm hold of rationality technique.

Often post-rats have friends inside the cult-like parts of rationalist subculture; the boundary between these social networks is somewhat fuzzy.

1/2
Related terms:

Grey tribe: Taking a term from one of Scott Alexander's more influential essays, post-rationalists sometimes describe themselves as "Grey Tribe", though that term has political connotations that "rationalist" does not. It's meant to index people who do not consider themselves part of either the blue or red political tribes.

TPOT or tpot: "that part of Twitter" is a group of X posters that overlaps with post-rats a lot, but has more of a techie and tinkerer edge.

2/2
I guess I should clarify my own relationship to these subcultures.

I was trying to develop something like rationality technique for decades myself, before the modern systematization, and I have cheerfully embraced it.

Post-rats think of me as one of them, even as a sort of tribal elder whom they somewhat ironically salute. I'm okay with that.

I've posted articles on Less Wrong a couple of times.

Because of my prior history, some of the post-rats I hang out with have waggishly suggested that I should describe myself as a "pre-rat" or even an "ur-rat".

I'm opposed to doomerism. I think its premises have been completely falsified by the way AI has actually developed: as systems without agency or goals in the world.

At minimum, I believe AI risk theory needs a fundamental rethink, which it's not getting because doomers have an attachment to it resembling that of an apocalyptic religious cult.

I'm also opposed to effective altruism. The goal is noble, but I believe the movement is compromised by a fundamental misunderstanding of what the evolved impulse to charity is for.

This leads to some pathological overruns that are related to historical problems with utilitarianism as a philosophy. I'll probably write more about this in the future.
Mar 30, 2024
1/ Since the level of public consternation about the xz back door seems to be increasing rather than decreasing, I want to point out something simple and important:
2/ Open source worked the way it's supposed to. Some hacker noticed something that made him curious, poked at it because hackers are like that, and because the code was open and available for inspection, diagnosed the problem before any serious harm was done.
3/ If a PLA cracker had managed to insert that Trojan into some vital closed-source code, it is far less likely that it would have been remedied so soon. In fact, what we should really be worrying about is inserted vulns in code we *cannot* inspect.
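A side note on what "noticed something that made him curious" looked like in practice: the xz backdoor was first flagged as an odd half-second of extra sshd latency. Below is a minimal sketch of that kind of timing check - the host and port are assumptions, point it at your own test box:

    import socket, statistics, time

    # Time a TCP connect plus the SSH version banner against an sshd.
    # A consistent jump of hundreds of milliseconds after a library
    # upgrade is exactly the kind of anomaly worth poking at.
    HOST, PORT = "127.0.0.1", 22   # assumed; adjust to your setup

    def handshake_ms():
        start = time.perf_counter()
        with socket.create_connection((HOST, PORT), timeout=5) as s:
            s.recv(256)            # the SSH server sends its banner first
        return (time.perf_counter() - start) * 1000

    samples = [handshake_ms() for _ in range(20)]
    print(f"median {statistics.median(samples):.1f} ms, max {max(samples):.1f} ms")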