Roko 🌊🧊🏙️
Radical Centrist, Transhumanist, Rationalist https://t.co/KdIi1WetaJ
Jul 10 8 tweets 4 min read
Something underappreciated that I learned from listening to the recent @robinhanson podcast is that a factory can pay for itself in about 3 months. That is, a new factory that makes stuff costs about the same as three months of its output.

Therefore if you could make everything needed to run the economy in a factory, the economy could double every 3 months. The main component of our economy that cannot be made in a factory is people, or rather the intelligence in the human brain. Without intelligence your factory cannot perfectly reproduce itself because it cannot make new factory workers, managers, engineers, etc.

If the economy doubles every 3 months from 2030 to 2040, it will grow by a factor of 10¹², which is vastly more than all the economic growth since the dawn of history.
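The growth factor is easy to sanity-check; a trivial sketch of the compounding arithmetic:

```python
# 10 years of quarterly doubling = 40 doublings.
years = 10
doublings_per_year = 4
growth_factor = 2 ** (years * doublings_per_year)
print(f"growth factor: {growth_factor:.2e}")  # ~1.10e+12
```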

I genuinely believe that this is going to start to happen within 15 years. AI is very close to being able to replace all humans needed to run a factory. And you can make new AI chips in a chip fab, which is really a type of factory.

Another way to look at this is the ratio between the physical mass of a factory and the mass per month of products.

Tesla's Berlin Gigafactory makes 375,000 cars per year and has a footprint of 3km²

Each car has an area of 10m²

As a crude approximation, we can imagine stacking cars on the footprint of the factory as a proxy for how quickly it could physically reproduce itself. 375,000 × 10 m² ≈ 3.75 km² per year, slightly larger than the 3 km² footprint, so it takes something like 9-10 months to produce the factory's own area of material.

I estimate the mass of the gigafactory to be something like 300,000 tons. Given an average Tesla mass of 4,561 pounds (about 2.07 tonnes), that's roughly four and a half months' worth of production.

In financial terms, Giga Berlin cost €4 billion. A Tesla costs around €50,000 on average, so about three months' worth of Teslas roughly equals the cost of the gigafactory.
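All three payback estimates can be recomputed from the figures quoted above:

```python
# Rough payback-time estimates for Giga Berlin, using the numbers in the thread.
cars_per_year = 375_000

# Area: factory footprint vs. annual car footprint
factory_area_m2 = 3e6            # 3 km^2
car_area_m2 = 10
area_months = factory_area_m2 / (cars_per_year * car_area_m2) * 12
print(f"area payback: {area_months:.1f} months")    # ~9.6

# Mass: factory mass vs. annual car mass
factory_mass_t = 300_000
car_mass_t = 4561 * 0.000453592  # 4,561 lb in metric tons (~2.07 t)
mass_months = factory_mass_t / (cars_per_year * car_mass_t) * 12
print(f"mass payback: {mass_months:.1f} months")    # ~4.6

# Money: build cost vs. annual revenue
factory_cost_eur = 4e9
car_price_eur = 50_000
cost_months = factory_cost_eur / (cars_per_year * car_price_eur) * 12
print(f"cost payback: {cost_months:.1f} months")    # ~2.6
```

All three land in the same few-months ballpark, which is the point of the thread.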

So @robinhanson's 3 month self-reproduction time for factories does check out
Jun 20 14 tweets 6 min read
When we try to explain or predict the world around us, it's worth trying to explain something where we already know the answer.

So, without looking it up, maybe have a think about why we have sex. I mean biologically, why does sex exist at all as opposed to everyone being female and just giving birth to a clone of herself?

Answers in the comments please! Note: people just answering "because variability/variance/mutation" are wrong.

You can make random changes to offspring without having sex. In fact, that is how the millions of asexual species evolve.

Yes, that's right, there are millions of species that are asexual and they do evolve, therefore they do have variation/mutation without sex. Oops!
Jun 4 7 tweets 3 min read
I think we all got off in the wrong direction on AI risk from ~2004

I really need to write a longform article about it.

Briefly, the @ESYudkowsky circa-2004 take on AI risk was that AI would kill us all for a bunch of technical reasons:

- human values are complex relative to math and programming languages, so an AI that came from math/programming/optimization wouldn't understand them while it was weak and malleable, and so it would kill us all by default, pretty much irrespective of the details of how it worked, because optimizing the universe for a non-human goal kills off humans almost every time

- self-optimization itself might be hard and risky, because how do you make a system that's reflectively self-consistent? Construction of subagents and successor agents is logically the same as self-modification.
May 26 8 tweets 10 min read
On Leftist morality:

1. According to leftist morality, the world is mostly arranged with victims/oppressed and perpetrators/oppressors. Oppressors essentially steal labor or freedom or land from victims.

2. In the past the victim/oppressor split was pretty clean. Oppressors were the upper classes, victims were peasants/slaves. This victim/oppressor split made sense because in the past economic growth was low and so most labor had to be spent just to keep everyone fed, so there were very few people who had the time to do anything other than agriculture and so on, and so the few who had time/money/status were able to pretty effectively rig the system.

3. The point of morality is to allow victims/oppressed and their allies to unite against oppressors. Everything about leftist morality boils down to this: you can change the theme but the core idea is always this.

4. You have to punish people who don't support the victim against the oppressor, and if you do you'll be rewarded by other leftists. This works mostly because of signalling games: supporting a victim signals strength, supporting an oppressor signals weakness.

5. There's also probably an element of group selection at work: groups where people didn't rise up against oppressors may have died out because these oppressors can basically steal all the group's resources. This "group selection for anti-oppressor altruistic punishment" feels like "being a good person" from the inside. It's not individually rational behavior because you take some nonzero personal risk by criticizing an oppressor and it's unlikely to be the optimal thing for your individual genes.

6. Originally the purpose of leftist morality was throwing off inefficient, negative-sum oppressors (think of a coalition of a few strong men who run a tribe for their own benefit, taking food and mates from others, to the detriment of the common good). But the instinct to smash all hierarchy and dethrone all oppressors is an adaptive human behavior, not a rational goal. Think about the difference between a person planning a business (rational goal-seeking behavior) and a beaver building a dam (innate, adaptive behavior).

Leftist morality in modernity

Leftist morality (oppressor/oppressed) was probably adaptive for some period of time because in the past there was a clean split between the ruling class and the peasant class, and the ruling class usually didn't have anything useful to do with excess resources, so the phenomenon mainly served as a limiting factor on wasteful oppression.

There are a few famous examples like The Protestant Reformation and The French Revolution.

But when you apply the oppressor/oppressed lens to the modern world it often misfires in a specific way.

We identify "the oppressed" by looking for groups who are either the subject of some kind of system of oppression (apartheid is an example, or institutionalized homophobia), or by the fact that they're not doing well in life (e.g. poor black communities, the homeless, etc).

In the past - at least the distant past when communities were small - there wasn't much in the way of biological and economic diversity, so most examples of inequality were the result of someone cheating. Hence you could identify an inequality of process by looking at outcomes. "The peasants are poor because the greedy lord takes all of their food".

Nowadays the environment is much more diverse. "Black people are poor because they're less industrious and less intelligent on average". It's really the case that different groups of people and different systems for organizing people lead to vastly different outcomes.

But our innate sense of social justice hasn't been updated. In fact the mechanisms for virtue signaling and altruistic punishment just got stronger because they scale in power as the population gets larger (and social media hasn't helped!).

The poor oppressed black hoodlum is a moral superstimulus in the same way that a beer bottle is a sexual superstimulus for beetles. That sense of social justice used to be functional - it's there to act as a limiter on greedy/negative-sum oppressor systems like feudal lords who extract too much from the peasants or tribal chiefs who get too big for their boots.
May 17 5 tweets 2 min read
The main reason I'm against an AI pause is that I am pretty certain we wouldn't do anything useful with more time.

In fact I'm pretty confident we'd just make things worse because it would give dumber and worse people more time to wiggle their way into the space.

Kurzweil definitively predicted The Singularity in 1999

Yudkowsky solved AI alignment in 2004

Since 2004 we haven't done anything useful on alignment.

If I could, I would absolutely send GPT-4 and the modern chip fabs back in time to 2005. Maybe we could do The Singularity before wokeness really took off and save everyone a lot of hassle.

I wrote a paper on CEV in 2009 and it was rejected for publication because it was seen as "too weird"

People will absolutely not do the things that are needed to prepare for ASC until it's urgent
Mar 30 5 tweets 2 min read
Crypto doesn't have a usecase, so why is it going up?

Numba Go Up is the usecase

(at least for now)

Gold bullion doesn't have a usecase, so why is gold bullion up 10,000% since 1924?

Numba Go Up is the usecase for gold bullion. That's literally what it's for. It's a thing you can buy at time T1 and then later at time T2 it is worth more money. And this can continue ad infinitum because the number of dollars in circulation increases exponentially forever.
Feb 26 4 tweets 4 min read
Wokeness is a cognitive weapon of mass destruction. A civilization-destroying superweapon.

By that I mean it is a set of psychological and political methods and techniques that can be deployed to destroy not just an individual or a small group, but a whole civilization.

Every time you debate wokes in good faith, you are falling victim to the weapon. Everything they say is a lie, every principle they appeal to (fairness, science, truth etc) is 100% a pure cynical power grab and will be discarded as soon as its objective is achieved.

Wokeness isn't really about any of the issues that you think it's about - race, trans, environment, crime, feminism, L&G, sexual revolution, etc. Those are just a means to an end. The end is pure power.

Wokeness picked up race issues because they conferred power to those who wielded them: you could get a lot of power in late 20th-century Western countries by guilt tripping people, so that's what they did. Race was a lever that wokeness used to pry open the protections that creators of the modern West installed to prevent unchecked government power. A way to hack democracy.

Wokeness doesn't care about trans issues for inherent reasons. It's just another way to wield power - a certain segment of transgender people are simply a useful group through which wokeness can act right now.

Wokeness doesn't actually care about feminism or women. Now that trans people and ethnic groups are a more convenient conduit for the use of power, wokeness has unceremoniously dropped white women and women's issues. Feminism was just a lever that wokeness used to disempower people (usually trad men and religious women) who held values that were hostile to unchecked power of unaligned and anti-human, anti-western institutions. Nowadays white women are very low down in the woke hierarchy and must suffer in silence whilst they are r4ped by deranged transgender men in nominally women's prisons, or be cancelled if they dare to complain.

Wokeness doesn't care about sexual freedom and the sexual revolution. The sexual revolution was just a lever that was used to dismantle the political power of the Church and traditional values in the 1960s. As soon as that was done, it was back to neo-prudishness via misandry.

Wokeness doesn't care about anti-racism. Wokes are far more racist than non-wokes. Anti-racism ideology was just a convenient lever that wokeness could use to dismantle legal meritocracy and replace it with a patronage system that they controlled. They don't care that someone is black, they care that someone is under their control, black is just a means to that end.

Wokeness doesn't care about truth, academia or the social sciences. Academia was the most vulnerable 'official' organ of state, the location of what cybersecurity researchers call the Initial Compromise. Wokeness will probably soon discard academia, just like it discarded women when they ceased to be useful.

Wokeness doesn't inherently hate straight white males - they're just a convenient, defenseless punching bag that can be beaten up for political power loot. Straight white men are the treasure goblin of politics - everyone just wants to beat up on them to make all the treasure drop out.

Wokeness is a pure power maximizer and it will destroy everything you care about.

It is perhaps best thought of as an uncontrolled chemical reaction, a giant firestorm that seeks out untapped sources of political power and exploits them. No person or group runs it. It is a gestalt entity. It isn't controlled by some shadowy group, at least as far as I know; it is the shadow. It is not used by people, it uses people.

Wokeness is a power on a par with evolution by natural selection or warfare. We accidentally brought it into existence by leaving too much unused exploitable power lying around, like a careless warehouse owner might be said to have brought a firestorm into existence by storing too many flammable materials in one place without a sprinkler system.

You cannot fight a power like that piecemeal. It's like trying to extinguish a firestorm by pouring water on just one thing in a burning room. It must be starved of fuel (unused exploitable power) and oxygen (vulnerable sociopolitical systems).
Feb 3 4 tweets 8 min read
Brute Force Manufactured Consensus is Hiding the Crime of the Century

People often parse information through an epistemic consensus filter. They do not ask "is this true", they ask "will others be OK with me thinking this is true". This makes them very malleable to brute force manufactured consensus; if every screen they look at says the same thing they will adopt that position because their brain interprets it as everyone in the tribe believing it.

- Anon, 4Chan, slightly edited

Ordinary people who haven't spent years of their lives thinking about rationality and epistemology don't form beliefs by impartially tallying up evidence like a Bayesian reasoner. Whilst there is a lot of variation, my impression is that the majority of humans we share this Earth with use a completely different algorithm for vetting potential beliefs: they just believe some average of what everyone and everything around them believes, especially what they see on screens, newspapers and "respectable", "mainstream" websites.

This is a great algorithm from the point of view of the individual human. If the mainstream is wrong, well, "nobody got fired for buying IBM", as they say - you won't be personally singled out for being wrong if everyone else is also wrong. If the mainstream is right, you're also right. Win-win.

The problem with the "copy other people's beliefs" algorithm is that it is vulnerable to false information cascades. And when a small but powerful adversarial group controls the seed point for many people's beliefs (such as being able to control the scientific process to output chosen falsehoods), you can end up with an entire society believing an absurd falsehood that happens to be very convenient for that small, powerful adversarial subgroup.
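As a toy illustration of that cascade mechanism - every number here is a hypothetical assumption, not data - consider a sequential model where a small coordinated bloc gets to speak first and everyone else weighs the public record above their own noisy judgment:

```python
import random

# Toy sequential-cascade model. True = "publicly asserts the falsehood".
# All parameters are illustrative assumptions.
random.seed(0)

N = 1000              # population size
SEED_STATEMENTS = 30  # a small coordinated bloc speaks first

public = [True] * SEED_STATEMENTS
for _ in range(N - SEED_STATEMENTS):
    margin = 2 * sum(public) - len(public)  # (True count) - (False count)
    if abs(margin) >= 2:
        # The public record leans clearly one way: copy the crowd.
        public.append(margin > 0)
    else:
        # Otherwise trust a private signal that is right 70% of the time
        # (and the claim is actually false).
        public.append(random.random() < 0.3)

share = sum(public) / N
print(f"{share:.0%} of the population publicly asserts the falsehood")
```

A 3% seed captures the entire population, because once the public margin exists, no one ever consults their private signal again - which is exactly the "seed point" vulnerability described above.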

DEFUSING your concerns

This is not a theoretical concern. I believe that brute-force manufactured consensus by the perpetrators is the reason there has been no serious effort to investigate and prosecute what I believe is the crime of the century: a group of scientists who committed the equivalent of a modern holocaust (either deliberately or accidentally) are going to get away with it. For those who are not aware, the death toll of Covid-19 is estimated at between 19 million and 35 million.

Covid-19 likely came from a known lab (Wuhan Institute of Virology), was likely created by a known group of people (Peter Daszak & friends) acting against best practices and willing to lie about their safety standards to get the job done. In my opinion this amounts morally to a crime against humanity.

And the evidence keeps piling up - just this January, a freedom of information request surfaced a grant proposal dated 2018 with Daszak's name on it called Project DEFUSE, with essentially a recipe for making covid-19 at Wuhan Institute of Virology, including unique technical details like the Furin Cleavage Site and the BsmBI enzyme. Note the date - 3/27/2018.

Wait, there's more. Here, Peter Daszak tells other investigators that once they get funded by DARPA, they can do this work to make the novel coronavirus bond to the human ACE2 receptor in... Wuhan, China. Wow. Remember, this is in 2018! Now, DARPA refused to fund this proposal (perhaps they thought that this kind of research was too dangerous?) but this is hardly exculpatory. Daszak et al had the plan to make covid-19 in 2018, all they needed was funding, which they may simply have gotten from somewhere else.

So, Daszak & friends plan to create a novel coronavirus engineered to infect human cells with a Furin Cleavage Site in Wuhan, starting in mid-2018. Then in late 2019, a novel coronavirus that spreads rapidly through humans, that has a Furin Cleavage Site, appears in... Wuhan... thousands of miles away from the bat caves in Southern China where the closest natural variants live, and only a few miles from Wuhan Institute of Virology.

... and we're supposed to believe that this is a coincidence? For the love of Bayes! How many times do you have to rerun history for a naturally occurring virus to randomly appear outside the lab that's studying it at the exact time they are studying it? I think it's at least 1000:1 against.

From @R_H_Ebright here on Twitter:

"There are >800 known sarbecoviruses. Only one--SARS-CoV-2--contains a furin cleavage site, as planned for insertion in EcoHealth DEFUSE proposal (P<0.002)"

So not only is there a coincidence of timing and location, but also the virus has unique functional parts that occur in no other natural sarbecoviruses?

And they even got the WHO (World Health Organization) to allow them to investigate their own potential crime scene.

How are they getting away with this?

It seems that when Daszak, Fauci and others in the pro-gain-of-function virology community realized that covid-19 might be their own work escaping from the lab, they embarked upon a strategy of Brute Force Manufactured Consensus. They needed people to believe that covid-19 didn't come from their lab, so they just started manufacturing that consensus. And it worked!

Daszak and Fauci organized a letter in The Lancet which condemned any discussion of the possibility that covid-19 might be a lab leak as a "conspiracy theory". Daszak's name appears as one of the authors. That letter, and the aura of officialness granted to it by The Lancet, guided the mainstream media to denounce lab-origin theories as conspiracy theories, and that in turn caused most social media sites to ban any content discussing them, and in some cases even permanently delete people's profiles.

By 2022, things had calmed down a bit and people started to ask whether there was a conflict of interest: might the authors of the Lancet letter, which dismissed covid-19 lab-leak theories as silly conspiracy theories, themselves be part of an actual conspiracy to cover up the fact that covid-19 had escaped from their lab? Yikes!

But since then there have been further rounds of Brute Force Manufactured Consensus; for example, a NYT article based on a paper by Worobey said that new evidence suggests covid-19 started in raccoon dogs in a wildlife meat market. There are two problems with this. First, it's still an unlikely coincidence that a natural spillover event would just happen to occur right on the doorstep of WIV, and right at the point in time when the Daszak/EcoHealth group was working on making a humanized coronavirus. Second, these papers have various fatal flaws, such as drawing heatmaps based on biased sampling - essentially they went and looked for covid-19 RNA around the raccoon dogs and they found it, but they didn't look as hard elsewhere. Obviously, if you look more in one place, you'll find more in that place! These downgrades to the credibility of the Worobey paper have not been widely reported on.

My personal breaking point on this is that yesterday, 2nd Feb 2024, the Global Catastrophic Risk Institute released a report which found that, in a survey of 162 "experts", about 80% thought that covid-19 had a natural origin.

However, about 80% of these "experts" said that they had not heard of the DEFUSE grant from 2018 that I just showed you above. You know, the one with Daszak's name on it, pictures of flappy bats and a step-by-step recipe for how to make covid-19.

So it seems that Brute Force Manufactured Consensus works on most (but not all) "experts" too. I mean, why wouldn't it? Some guy in 2024 working on his own little subfield of virology or epidemiology has no particular reason to deviate from the New York Times orthodoxy. This is probably why only 22% of the experts said they had heard of DEFUSE, while 33% said they had heard of Hanlen et al. 2022 - a fake study that doesn't exist, inserted to check whether respondents were paying attention.

Now that public attention is off covid-19, the people responsible for it are mounting a perpetual delay-and-denial operation. UNC-Chapel Hill is in the process of hiding key documents which could contain further evidence about covid-19's origins right now.

If you can manufacture the right social consensus, control the key nodes in our social epistemology-plex, you can get away with just about anything and nobody will care, except a few very determined contrarians. But I will not go gentle into that good night.


Feb 1 4 tweets 5 min read
On some perfect version of Earth where coordination is a solved problem, I think we would rightly pause AI research for a long time (maybe even centuries). I believe Yudkowsky's Dath Ilan does exactly this - a long AI pause, institutionalized secrecy surrounding AI governed by prediction markets, etc.

Earth is not like Dath Ilan.

Earth is violently and deeply uncoordinated. The ability of technological progress to even exist at all is a constant fight against bureaucrats and rentseekers trying to destroy it. And the baddies are winning. Formal power is gradually being captured by ideologies of "degrowth" and "equity" and "sustainability".

I believe that the endpoint of these ideologies is civilizational collapse. Stasis is not a stable state for human civilization. No technological civilization in human history has ever reached a maximum, stabilized, and permanently stayed there. The graph of every civilization starts going back down once it stops going up.

I think the fundamental reason for this is that in almost any type of human civilization there will be some fraction of "builders" who do things that are crucial to the growth of that civilization: they could be literal builders, they could be mothers who take the time and effort to raise good children, they could be good managers or fair and impartial judges or scientists who discover useful new theories. There will also be a fraction of "leeches" who are net harmful to civilization: from drug dealers to diversocrats to dysgenics to top-heavy population pyramids (very old people are leeches out of necessity, not choice) to even something as abstract as a buildup of anti-civilization laws and institutions.

Of course it's difficult to directly identify the leeches - every leech will pretend to be a builder. And the crucial point about leeches is that they cannot exist before civilization exists, but they do build up over time, lagging the growth of the civilization they are parasitizing.

Once we accept that there are builders and leeches, and that leech growth lags civilizational growth, basic calculus and differential equations tell us what happens when a civilization pauses or even slows down. Mathematically literate people know what's coming, but I will spell it out.

The leeches can continue to grow during a pause, but civilization cannot. At some point the weight of leeches turns the pause into a decline, but as leech growth lags builder growth, the leeches are still growing even as civilization declines. The decline becomes a collapse until only leeches are left, who suddenly cannot support themselves as there's nobody left to parasitize. The collapse becomes a catastrophe, one which we would potentially never recover from due to various global hysteresis effects.
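That argument can be sketched as a pair of coupled difference equations. Every rate, the lag length, and the initial conditions below are illustrative assumptions, not a calibrated model:

```python
# Minimal builder/leech sketch: builders compound unless paused; leeches grow
# in proportion to lagged builder output and drag builders down.
def simulate(pause_start, pause_end, steps=300, dt=0.1):
    B, L = 1.0, 0.05           # builders, leeches
    history = []
    for i in range(steps):
        t = i * dt
        r = 0.0 if pause_start <= t < pause_end else 0.05  # pause halts building
        # Leech growth tracks builder output from ~5 time units ago (the lag).
        lagged_B = history[-50][0] if len(history) >= 50 else B
        dB = r * B - 0.04 * L * B       # leeches drag builders down
        dL = 0.02 * lagged_B - 0.01 * L
        B = max(B + dB * dt, 0.0)
        L = max(L + dL * dt, 0.0)
        history.append((B, L))
    return history

no_pause = simulate(pause_start=float("inf"), pause_end=float("inf"))
paused = simulate(pause_start=10, pause_end=20)
print(f"final builders without a pause: {no_pause[-1][0]:.2f}")
print(f"final builders with a pause:    {paused[-1][0]:.2f}")
```

Even in this crude sketch, leech growth continues through the pause while builder growth stops, so the paused run ends permanently behind the unpaused one.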

It is possible that a new civilization would arise from the ashes (which may have radically different values than ours, by the way) but it would have to face the same AI risk dilemma as us anyway, except from our point of view the benefits of winning will be reduced because we will all be dead and most of our unique civilization values will be lost. Either way, a global collapse means we all die, all cryonics patients are thawed, and the long-run value of the lightcone is massively diminished or even massively negative.

How likely is it that we can pause AI for a short time - short enough to keep ahead of the growing leech population - and then restart?

Several factors have changed my mind on this.

(1) The fight between Helen Toner and Sam Altman indicates that a pause from within the industry is much less likely. AI regulations spurred by the growing political realization of the danger of AI are starting to pop up, and regulations are not a short-term thing; regulations are mostly forever.

(2) AI Alignment may be overrated compared to AI control, and AI control doesn't benefit much from pausing as it is going to mostly be an empirical, engineering effort that needs to be at the coal face with bleeding edge systems to learn new things.

(3) AI alignment seems to be (mostly) working using very crude but powerful gradient-based and RL techniques. If it's already working, the benefits to a pause may be very small.

(4) There's an ambient false assumption going around that we need to know how to align vastly powerful superintelligences, when in reality I think humans will merely align-and-control marginally superhuman AGIs, and those marginally superhuman AGIs will align the next generation, and so on.

(5) AI progress is going to get a natural slowdown anyway as the industry bumps up against the short-term capacity limits of the semiconductor industry. This felicitous pause has the advantage that there will be no lock-in as there would be with regulations, and it will likely last about the right amount of time (half a decade to a decade).

So in summary, our survival may simply be tied to AI, and a pause - for more safety, of course - may be exactly the thing that seals our fate.

Playing around with these kinds of equations, you can get situations like this, where there's an initial period of growth followed by a first collapse, and then subsequent civilizational peaks are actually lower.

Note the log scale.
Jan 25 5 tweets 5 min read
The steelman case for accelerationism

Explicit e/acc philosophy is confused and uncompelling and the movement mostly thrives off of vibes. And the vibes are pretty good!

Many things in human life are like this. Humans are much better at sensing vibes than at reasoning explicitly - well, most humans are.

e/acc worship of Thermodynamics is a classic mystery cult. Mystery cults get people to worship something mysterious which acts as a MacGuffin for the group, encourages cohesion but is not in fact important. It's just there to drive the plot, and to keep people unaware of what really matters.

Ok, so if all the thermodynamic stuff is not actually important, what is?

The answer is freedom/meritocracy. Technologists built the modern world but various Wokes, bureaucrats and other parasites stole it from them. Nice technological civilization you have there, it would be a shame if someone started fearmongering and set up a parasitic bureaucracy to cream off all the surplus whilst making everyone poorer. DEI departments, endless regulation and red tape, etc.

e/acc is an antiparasitic for human civilization. The thermodynamics worship is there to counter memes that parasites use to scaremonger and manipulate the masses into handing over control.

Humans are just really bad at correct thinking, so it can be easier to get them to do the right thing by giving them a memetic painkiller like thermodynamic accelerationism than by actually leading them through the correct reasoning because they won't believe you when you tell them the truth.

OK, but what is the correct reasoning, Roko?

I think it's something like this: there's a feedback between technological progress and positive sum social interactions. This feedback makes human societies into giant metastable flip-flops. When progress is happening, people generally act more for the common good, and are rewarded for it (meritocracy), which causes further progress. When progress is stuck, people look for ways to engage in negative-sum extractive games, and anyone who tries to act more for the common good is punished, which makes progress even more stuck.

This means that in situations where a perfectly coherent rational society - like, I dunno, a bunch of intelligent bees - would rationally slow progress to avoid mistakes and be more careful, humans need to keep pushing forward or we might slip into the "extractive, negative sum" equilibrium. And usually that equilibrium is much more stable, requiring a major crisis to dislodge the parasites.

Humanity may have a built-in "accelerate or die" mechanic, and we may in fact have very little room for maneuver.

And that is why accelerationism is actually true.

I would add that memeplexes like e/acc don't actually need to know how risky something like AI is. They just arise when scaremongering becomes too powerful.

It is entirely possible that the risks are very real and very high. Memetics doesn't know or care about the reality of AI risk. This is one of the hardest things to grasp about the situation: you'll get a scaremongering power-grab movement no matter how low or high the risk is, and so you'll get a reaction to it.

OK, but how do we actually know what the objective, true risk from AI is!? I don't want to have my beliefs manipulated by AI-risk toxoplasmic memeplexes!

Truth is hard to get at. The transition from bio to techno civilization is going to be about as complex as the Cambrian Explosion. It's a once-in-a-galaxy event. The risks are huge and hard to estimate. My P(Doom) is still about 40%.

We as a civilization have grossly underinvested in planning for this - a fact which is very personal to me, as I went broke (and was chased by debt collectors) volunteering and running up debt trying to create that knowledge in the late 2000s.

Part of the solution to x-risk is actually paying some money to estimate it properly. But it will still be grossly underfunded.

We do what we can with what we have.
Jan 15 10 tweets 4 min read
Polyamory is an indicator of extreme Quokka Culture

Quokkas get infiltrated by clever psychopaths; quokkas don't like being mean to people so they are easy prey for psychos.

But once there are too many psychos, the quokkas can no longer support them because the psychos are stealing everything and the quokkas are starving to death. Then you go to a psycho-dominated world which is very nasty and negative sum, and there's some kind of collapse or dark age.

Then as the world rebuilds, normies dominate because they are selfish enough to resist infiltration by the remaining psychos. The normies are the ones who deport the immigrants and burn the witches and raise the national flag. And soon the normies win so hard that there are no psychos left.

The final stage of the cycle is that in a world of only normies, quokkas can start to take over because quokkas can cooperate much more easily than normies because they're default-cooperate and don't have suspicion or revenge.

Then the cycle repeats....

These kinds of cycles are a feature of evolutionary game theory; you can explore them to some extent on this interactive website:

ncase.me/trust/
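The quokka/psycho/normie dynamic maps loosely onto the classic iterated prisoner's dilemma strategies that the ncase.me/trust page explores: always-cooperate, always-defect, and tit-for-tat. A minimal sketch of the pairwise payoffs, using the conventional payoff values T=5, R=3, P=1, S=0 over 10 rounds (these numbers are my assumption, not from the thread):

```python
# Iterated prisoner's dilemma payoffs for the three "personality types":
# quokkas (always cooperate), psychos (always defect), and suspicious-but-
# retaliatory types (tit-for-tat).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(history):      # quokka: never defects
    return 'C'

def always_defect(history):         # psycho: always defects
    return 'D'

def tit_for_tat(history):           # cooperate first, then copy the opponent
    return history[-1] if history else 'C'

def play(s1, s2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)     # each strategy sees the opponent's moves
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# Psychos feast on quokkas...
print(play(always_defect, always_cooperate))   # (50, 0)
# ...but starve among themselves...
print(play(always_defect, always_defect))      # (10, 10)
# ...while tit-for-tat loses only the first round to a psycho...
print(play(always_defect, tit_for_tat))        # (14, 9)
# ...and cooperates fully with fellow cooperators.
print(play(tit_for_tat, always_cooperate))     # (30, 30)
```

Which strategy wins depends on who else is in the population, which is exactly why the composition cycles.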
Jan 7 38 tweets 11 min read
The Mimi saga and this "coming apart at the tails" phenomenon are kind of different examples of the same thing.

Height, IQ, Income, Twitter Followers, and many other desirable traits are all correlated (but not perfectly!) so they make up an ellipsoidal point cloud in N-dimensional space.

When people see that there are univariate desirable criteria, they tend to say that they want ALL of the desirable traits maxxed out at the same time.

The problem is, when you restrict a gaussian ellipsoid to the region where all the variables are near their maximums, those variables will start to anticorrelate, and the probability density in the region where they are all at their maximum will be very low.

The original point cloud may look like this: Image
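This "coming apart at the tails" effect can be simulated directly: two traits with a healthy positive correlation start to anticorrelate once you condition on their combined score being in the far upper tail. A sketch with an assumed correlation of 0.6 and an arbitrary selection threshold (both numbers are illustrative, not from the thread):

```python
import math
import random

random.seed(0)
rho = 0.6  # assumed correlation between two desirable traits

# Draw correlated standard-normal pairs (trait 1, trait 2)
pairs = []
for _ in range(200_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((z1, rho * z1 + math.sqrt(1 - rho**2) * z2))

def corr(pts):
    """Sample Pearson correlation of a list of (x, y) points."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    return sxy / math.sqrt(sxx * syy)

print(f"whole population:   r = {corr(pairs):+.2f}")   # ≈ +0.60

# Restrict to the "elite": combined score in the far upper tail
elite = [(x, y) for x, y in pairs if x + y > 3.5]
print(f"top of the ellipse: r = {corr(elite):+.2f}")   # flips negative
```

Selecting on the sum is what demanding "all traits maxxed at once" effectively does, and within the selected group the traits trade off against each other.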
Jan 7 7 tweets 4 min read
By the way, Mimi's criteria for the ideal partner probably have zero actual matches in the entire world.

There are implied criteria of being male and within an approximately 10-year age window. That's a 6% or about 1 in 15 filter.

IQ 4 sigma is 1/31,000

5'11'' is about 1 in 6

Just these criteria amount to a filter of 1/15×1/31,000×1/6 = 1 in 2.8 million

There are probably only 200 people in the USA and other English-speaking areas that satisfy this filter.

Twitter following and TC, not to mention race, personality and attractiveness almost certainly filter this down to 0.

Oh, and of course reciprocation? Lol
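The filter arithmetic above, spelled out. Note that multiplying the fractions assumes the filters are independent (the thread's own simplifying assumption; in reality the traits correlate), and the ~500 million pool size for the USA plus other English-speaking areas is my assumption:

```python
# Multiply the thread's independent filters to estimate rarity.
filters = {
    "male, within ~10-year age window": 1 / 15,
    "IQ at least 4 sigma":              1 / 31_000,
    "height at least 5'11\"":           1 / 6,
}

p = 1.0
for name, frac in filters.items():
    p *= frac

print(f"combined filter: 1 in {1 / p:,.0f}")   # 1 in 2,790,000

# Against an assumed ~500M people in the USA + English-speaking areas:
pool = 500_000_000
print(f"expected matches: ~{pool * p:.0f}")    # roughly 180, near the thread's ~200
```

Each further filter (Twitter following, TC, race, personality, attractiveness) multiplies another small fraction onto p, which is why the expected count plausibly hits zero.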
There's a calculator called the Female Delusion Calculator that works this stuff out. But it doesn't include IQ or twitter following.

Using just height, race, age, marital status, and income gets us to 0.14% or 1 in 700.

But the IQ requirement is what will send this into the millions territory.

igotstandardsbro.com/results?minAge…
Dec 20, 2023 9 tweets 6 min read
I love the way people like @bryan_caplan go on and on and on about the economic benefits of immigration (which are low and oftentimes negative depending on how you measure it or how you construct the category of 'immigrant')

But they never talk about the economic benefits of eugenics, which would be completely out of this world.

Why can't we do a rational cost-benefit calculation about creating millions of clones of people like John von Neumann, or of paying the top quintile of academic performers to have more children?

Ah. No. You can't do that. Anti-eugenics is a sacred issue that isn't subject to cost-benefit analysis. There's no possible finite amount of money - not even a billion trillion quadrillion dollars - that would make it OK to even think about creating a million JvN clones. And if you even suggest it, we must punish you!

Or what about the economic benefits of rounding up and permanently imprisoning the criminal underclass like they did in El Salvador? Maybe we should do a cost-benefit calculation there!

The meta-decision of which decisions should be subject to cost-benefit calculations, and in which ways, is an extremely powerful one.

Inspired by:

Oct 11, 2023 5 tweets 3 min read
People are losing their minds over @robinhanson's "great filter".

- "What if the great filter is too much safety"
- "What if the great filter is wokeness?"
- "What if the great filter is global warming?"

The great filter, if it is in the future, can't really be one single thing, and can't really be any problem that we're aware of.

Why?

Because the great filter is something that only one out of a billion worlds pass through. There's no problem we know about that's anywhere near that extreme.

I don't think there's a future great filter. I think it's all in the past. The great filter was the combination of abiogenesis, the evolution of complex life, the evolution of intelligence, and the evolution of technological society. We're through it; there is no final great filter between where we are now and some earth-originating entities (human or AI) colonizing the whole galaxy.
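To put the one-in-a-billion number in perspective, here is how many independent hits from a known-scale risk it would take to filter out all but 10⁻⁹ of civilizations (a quick log computation, not from the thread):

```python
import math

# A future great filter must leave ~1 in a billion civilizations standing.
# How many independent catastrophes of a given lethality would that require?
target = 1e-9

for survival_per_risk in (0.5, 0.1, 0.01):
    n = math.log(target) / math.log(survival_per_risk)
    print(f"per-risk survival {survival_per_risk:>5}: "
          f"need about {n:.1f} independent such risks")
```

Even a catastrophe that wipes out 99% of civilizations would have to strike independently four or five times over, which is why no single known problem fits.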
Also, some smart people have made the inference that abiogenesis can't be the great filter because it happened early - things that happen early in Earth's history can't be the hard step.

E.g. here:

akarlin.com/katechon/
Jul 14, 2023 14 tweets 8 min read
After my doomer thread on aging yesterday, I think I should make it clear that I do think that aging can be completely solved and indeed we are probably not too far away from solving it.

One man - John Wentworth - persuaded me.

🧵

https://t.co/4OUubWP1fp lesswrong.com/s/3hfjaztptwEt…

From where we left off yesterday - the Gompertz Law seems to imply an inbuilt biological degradation mechanism in our bodies - you might think that the situation is hopeless. Perhaps it is even something inherent to metabolism or biology itself? That would be really bad news if true!… https://t.co/fguM8Rda7E twitter.com/i/web/status/1…
Jul 13, 2023 17 tweets 8 min read
Contemporary attempts to stop aging (@bryan_johnson shown here) won't work.

Why not?

🧵 Image

I recently saw a very insightful analysis of aging based on the probability distribution of human lifetimes. We take it kind of for granted, but isn't it weird that many people make it to 100 years, yet out of billions and billions of people, nobody has made it to 130? Image
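The Gompertz law (mortality hazard roughly doubling every 8 years) makes the 100-vs-130 asymmetry quantitative. A sketch with illustrative, not fitted, parameters: a 0.1% annual hazard at age 30, doubling every 8 years:

```python
import math

b = math.log(2) / 8   # hazard growth rate: doubling every 8 years (assumed)
h30 = 0.001           # annual mortality hazard at age 30 (assumed)

def survival(age, from_age=30):
    """P(reach `age` | alive at `from_age`) under a Gompertz hazard,
    i.e. exp of minus the integrated hazard h30 * exp(b * (t - from_age))."""
    integral = (h30 / b) * (math.exp(b * (age - from_age)) - 1)
    return math.exp(-integral)

print(f"P(reach 100) ≈ {survival(100):.1%}")   # a few in a thousand
print(f"P(reach 130) ≈ {survival(130):.0e}")   # astronomically small
```

The exponential hazard means reaching 100 is merely rare, while reaching 130 is so improbable that even billions of lifetimes never produce a single case.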
May 29, 2023 6 tweets 2 min read
I just had the most amazing idea.

Imagine you train an LLM with data that is all chronologically labelled. Every book, every scientific paper, every newspaper headline, every primary source from history. Where primary sources are lacking, descriptions of events or archeological… twitter.com/i/web/status/1…

I call this an LPM - Large Psychohistory Model