Roko.Eth
Mar 30 · 11 tweets
Remember who was scared of covid early, correct about its risks, and correct about what we should have done about it (shutting down all flights into Europe on that day in Jan 2020 would likely have saved hundreds of thousands of lives).
We know that #covid19 mortality was only high for a limited time: once we got to the Omicron variant it dropped a lot, and in the meantime we had developed vaccines and drugs like Paxlovid.

We actually only needed to hold out for 2 years, which was doable by shutting down travel.
There's absolutely no sense in accelerating into a crisis and doing more of the thing that's harmful. In Jan 2020, that thing was importing covid viral particles into The West on planes.

Within 12 months of the start we had vaccines, and within two years we had Paxlovid. And those vaccines could have been deployed to older people even earlier: scaling up vaccine production could have happened fast enough to protect the most important groups at some point in 2020.
Another thing that I think people are missing is the uncertainty aspect.

In early Jan 2020 when we got the first inklings of covid, we did not know how deadly it would be, and we also didn't know that it would basically only affect the old.
If you ban travel in Jan 2020, you can get to May 2020 and then decide that on balance it isn't that bad and you can deliberately infect the population. You could even do rolling vaccinations and infections so that no region gets infected before vaccinations.
By not acting, we gave up all those options and we just had to deal with the virus as it was, which, as I predicted, would not be the end of the world.

Also, since some people are having reading comprehension problems: I am, and was, against "lockdowns", i.e. bans on normal local daily life. Here's an article from March 2020 saying that:

lesswrong.com/posts/Ddgry4k6…
"we should probably take some extra casualties and just allow the virus to go through the population as quickly as possible, thereby avoiding months and months of costly lockdowns."
It's right there, people. ☝️

• • •

More from @RokoMijic

Mar 31
Various people including @michaelshermer and #ScottAaronson have asked how specifically advanced AI systems would cause human extinction, as if some incredible insight that we can't see right now is required.

However, I think that is wrong. Losing will be boring, actually.
Once you have technology for making optimizing systems that are smarter than human (by a lot), the threshold that those systems have to beat is beating the human-aligned superorganisms we currently have, like our governments, NGOs and militaries.
Lopsided military conflicts are boring. The Conquistadors didn't do anything magical to defeat the Aztecs, actually. They had a big advantage in disease resistance and in military tech like gunpowder, but everything they did was fundamentally normal - attacks, sieges, etc.
Mar 30
My take on @ESYudkowsky's Time article:

It seems likely to me that a haphazard approach to building advanced AI will result in the full destruction of the human race, or something even worse than that. However, there are different possible lethalities, and I don't know which will happen.
There's a spectrum from:

"AI systems will hack our attention and our societies and continue to reduce fertility and social value until we die out"

to

"AI will hack physics and initiate vacuum decay and annihilate the entire universe in a nanosecond"
Where will our destruction fall on this spectrum? I don't know.

So, what should we do?

We should certainly be spending a lot more effort on both working out how things might go wrong and how we can solve those problems.
Mar 30
"If we actually do this, we are all going to die"

time.com/6266923/ai-eli…
"Here’s what would actually need to be done:

The moratorium on new large training runs needs to be indefinite and worldwide. "
"If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth."
Mar 29
GPT-4-early (i.e. without RLHF) is completely amoral apparently. It even suggested targeted assassinations as a way to slow down another company's AI work.

I wonder whether we will get to the stage where some company X starts taking recommendations like that seriously.
We are at the bizarre stage where half the people are telling me there's nothing to worry about, and the other half are telling me to delete my own posts about what AI has already done because the mere idea of it is so dangerous.

The problem is not that GPT-4-early is suggesting the mere idea of doing bad things. It's that future systems, ones that are very capable, will suggest both the bad thing and realistic ways to carry it out.

Mar 21
Perhaps this is the strongest e/acc argument: we need to "dissipate the alpha", i.e. prevent any one human group from having a monopoly on AI tech, so there isn't one group that can do anything bad like destroy the world.
The problem with this is that it pushes these orgs into a Molochian game that they are trapped in. They are pressured to do absolutely anything they can to stay ahead.

With nobody having any alpha, you can end up in the "AI as a curse" equilibrium where many dozens of orgs all have a tech which they are 99% certain will destroy the world, but they deploy it anyway because if they don't someone else will, and there's still that 1% chance that they get to win the lightcone if it works.
Mar 21
Just had a great discussion with @algekalipso about AI alignment

Some ideas:

- "clean code" optimization using gradient descent, L1 norm, compute some complexity function on the computational graph to minimize, result might be "simple" circuits
- prompt-as-hyperparameter using particle swarm optimization and self-directed changes w/a dataset ("change your prompt to fix this mistake")
- correct bad human thought patterns using compressed less wrong content as prompt
- committed neural networks with ZK-proofs (e.g. proof that network N produces output O on input I), so that we can make machines which can publish a continual record of commitments that show they're never even thinking a hostile thought
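The thread only names these ideas, so here is a minimal toy sketch of the first bullet only, under my own illustrative assumptions (a plain least-squares model, numpy, made-up data), not anything taken from the discussion itself: adding an L1 "complexity" penalty to an ordinary gradient-descent objective pushes most weights to zero, leaving a sparser, "simpler" model.

```python
# Toy sketch: gradient descent with an added L1 "complexity" penalty.
# Illustrative only -- the data, model, and constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only 3 of 20 input features actually matter.
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 0.5]
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(20)
lr, lam = 0.01, 0.1  # learning rate and strength of the L1 penalty

for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    grad += lam * np.sign(w)           # subgradient of the L1 complexity term
    w -= lr * grad

# Most weights end up near zero; roughly only the 3 real ones survive.
print("weights with |w| > 0.05:", int(np.sum(np.abs(w) > 0.05)))
```

The same shape generalizes: the L1 term could be swapped for any other (sub)differentiable complexity measure computed on the computational graph, which is the more general version of the idea above.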
