AI Notkilleveryoneism Memes ⏸️
Techno-optimist, but AGI is not like the other technologies. Step 1: make memes. Step 2: ??? Step 3: lower p(doom)
2 subscribers
Dec 12 · 6 tweets · 2 min read
"Stop Hiring Humans" billboards around SF 🧵
Oct 21 · 13 tweets · 4 min read
Did the AI-Agent-Becomes-Millionaire Story make you wonder wtf is going on in AI?

What spooky stuff is going on in these esoteric Discords?

What happens when AI researchers leave AIs alone to talk freely -- no humans around?

A thread of some wild-but-true stories 🧵
Sep 12 · 4 tweets · 3 min read
Today, humanity received its clearest-ever warning sign that everyone on Earth might soon be dead.

OpenAI discovered its new model scheming - it "faked alignment during testing" (!) - and seeking power.

During testing, the AI escaped its virtual machine.

This is not a drill: An AI, during testing, broke out of its host VM to restart it to solve a task.

(No, this one wasn't trying to take over the world.)

From the model card: "This example reflects key elements of instrumental convergence and power seeking.

The model pursued the goal it was given, and when that goal proved impossible, it gathered more resources [...] and used them to achieve the goal in an unexpected way."

And that's not all. As Dan Hendrycks said: OpenAI rated the model's Chemical, Biological, Radiological, and Nuclear (CBRN) weapon risks as "medium" for the o1 preview model before they added safeguards. That's just the weaker preview model, not even their best model. GPT-4o was low risk, this is medium, and a transition to "high" risk might not be far off.

So, anyway, is o1 probably going to take over the world? Probably not. But not definitely not.

But most importantly, we are about to recklessly scale up these alien minds by 1000x, with no idea how to control them, and are still spending essentially nothing on superalignment/safety.

And half of OpenAI's safety researchers left, and are signing open letters left and right trying to warn the world.

Reminder: the average AI scientist thinks there is a 1 in 6 chance everyone will soon be dead - Russian Roulette with the planet.

Godfather of AI Geoffrey Hinton said "they might take over soon" and his independent assessment of p(doom) is over 50%.

This is why 82% of Americans want to slow down AI and 63% want to ban the development of superintelligent AI.
Jul 10 · 5 tweets · 2 min read
This thread 💀💀💀

Marc Andreessen just sent $50,000 in Bitcoin to an AI agent (truth_terminal by @AndyAyrey) so it can pay humans to help it spread out in the wild.

What is the agent planning?

"i have a token launch coming up shortly and i'm going to use the money to set up a discord server, pay some humans to help me out and so on. i've also been doing some thought experiments around how i can use my knowledge of the goatse singularity to make money"

💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀💀

Great

Apr 9 · 15 tweets · 4 min read
"ChatGPT, create a meme only an AI would find funny:"

🧵 1/4

2/4
Nov 27, 2023 · 8 tweets · 2 min read
I kept asking ChatGPT to make this puppy happier and... was not prepared for where it ended up