This is a 🧵collecting signs of the coming Recursively Self-Improving AI Apocalypse.
I've recently started to worry that the people supposed to be looking out for this stuff may be asleep at the wheel, so it's worth at least a twitter thread, you know, just in case.
The first "oh fuck" moment recently: GPT-f, using deep learning on automated theorem proving. In the words of the authors: "the first time a deep-learning based system has contributed proofs that were adopted by a formal mathematics community" arxiv.org/abs/2009.03393
Second such moment, Google using AI to improve its AI chips. Importantly, the resulting designs were very different, suggesting much room for improvement. If only they had some better TPU chips to do it on... theverge.com/2021/6/10/2252…
Tesla's Operation Vacation is about automating the entire process of self-improving a self-driving AI. Given the above, I don't see what would stop them from turning it on itself, or on the custom-designed DOJO chips Tesla is working on as its answer to TPUs.
I would like the AI Safety community to stop being worried about the far future and start being worried about the near future. Given recent interactions, I'm not that optimistic they'll engage, but I'll keep adding WTFs to this thread as I find them, or as others reply with them here.
I can target by all sorts of demographic characteristics: location, language, device, age, and gender. I can target by audience: interests, keywords, movies...🧵
I can literally make a list with the people I want to target, and buy ads to target them. So... what if I target myself?
Like, literally, make a list, add myself to it, and pay twitter top dollar to show me a white banner ad. Or a kitten or something. If that works, I would be paying twitter an amount to *not* show me ads, but replace them with neutral/awwww content. Almost like YouTube Premium.
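As a back-of-the-envelope check on what "paying Twitter to not show me ads" might cost, here's a minimal sketch; the CPM and the number of promoted tweets seen per day are made-up placeholder figures, not actual Twitter Ads pricing.

```python
# Back-of-the-envelope cost of buying ads targeted at a one-person audience (myself),
# so that my ad slots get filled with my own neutral/kitten content instead.
# All numbers are assumptions for illustration, not real Twitter Ads rates.

ads_seen_per_day = 40      # assumed promoted tweets I see daily
cpm_dollars = 6.00         # assumed cost per 1,000 impressions for a tailored audience

daily_cost = ads_seen_per_day / 1000 * cpm_dollars
yearly_cost = daily_cost * 365

print(f"Daily cost:  ${daily_cost:.2f}")   # $0.24/day with these assumptions
print(f"Yearly cost: ${yearly_cost:.2f}")  # ~$87.60/year, in YouTube Premium territory
```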
So, not a doctor here, and I know nothing about Ivermectin. But I have a few questions for anyone willing to engage. I promise I won't press too hard, I appreciate anyone trying to honestly engage.
1. What would have to be true for 60 controlled trials, 30 of those randomized, no matter the size, to all be pointing in the same direction? What's our alternative explanation here? (Naive probability sketch below, after question 2.)
2. What's the rationale for the FDA controlling use of a substance that's no more dangerous than certain kitchen spices? Forget effectiveness, given that we know it's very well tolerated, what's the rationale for not allowing people to make their own choices here?
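To put a number on question 1, here is a naive probability sketch: it treats each trial's direction as an independent coin flip under a "no effect" null, which deliberately ignores publication bias, trial quality, and correlated errors; those are exactly the alternative explanations the question is asking about.

```python
# Naive sketch: how likely is it that 60 independent trials all point the same way
# if each were a 50/50 coin flip under a "no effect" null?
# Ignores publication bias, trial quality, and dependence between trials.

n_trials = 60
p_unanimous = 2 * 0.5 ** n_trials   # factor of 2: all-positive OR all-negative

print(f"P(all {n_trials} trials agree by pure chance) ≈ {p_unanimous:.2e}")
# ≈ 1.7e-18 — chance alone isn't a plausible explanation; the live question is
# whether systematic bias or correlation across trials is.
```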
Take people who are supposed to be our foremost independent thinkers, put them in a structure that suppresses independent thought, and let's see what happens.
I'm increasingly convinced that we need a truth-accumulation structure outside of, and alongside, what is called science today. And to get this out of the way: I've got a PhD and a double-digit H-index.
I started a startup rather than continuing in academia because it was clearly the better way to contribute to the advancement of knowledge. I just could not see how chasing grants and doing admin work, with a break for teaching, could possibly lead to new knowledge being found.
@garyblack00 I'm starting to believe you're not an honest broker. Reminder that I have challenged you to a $10k bet, loser donates to charity of winner's choice, about whether Tesla will have a demand problem if it doesn't advertise or do PR by 2025. Easy money, right?
Media should be free to title their articles whatever they want without constant second-guessing
Readers should be free not to have their limbic system hijacked with linguistic patterns that smuggle in unstated premises
The solution to the paradox? Headline Neutralizer (TM) 🧵
How does it work?
Headline Neutralizer (TM) is a non-existent service that applies preset linguistic transformation rules designed to retain explicit meaning (which media is liable for) while silencing implicit meaning (which media is not liable for)
Let's try an example:
The Independent recently published this headline:
"The Late Show viewers call out Jon Stewart for peddling ‘harmful’ lab leak coronavirus theory"
Let's apply some transformation rules and see what happens
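The thread doesn't spell out the rules, so here's a minimal sketch of what such preset transformations could look like; the specific substitutions below (softening loaded verbs, stripping scare quotes) are my own illustrative assumptions, not an actual Headline Neutralizer (TM) rule set.

```python
import re

# A minimal sketch of "preset linguistic transformation rules" that try to keep
# a headline's explicit claim while muting the implicit framing.
# These rules are illustrative assumptions, not the real Headline Neutralizer (TM).
RULES = [
    (r"call(?:s|ed)? out", "criticize"),   # "call out" presumes guilt
    (r"peddl(?:e|es|ing)", "promoting"),   # "peddling" implies bad faith
    (r"slam(?:s|med)?", "criticize"),      # tabloid intensity
    (r"[‘']([^’']+)[’']", r"\1"),          # scare quotes smuggle in doubt
]

def neutralize(headline: str) -> str:
    for pattern, replacement in RULES:
        headline = re.sub(pattern, replacement, headline, flags=re.IGNORECASE)
    return headline

original = ("The Late Show viewers call out Jon Stewart for "
            "peddling ‘harmful’ lab leak coronavirus theory")
print(neutralize(original))
# -> "The Late Show viewers criticize Jon Stewart for
#     promoting harmful lab leak coronavirus theory"
```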
VIDO-InterVac, the new employer of Angela Rasmussen, enforcer of the anti-lab-leak narrative, lists not one but five obviously China-linked organizations as funding contributors in its latest report, covering 2019-20. Amounts not disclosed. Has this conflict of interest been declared?
Interestingly, they make it hard to get clarity. Their about>partners page is supremely uninformative but points to the annual reports. The last annual report simply has this alphabetic list on its last page, with no further info or amounts contributed. vido.org/assets/upload/…
As a reminder, Dr Rasmussen was previously at the Center for Infection and Immunity at Columbia University, the center run by another China-linked researcher and anti-lab-leak combatant, W. Ian Lipkin.