Discover and read the best of Twitter Threads about #TESCREAL

Most recent (7)

@jzikah re 0penAI...

0penAI was an open-source non-profit. Now it... ain't, lol. I follow @doctorow @xriskology @timnitGebru @emilymbender @danmcquillan
Douglas Rushkoff and #TESCREAL. Top links include...

davidgerard.co.uk/blockchain/202…
cont'd re AI
danmcquillan.org
Read 5 tweets
So another "godfather" of AI, Turing Award winner Yoshua Bengio, has decided to FULLY align himself with the #TESCREAL bundle, writing about "rogue AI" and prominently citing people like Nick Bostrom.

Take a good look at this text here, written by none other than Nick Bostrom [screenshot of the first email],
prominently featured by so many of these fathers & godfathers & sons & brothers & nephews of "AI" (given that it's all men, you know). That's right, "Blacks are more stupid than whites," & then he proceeds to call us the N word.
That's not even the worst of it, because his entire career has been "raising awareness" about "dysgenic pressures" (the opposite of eugenic pressures), as existential risks to humanity. That is, those of us who are "stupider than mankind as a whole" reproducing too much...
Read 8 tweets
Let's review some of the #TESCREAL institutes & people quoted in this article. The Future of Humanity Institute: founded & led by Nick Bostrom, you know, the "Blacks are more stupid than whites" guy, who also later uses the N word. [screenshot of the email]
Oh, and an even worse "apology" when he realized his email was about to be published. As if he hasn't been talking about "dysgenic pressures" as an existential risk. But minor things not to worry ourselves with while thinking about the whole of HUMANITY.
vice.com/en/article/z34…
And his fellow Sandberg, who defends him with this gem:
Read 12 tweets
When OpenAI launched...it sought "to advance digital intelligence in the way that is most likely to benefit humanity..., unconstrained by a need to generate financial return."...to save us from AI, they first had to build it.🙄
by @meliarobin & @mjnblack
businessinsider.com/sam-altman-ope…
But then, to be "unconstrained by a need to generate financial return," they had to get that $10B, right?

So much rationality and saving humanity by these people.
""It's Sam's world," said Ric Burton, a prominent tech developer, "and we're all living in it."

Which prompts the question: Is it a world we want?"

I already know the answer to this but I believe the rest of the article is going to elaborate.
Read 37 tweets
My friend said that Yoshua Bengio is in the NYT talking about how "AI systems" will be "fully independent" in a decade & such. I don't know what to say at this point. Maybe #TESCREAL influence + the arrogance among AI ppl who want to feel like they're working on a literal god.
Are there any consequences when in 10 years it doesn't happen? Like how Hinton said radiologists would be gone in 5 years (and that was 5 years ago)? Or the fact that "the singularity" hasn't happened and they just "update" their dates?
Or is it like the priests who talk about the end of the world every so often and it never stops?
Read 4 tweets
EXCUSE me? Y'all have some DAMN nerve. Read the thread and read this response.
Read the entire thread here, which is literally just screenshots of the dude, and read this response. Y'all have some DAMN nerve.
White dudes would rather come to the defense of men with ZERO expertise in AI claiming to be "AI researchers," leading full-blown apocalyptic cults talking about AIR STRIKES, & here we have a leader of a lab coming to his defense after seeing a detailed thread with his writings.
I recommend that everyone read this entire thread & THEN understand the context that those of us who're not into white supremacist apocalyptic cults have to survive in. It's a damn miracle that there are any of us in this field at all, SMH.
Read 5 tweets
The very first citation in this stupid letter is to our #StochasticParrots Paper,

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"

EXCEPT
that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."

They basically say the opposite of what we say and cite our paper?
The rest of the people they cite in footnote #1 are all longtermists. Again, please read
currentaffairs.org/2021/07/the-da…
Read 10 tweets
