My friend said that Yoshua Bengio is on the NYT talking about how "AI systems" will be "fully independent" in a decade & such. I don't know what to say at this point. Maybe #TESCREAL influence + the arrogance among AI ppl who want to feel like they're working on a literal god.
Are there any consequences when in 10 years it doesn't happen? Like how Hinton said radiologists would be gone in 5 years (and that was 5 years ago)? Or the fact that "the singularity" hasn't happened and they just "update" their dates?
Or is it like the priests who talk about the end of the world every so often and it never stops?
How many people are there right now who are preaching that tomorrow, or some time near, will be the end of the world?
Page 15 of The Continent highlights our warnings for Africans not to fall into the hype & hand data to OpenAI + co., & instead to support local efforts where $$ goes to speakers of the languages whose data is being taken. @alienelf & @tamati_biskit talk about @LelapaAI's vision.
"“These programs are built by the West on data from the West..represent their values & principles,” said Abbott, who notes that African perspectives and
history are largely excluded from the datasets used by OpenAI & Google’s LLMs. That’s because they cannot easily be “scraped”.
"For Lelapa, this represents an opportunity. Because African data is so hard to access, OpenAI & Google will struggle to make its tools work effectively on the continent – leaving a gap in the market for a homegrown alternative. “The fact that ChatGPT fails on our languages...
Let's review some of the #TESCREAL institutes & people quoted in this article. The Future of Humanity Institute: founded & led by Nick Bostrom, you know, the "Blacks are more stupid than whites" dude, who also later uses the N word.
Oh and an even worse "apology" when he realized his email was about to be published. As if he hasn't been talking about "dysgenic pressures" as an existential risk. But minor things not to worry ourselves with while thinking about the whole of HUMANITY. vice.com/en/article/z34…
And his fellow Sandberg, who defends him with this gem:
When OpenAI launched...it sought "to advance digital intelligence in the way that is most likely to benefit humanity..., unconstrained by a need to generate financial return."...to save us from AI, they first had to build it.🙄
by @meliarobin & @mjnblack businessinsider.com/sam-altman-ope…
But then to be "unconstrained by a need to generate financial return" they had to get that $10B, right?
So much rationality and saving humanity by these people.
""It's Sam's world," said Ric Burton, a prominent tech developer, "and we're all living in it."
Which prompts the question: Is it a world we want?"
I already know the answer to this but I believe the rest of the article is going to elaborate.
"To spare them the embarrassment of dealing directly with Sudan’s war criminal dictator, the EU chose intermediaries to pass logistics and funds to Bashir’s regime. Those included ministries of interior of Italy, France, Germany and even...some UN organisations like UNHCR."
"Out of €1.2 Billion allocated through the trust fund, at least €250 million was paid to the regime, mostly in form of logistics, training and cash. The trust fund was deliberately designed to mix much-needed humanitarian aid with the goal of clamping down on refugees."
"In a policy statement, the Commission said the agency is committed to combatting unfair or deceptive acts and practices related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies." ftc.gov/news-events/ne…
"Recent years have seen a proliferation of biometric information technologies. For instance, facial, iris, or fingerprint recognition technologies collect and process biometric information to identify individuals."
"Other biometric information technologies use or claim to use biometric information in order to determine characteristics of individuals, ranging from the individuals’ age, gender, or race to the individuals’ personality traits, aptitudes, or demeanor."
"Why would you, a CEO or executive at a high-profile technology company...proclaim how worried you are about the product you are building and selling?
Answer: If apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy." latimes.com/business/techn…
"OpenAI has worked for years to carefully cultivate an image of itself as a team of hype-proofed humanitarian scientists, pursuing AI for the good of all — which meant that when its moment arrived, the public would be well-primed to receive its apocalyptic AI proclamations...
credulously, as scary but impossible-to-ignore truths about the state of technology."
This is why I was so angry when they were announced as such in 2015.