To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people, is anathema to ethical development of the technology.
>>
#AIhype isn't the only problem, for sure, but it is definitely a problem and one that exacerbates others. If LLMs are maybe showing the "first sparks of AGI" (they are NOT), then it's easier to sell them as reasonable information access systems (they are NOT).
>>
If (even) the people arguing for a moratorium on AI development do so because they ostensibly fear the "AIs" becoming too powerful, they are lending credibility to every politician who wants to gut social services by having them allocated by "AIs" that are surely "smart" and "fair".
>>
If the call for "AI safety" is couched in terms of protecting humanity from rogue AIs, it very conveniently displaces accountability away from the corporations scaling harm in the name of profits.
>>
It's frankly infuriating to read a signatory to the "AI pause" letter complaining that the statement we released from the listed authors of the Stochastic Parrots paper somehow squandered the "opportunity" created by the "AI pause" letter in the first place.
>>
Yes, we need regulation. But as we said: "We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
A bunch of AI researchers high on their own supply wrote a ridiculous letter and got famous people including a certain billionaire man-child to sign, and in the process misappropriated our work. So we speak up and somehow we're at fault? I think NOT.
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #AIhype. Here's a quick rundown.
>>
First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.
Ugh -- I'm seeing a lot of commentary along the lines of "'stochastic parrot' might have been an okay characterization of previous models, but GPT-4 actually is intelligent."
Spoiler alert: It's not. Also, stop being so credulous.
>>
(Some of this I see because it's tweeted at me, but more of it comes to me by way of the standing search I have on the phrase "stochastic parrots" and its variants. The tweets in that column have been getting progressively more toxic over the past couple of months.)
>>
What's particularly galling about this is that people are making these claims about a system that they don't have anywhere near full information about. Reminder that OpenAI said "for safety" they won't disclose training data, model architecture, etc.
From the abstract of this 154-page novella: "We contend that (this early version of) GPT-4 is part of a new cohort of LLMs [...] that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models."
>>
And "We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting."
A few choice quotes (but really, read the whole thing, it's great!):
>>
"The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose."
"Should you even be making or selling it?"
"Are you effectively mitigating the risks?"
"Are you over-relying on post-release detection?"
"Are you misleading people about what they’re seeing, hearing, or reading?"
1. Open access publishing is important
2. Peer review is not perfect
3. Community-based vetting of research is key
4. A system for by-passing such vetting muddies the scientific information ecosystem
Yes, this is a subtweet both of arXiv and of every time anyone cites an actually reviewed & published paper by pointing only to its arXiv version, thereby further lending credibility to all the nonsense that people "publish" on arXiv and then race to read & promote.
Shout out to the amazing @aclanthology which provides open access publishing for most #compling / #NLProc venues and to all the hardworking folks within ACL reviewing & looking to improve the reviewing process.
Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole.
>>
Things they aren't telling us:
1) What data it's trained on
2) What the carbon footprint was
3) Architecture
4) Training method
>>
But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9).