@emilymbender@dair-community.social on Mastodon
Prof, Linguistics, UW // Faculty Director, CLMS // she/her // @emilymbender@dair-community.social & bsky // rep by @ianbonaparte
Feb 29 10 tweets 3 min read
It seems like there are just endless bad ideas about how to use "AI". Here are some new ones courtesy of the UK government.

... and a short thread because there is so much awfulness in this one article.
/1


ft.com/content/f2ae55…
Screencap: "UK ministers are piloting the use of generative artificial intelligence to analyse responses to government consultations and write draft answers to parliamentary questions. Oliver Dowden, the deputy prime minister, will on Thursday unveil tools that the AI “crack squad” at the heart of Whitehall is trialling with a view to wider rollouts across central departments and public services."

Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Both of those things can't be true at the same time. /2

Screencap: "The AI tools include using government-hosted versions of ChatGPT and a mix of open-source AI models securely hosted in-house to draft preliminary responses to questions to ministers submitted by MPs and to freedom of information requests. The drafts would always be checked by a human civil servant and the AI tools are programmed to ensure they cite their sources on all claims, so they can be verified."
Jan 14 11 tweets 2 min read
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that ppl writing abt whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs abt what they want to be true that this impedes their ability to understand what we wrote. Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.

>>
Dec 7, 2023 20 tweets 5 min read
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/

#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
Nov 24, 2023 27 tweets 7 min read
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/

As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
Jun 11, 2023 7 tweets 2 min read
There's a lot I like in this op-ed, but unfortunately it ends with some gratuitous ableism (and also weird remarks about AGI as a "holy grail").

First, the good parts:

theguardian.com/commentisfree/…

"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"

>>
Jun 3, 2023 8 tweets 2 min read
I'm so tired of this argument. The "AI doomers" are not natural allies of the folks who have been documenting the real-world harms of so-called AI systems: discrimination, surveillance, pollution of the information ecosystem, data theft, labor exploitation.

>> Those harms are real, they're being done by people to people using technology.

>>
Jun 1, 2023 11 tweets 3 min read
Here is another case study in how anthropomorphization feeds AI hype --- and now AI doomerism.

vice.com/en/article/4a3…

The headline starts it off with "Goes Rogue". That's a predicate that is used to describe people, not tools. (Also, I'm fairly sure no one actually died, but the headline could be clearer about that, too.)

>>
Apr 17, 2023 15 tweets 7 min read
This is so painful to watch. @60Minutes and @sundarpichai working in concert to heap on the #AIHype. Partial transcript (that I just typed up) and reactions from me follow:

Reporter: "Of the AI issues we talked about, the most mysterious is called 'emergent properties'. Some AI systems are teaching themselves skills that they weren't expected to have."

"Emergent properties" seems to be the respectable way of saying "AGI". It's still bullshit.

>>
Apr 3, 2023 7 tweets 2 min read
To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people is anathema to ethical development of the technology.
>> #AIhype isn't the only problem, for sure, but it is definitely a problem and one that exacerbates others. If LLMs are maybe showing the "first sparks of AGI" (they are NOT) then it's easier to sell them as reasonable information access systems (they are NOT).

>>
Mar 29, 2023 28 tweets 7 min read
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #AIhype. Here's a quick rundown.

>> First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

futureoflife.org/open-letter/pa…

>>
Mar 27, 2023 4 tweets 2 min read
Ugh -- I'm seeing a lot of commentary along the lines of "'stochastic parrot' might have been an okay characterization of previous models, but GPT-4 actually is intelligent."

Spoiler alert: It's not. Also, stop being so credulous.

>> (Some of this I see because it's tweeted at me, but more of it comes to me by way of the standing search I have on the phrase "stochastic parrots" and its variants. The tweets in that column have been getting progressively more toxic over the past couple of months.)

>>
Mar 23, 2023 6 tweets 1 min read
Remember when you went to Microsoft for stodgy but basically functional software and the bookstore for speculative fiction?

arXiv may have been useful in physics and math (and other parts of CS) but it's a cesspool in "AI"—a reservoir for hype infections

arxiv.org/abs/2303.12712

From the abstract of this 154-page novella: "We contend that (this early version of) GPT-4 is part of a new cohort of LLMs [...] that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models."

>>
Mar 21, 2023 5 tweets 2 min read
More 🔥🔥🔥 from the FTC!

ftc.gov/business-guida…

A few choice quotes (but really, read the whole thing, it's great!):

>> "The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose."

ftc.gov/business-guida…

>>
Mar 20, 2023 5 tweets 2 min read
Several things that can all be true at once:

1. Open access publishing is important
2. Peer review is not perfect
3. Community-based vetting of research is key
4. A system for by-passing such vetting muddies the scientific information ecosystem

Yes, this is both a subtweet of arXiv and of every time anyone cites an actually reviewed & published paper by just pointing to its arXiv version, further lending credibility to all the nonsense that people "publish" on arXiv and then race to read & promote.
Mar 15, 2023 9 tweets 2 min read
Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole.

>> Things they aren't telling us:
1) What data it's trained on
2) What the carbon footprint was
3) Architecture
4) Training method

>>
Mar 14, 2023 7 tweets 2 min read
A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said:

#DataDocumentation #AIhype

>> "One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on.

>>
Mar 14, 2023 9 tweets 4 min read
MSFT lays off its responsible AI team

The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers"

platformer.news/p/microsoft-ju…

>> There is no urgency to build "AI". There is no urgency to use "AI". There is no benefit to this race aside from (perceived) short-term profit gains.

>>
Mar 6, 2023 4 tweets 1 min read
I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice that your field was invaded by the Altmans of the world, but sitting by quietly while they spew nonsense is a choice.

>> Likewise, describing your own work in terms of unmotivated and aspirational analogies to human cognitive abilities is also a choice.

>>
Mar 6, 2023 6 tweets 3 min read
I really don't understand why @60Minutes relegated this to their "overtime" segment. @timnitGebru is here with the most important points:

cbsnews.com/news/chatgpt-l…

Meanwhile, the way that MSFT's Brad Smith is grinning as Stahl describes the horrific things that the Bing chatbot was saying. And then he breezily said: "We fixed it in 24 hours! How many problems are fixable in 24 hours?"

cbsnews.com/news/chatgpt-l…

>>
Mar 5, 2023 13 tweets 4 min read
Finally had a moment to read this statement from the FTC and it is 🔥🔥🔥

ftc.gov/business-guida…

A few choice quotes:

"But one thing is for sure: ['AI' is] a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them."

ftc.gov/business-guida…

>>
Feb 27, 2023 9 tweets 2 min read
The reactions to this thread have been an interesting mix --- mostly folks are in agreement and supportive. However, there are a few patterns in the negative responses that I think are worth summarizing:

Some folks are very upset with my tone and really feel like I should be more gentle with the poor poor billionaire.

¯\_(ツ)_/¯

>>