Discover and read the best of Twitter Threads about #AIHype

Most recent (24)

#Google's #AIHype Circle: We have to do #Bard because everyone else is doing #AI; everyone else is doing AI because we're doing Bard.

doctorow.medium.com/googles-ai-hyp…

[Image: an anatomical cutaway of a…]
Google's plummeting search quality made the company desperate to please its "activist investors"

doctorow.medium.com/googles-ai-hyp…

#SecurityThroughObscurity #SearchQuality #Enshittification #Google

[Image: Having to build a search-ra…]
Google can improve search, or it can chase stock gains through #AIHype. It chose the latter

doctorow.medium.com/googles-ai-hyp…

[Image: So Google finds itself on t…]
Read 5 tweets
This is so painful to watch. @60Minutes and @sundarpichai working in concert to heap on the #AIHype. Partial transcript (that I just typed up) and reactions from me follow:
@60Minutes @sundarpichai Reporter: "Of the AI issues we talked about, the most mysterious is called 'emergent properties'. Some AI systems are teaching themselves skills that they weren't expected to have."

"Emergent properties" seems to be the respectable way of saying "AGI". It's still bullshit.

>>
As @mmitchell_ai points out (read her whole thread; it's great) if you create ignorance about the training data, of course system performance will be surprising.



>>
Read 15 tweets
1/ Behavioral Health Link develops & markets #CrisisLines services to #988Lifeline providers. One of their offerings: "Voice - Currently in R&D, voice analysis technology to provide real time feedback on a caller’s emotional state to improve care delivery." web.archive.org/web/2022120220…
2/ I've started mentioning Dr. John Draper's connection there; he was recently hired as their President of R&D. You may recall Draper was the long-time leader of Vibrant Emotional Health's administration of the Nat'l Suicide Prevention Lifeline, now #988Lifeline. linkedin.com/in/john-draper…
3/ Between end of March & now, Behavioral Health Link took down its page of software solutions. Maybe it will go back up, but I'd like to know why it's gone. Seems they've sanitized the site of their "#AI"-type services. No doubt still in development. web.archive.org/web/2023041614…
Read 11 tweets
#Enshittification is platforms devouring themselves: first they tempt users with goodies. Once users are locked in, goodies are withdrawn and dangled before businesses. Once business customers are stuck, all value is claimed for platform shareholders:

pluralistic.net/2023/01/21/pot…

1/

[Image: a complex mandala of knobs…]
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2023/04/12/alg…

2/
Enshittification isn't just another way of saying "fraud" or "price gouging" or "wage theft." Enshittification is intrinsically digital, because moving all those goodies around requires the flexibility that only comes with a *digital* business.

3/
Read 107 tweets
To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people, is anathema to ethical development of the technology.
>>
#AIhype isn't the only problem, for sure, but it is definitely a problem and one that exacerbates others. If LLMs are maybe showing the "first sparks of AGI" (they are NOT) then it's easier to sell them as reasonable information access systems (they are NOT).

>>
If (even) the people arguing for a moratorium on AI development do so bc they ostensibly fear the "AIs" becoming too powerful, they are lending credibility to every politician who wants to gut social services by having them allocated by "AIs" that are surely "smart" and "fair".

>>
Read 7 tweets
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #AIhype. Here's a quick rundown.

>>
First, for context, note that URL: The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

futureoflife.org/open-letter/pa…

>>
For some context, see:

aeon.co/essays/why-lon…

So that already tells you something about where this is coming from. This is gonna be a hot mess.

>>
Read 28 tweets
A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said:

#DataDocumentation #AIhype

>>
"One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on.

>>
Since at least 2017 there have been multiple proposals for how to do this documentation, each accompanied by arguments for its importance.

>>
Read 7 tweets
MSFT lays off its responsible AI team

The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers"

platformer.news/p/microsoft-ju…

>>
@ZoeSchiffer @CaseyNewton There is no urgency to build "AI". There is no urgency to use "AI". There is no benefit to this race aside from (perceived) short-term profit gains.

>>
@ZoeSchiffer @CaseyNewton It is very telling that when push comes to shove, despite having attracted some very talented, thoughtful, proactive researchers, the tech cos decide they're better off without ethics/responsible AI teams.

>>
Read 9 tweets
I really don't understand why @60Minutes relegated this to their "overtime" segment. @timnitGebru is here with the most important points:

cbsnews.com/news/chatgpt-l…
@60Minutes @timnitGebru Meanwhile, MSFT's Brad Smith was grinning as Stahl described the horrific things the Bing chatbot had been saying. And then he breezily said: "We fixed it in 24 hours! How many problems are fixable in 24 hours?"

cbsnews.com/news/chatgpt-l…

>>
@60Minutes @timnitGebru But the fix wasn't anything internal to their chatbot. Rather, it was a change to the UI, i.e. a change to the ways in which people can interact with the system (limits on the length of conversations).
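For a sense of what such a "fix" amounts to in practice: a conversation-length cap can be a few lines of wrapper code around the model. The sketch below is hypothetical (an illustration, not Microsoft's implementation); note that the model underneath is untouched.

```python
# Hypothetical sketch of a UI-level conversation cap -- NOT Microsoft's
# actual code. The model itself is untouched; the wrapper just refuses
# to continue long sessions.
MAX_TURNS = 5  # assumed limit, for illustration only

def guarded_reply(history, user_message, model_reply_fn):
    """Return a model reply, unless the session has hit the turn cap."""
    if len(history) >= MAX_TURNS:
        return "This conversation has reached its limit. Please start a new one."
    return model_reply_fn(history + [user_message])
```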
Read 6 tweets
The NYTimes (famous for publishing transphobia) often has really bad coverage of tech, but I appreciate this opinion piece by Reid Blackman:

nytimes.com/2023/02/23/opi…

>>
Blackman uses Microsoft's own AI Principles to clearly explain why BingGPT shouldn't be released into the world. He's right to praise Microsoft's principles and also spot on in his analysis of how the development of BingGPT violates them.

>>
And, as Blackman argues, this whole episode shows how self-regulation isn't going to suffice. Without regulation providing guardrails, the profit motive incentivizes a race to the bottom --- even in cases of clear risk to longer term reputation (and profit).

>>
Read 5 tweets
The @nytimes, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.

>>
@nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯

>>
@nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.

>>
Read 9 tweets
TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models".

#AIHype #MathyMath

[Screencap: Twitter profile of @KirkDBorne. Header image in…]
[Screencap: tweet reading "Theory of Mind May Have Spo…"]
That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it.

>>
NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago.

>>
Read 5 tweets
Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack!

blog.google/technology/ai/…

#MathyMath #AIHype
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!

There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".

>> Screencap: "AI is the most profound technology we are w
And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how?

#AIHype #InAweOfScale

>> Screencap: "Since then we’ve continued to make invest
Read 9 tweets
Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician".
vice.com/en/article/jgp…

/1
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.

/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.

/3
Read 14 tweets
I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...

…mail-com.offcampus.lib.washington.edu/business/rob-m…

>>
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except the reporter had talked to me so I was (increasingly morbidly) curious how my words were being used.

>>
The second paragraph (and several others) is actually the output of their LLM. This is flagged in the subhead and in the third paragraph. I still think it's terrible journalistic practice.

>> Screencap: "Before Aid...Screecap: "could becom...
Read 17 tweets
In an effort to curb misunderstanding and #AIHype on the topic of language models (LMs), we're circulating a tweet thread to offer a baseline understanding of how systems such as OpenAI's GPT-3 work to deliver sequences of human-like text in response to prompts. /1
We're directing this thread especially to our humanist readers who may be only peripherally aware of the increased commercialization of (and hype about) "artificial intelligence" (AI) text generators.
NB: AI is itself a slippery term: we use it w/ caution.
/2
The best known of these models is OpenAI’s GPT-3, which is licensed by Microsoft. Students can use them to generate human-like text by paying OpenAI directly or subscribing to subsidiary “apps." They may also access less powerful but free models for text generation. /3
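To make the mechanism concrete, here is a toy sketch in plain Python: entirely an illustration of ours, with a made-up twelve-word corpus and bigram counts standing in for GPT-3's neural network over subword tokens. What it shares with GPT-3 is the shape of the loop: predict a distribution over possible next tokens, sample one, append it, repeat. Nothing in that loop consults facts or intent, which is why the output is "human-like text" and nothing more. /3a

```python
# Toy next-word sampler (NOT GPT-3: GPT-3 uses a large neural network
# over subword tokens; this uses bigram counts over a made-up corpus).
# The shared idea: generate text by repeatedly sampling "what comes next".
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the fish".split()

# Count which words follow which in the (tiny, invented) training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed="the", length=8):
    out = [seed]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:  # dead end: no word ever followed this one
            break
        words, weights = zip(*counts.items())
        # Sample the next word in proportion to how often it followed.
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the mat and the cat"
```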
Read 37 tweets
Hi folks -- time for another #AIhype take down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:

fiercebiotech.com/medtech/ai-spo…

/1

[Screencap: headline of th…]
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).

/2
Let's pause to once again note the use of "AI" in this way suggests that "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.

/3
Read 26 tweets
People often ask me if I think computers could ever understand language. You might be surprised to hear that my answer is yes! My quibble isn't with "understand", it's with "human level" and "general".

>>
To answer that question, of course, we need a definition of understanding. I like the one from Bender & @alkoller 2020: Meaning is the relationship between form and something external to language, and understanding is retrieving that intent from form.

>>

[Screencap: first paragrap…]
So when I ask a digital voice assistant to set a timer for a specific time, or to retrieve information about the current temperature outside, or to play the radio on a particular station, or to dial a certain contact's phone number and it does the thing: it has understood.

>>
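As a concrete (and deliberately crude) sketch of what "retrieving intent from form" means operationally, consider the toy parser below. It is hypothetical code, not any real assistant's implementation, with regexes standing in for real spoken-language understanding: the utterance is the form, and the (command, arguments) pair is the intent the system acts on. If the device then does the right thing, it has understood, in this narrow and well-defined sense.

```python
# Toy "understanding" as intent retrieval (hypothetical; regexes stand in
# for real spoken-language understanding). Form in, intent out.
import re

def parse_intent(utterance: str):
    m = re.search(r"set a timer for (\d+) (second|minute|hour)s?", utterance)
    if m:
        return ("set_timer", {"amount": int(m.group(1)), "unit": m.group(2)})
    if "temperature" in utterance:
        return ("get_weather", {})
    m = re.search(r"play (?:the radio on )?(.+)", utterance)
    if m:
        return ("play_station", {"station": m.group(1)})
    return ("unknown", {})

print(parse_intent("set a timer for 10 minutes"))
# -> ('set_timer', {'amount': 10, 'unit': 'minute'})
```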
Read 9 tweets
Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (incl creative) activity with the cognitive activity itself.

Which makes me wonder >>
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?

>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?

>>
Read 5 tweets
Let's do a little #AIhype analysis, shall we? Shotspotter claims to be able to detect gunshots from audio, and its use case is to alert the cops so they can respond.

>>
Q1: Is it plausible that a system could give the purported output (time & location of gunshot) given the inputs (audio recordings from surveillance microphones deployed in a neighborhood)?

>>
A1: At a guess, such a system could detect loud noises that include gunshots (but lots of other things too) and might be able to provide some location information (which mics picked it up?), but keep in mind that cityscapes provide lots of opportunities for echoes...

>>
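To put rough numbers on A1, here is a back-of-the-envelope sketch (toy values, illustrative code only, with no connection to ShotSpotter's actual system). A single microphone can flag loud impulses by amplitude alone, which cannot tell a gunshot from a firework or a backfiring car, and a modest echo delay translates into a location error of many metres.

```python
# Toy arithmetic (illustration only, not ShotSpotter's method): what a
# loudness detector can and cannot know, and how echoes corrupt location.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def loud_impulse(samples, threshold=0.8):
    """Flag an 'impulse' purely by peak amplitude. A gunshot, a firework,
    and a backfiring car all look alike to this test."""
    return max(abs(s) for s in samples) >= threshold

def range_error_from_echo(extra_delay_s):
    """An echo that delays apparent arrival by extra_delay_s seconds
    shifts the inferred distance to the source by this many metres."""
    return SPEED_OF_SOUND * extra_delay_s

print(loud_impulse([0.1, 0.95, 0.2]))  # True -- but loud != gunshot
print(range_error_from_echo(0.05))     # 17.15 m error from a 50 ms echo
```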
Read 12 tweets
This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody. So, for your entertainment/education in spotting #AIhype, I present a brief annotated reading:

theatlantic.com/technology/arc…

/1
Straight out of the gate, he's not just comparing "AI" to "miracles" but flat out calling it one and quoting Google & Tesla (ex-)execs making comparisons to "God" and "demons".

/2

[Screencap from linked article: "Miracles can be perplex…"]
This is not the writing of someone who actually knows what #NLProc is. If you use grammar checkers, autocorrect, online translation services, web search, autocaptions, a voice assistant, etc you use NLP technology in everyday life. But guess what? NLP isn't a subfield of "AI".
/3

[Screencap, same article: "Early artificial intelligence…"]
Read 25 tweets
*sigh* once again relegated to the critics' box. The framing in this piece leans so hard into the victims (no one believed us) persevering (we showed 'em!) narrative of the deep learning folks. #AIhype ahead:

venturebeat.com/ai/10-years-on…
"Success draws critics", uh nope. I'm not in this conversation because of whatever success deep learning has had. I'm in it because of the unfounded #AIhype and the harms being carried out in the name of so-called "AI".

>>

[Screencap from article link…]
"huge progress ... in some key applications like computer vision and language" --- uh "language" isn't an application, TYVM.

And I am not trying to "take away" any actual progress (e.g. improved ASR, MT). I'm only taking issue with overclaims.

>>

[Screencap, same article: …]
Read 12 tweets
Read the recent Vox article about effective altruism ("EA") and longtermism and I'm once again struck by how *obvious* it is that these folks are utterly failing at ceding any power & how completely mismatched "optimization" is from the goals of doing actual good in the world.
>>
Just a few random excerpts, because it was so painful to read...

>>
"Oh noes! We have too much money, and not enough actual need in today's world."

First: This is such an obvious way in which insisting on only funding the MOST effective things is going to fail. (Assuming that is even knowable.)

>>

[Screencap reading: "EA…"]
Read 19 tweets
Some reflections on media coverage of tech/science/research. It seems to me that there are broadly speaking two separate paths of origin for these stories: In one, the journalist sees something that they think the public should be informed of, and digs into the scholarship.

>>
In the other, the researchers have something they want to draw the world's attention to. But there are two subcases here:

Subcase 2a: Researchers (usually in academia) who see a need for the public to be informed, either acutely (ppl need this info NOW) or long-term (science literacy).
>>
Subcase 2b: PR orgs (usually in industry) want the public to know about their cool research, because it serves as positive marketing for a company, halo effect of competence, etc etc.

>>
Read 13 tweets
