Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack!

blog.google/technology/ai/…

#MathyMath #AIHype
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!

There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".

>> Screencap: "AI is the most profound technology we are w
And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how?

#AIHype #InAweOfScale

>> Screencap: "Since then we’ve continued to make invest
And then a few glowing paragraphs about "Bard", which seems to be the direct #ChatGPT competitor, built off of LaMDA. Note the selling point of broad topic coverage: that is, leaning into the way in which apparent fluency on many topics provokes unearned trust.

>> Screencap: "Bard seeks to combine the breadth of the wo
Let's sit with that prev quote a bit longer. No, the web is not "the world's knowledge" nor does the info on the web represent the "breadth" of same. Also, large language models are neither intelligent nor creative.

>>
Next some reassurance that they're using the lightweight version, so that when millions of people use it every day, it's a smaller amount of electricity (~ carbon footprint) multiplied by millions. Okay, better than the heavyweight version, but just how much carbon, Sundar?

>>
"High bar for quality, safety and groundedness" in the prev quote links to this page:

ai.googleblog.com/2022/01/lamda-…

Reminder: when what you return is a link rather than synthetic text, the state of the art for providing the source of the information is 100%.

>>
Finally, we get Sundar/Google promising exactly what @chirag_shah and I warned against in our paper "Situating Search" (CHIIR 2022): It is harmful to human sense making, information literacy and learning for the computer to do this distilling work, even when it's not wrong.

>> Screencap: "One of the most exciting opportunities is h
>> Screencap: "AI can be helpful in these moments, synthes
Why aren't chatbots good replacements for search engines? See this thread:


More from @emilymbender

Feb 6
"We come to bury ChatGPT, not to praise it." Excellent piece by @danmcquillan

danmcquillan.org/chatgpt.html

I suggest you read the whole thing, but some pull quotes:

>>
"ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan

>>
"The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated."

-- @danmcquillan

>>
Jan 9
In the context of the Koko/GPT-3 trainwreck I'm reminded of @mathbabedotorg 's book _The Shame Machine_ penguinrandomhouse.com/books/606203/t…

>>
I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics.

>>
It seems that part of the #BigData #mathymath #ML paradigm is that people who haven't had relevant training in research ethics feel entitled to run experiments involving human subjects --- y'know, computer scientists bumbling around thinking they have the solutions to everything. >>
Dec 27, 2022
There's a certain kind of techbro who thinks it's a knock-down argument to say "Well, you haven't built anything". As if the only people whose expertise counts are those close to the machine. I'm reminded (again) of @timnitGebru 's wise comments on "the hierarchy of knowledge". >>
I've been pondering some recently about where that hierarchy comes from. It's surely reinforced by the way that $$ (both commercial and, sadly, federal research funds) tends to flow --- and people mistaking VCs, for example, as wise decision makers.

>>
But I also think that some of it has roots in the way different subjects are taught. Math & CS are both (frequently) taught in very gate-keepy ways (think weeder classes) and also students are evaluated with very cut & dried exams.

>>
Dec 24, 2022
Trying out You.com because people are excited about their chat bot. First observation: their disclaimer. Here's this thing we're putting up for everyone to use while also knowing (and saying) that it actually doesn't work.

>> Screencap from You.com. Under the box that says "Ask me
Second observation: the footnotes, allegedly giving the source of the information provided in chatbot style, are difficult to interpret. How much of that paragraph is actually sourced from the relevant page? Where does the other "info" come from?

>> Screencap of YouChat's response to "how do I avoid gett
A few of the queries I tried returned paragraphs with no footnotes at all.

>>
Dec 24, 2022
Chatbots are not a good replacement for search engines

iai.tv/articles/all-k…
Chatbots are not a good UI design for information access needs

technologyreview.com/2022/03/29/104…
Chatbots-as-search is an idea based on optimizing for convenience. But convenience is often at odds with what we need to be doing as we access and assess information.

washington.edu/news/2022/03/1…
Dec 14, 2022
We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers. So, I guess we need a thread on why this is a bad idea:

>>
1- The writing is part of the doing of science. Yes, even the related work section. I tell my students: your job there is to show how your work is building on what has gone before. This requires understanding what has gone before and reasoning about the difference.

>>
The result is a short summary for others to read that you, the author, vouch for as accurate. In general, the practice of writing these sections in #NLProc (and I'm guessing CS generally) is pretty terrible. But off-loading this to text synthesizers is to make it worse.

>>
