I appreciated the chance to have my say in this article by @willknight, but I need to push back on a couple of things:

wired.com/story/openai-c…

>>

#ChatGPT #LLM #MathyMath
@willknight The first is somewhat subtle. Saying this ability has been "unlocked" paints a picture where there is a pathway to some "AI" and what technologists are doing is figuring out how to follow that path (with LMs, no less!). Sci-fi movies are not, in fact, documentaries from the future. >> [Screenshot from the linked article]
@willknight Far more problematic is the closing quote, wherein Knight returns to the interviewee he opened with (CEO of a coding tools company) and platforms her opinions about "AI" therapists.

>> [Screencaps of quotes from Reddy]
@willknight A tech CEO is not the source to interview about whether chatbots could be effective therapists. What you need is someone who studies such therapy AND understands that the chatbot has no actual understanding. Then you could get an accurate appraisal. My guess: *shudder*

>>
Anyway, longer version of what I said to Will:

OpenAI was more cautious about it than Meta was with Galactica, but if you look at the examples in their blog post announcing ChatGPT, they are clearly suggesting that it should be used to answer questions.

>>
Furthermore, they situate it as "the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems." --- as if this were an "AI system" (with all that suggests) rather than a text-synthesis machine.

>>
Re: difference from other chatbots:

The only possible difference I see is that the training regimen they developed led to a system that might seem more trustworthy, despite still being completely unsuited to the use cases they are (implicitly) suggesting.
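To make that concrete: OpenAI's announcement says ChatGPT was trained with reinforcement learning from human feedback (RLHF). Below is a toy sketch, entirely my own construction and not OpenAI's pipeline: best-of-n reranking with a hand-written stand-in "reward model" (real reward models are trained networks). It shows what optimizing for rater approval looks like:

```python
# Hypothetical toy, not OpenAI's code: a stand-in "reward model" scores
# candidate outputs, and the highest-scoring candidate wins (best-of-n).
CANDIDATES = [
    "I'm not sure; please check a reliable source.",
    "Certainly! The answer is 42, as is well established.",
    "It might be 42, but I could be wrong.",
]

def toy_reward(text: str) -> float:
    """Score how much the text resembles outputs human raters liked."""
    score = 0.0
    if "Certainly" in text:
        score += 2.0   # confident, fluent tone tends to rate well
    if "not sure" in text or "could be wrong" in text:
        score -= 1.0   # hedging tends to rate poorly
    return score

def best_of_n(candidates: list[str]) -> str:
    # Note what gets optimized: resemblance to highly-rated outputs.
    # No step here checks the chosen text against the world.
    return max(candidates, key=toy_reward)

print(best_of_n(CANDIDATES))  # -> the confident answer, true or not
```

That's the sense in which such training can make a system *seem* more trustworthy without making it any more grounded.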

>>
They give this disclaimer: "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging [...]"

>>
It's frustrating that they don't seem to consider that a language model is simply not fit for this purpose. This isn't something that can be fixed, challenging or not. It's a fundamental design flaw.
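To spell that out: a language model is trained with the standard autoregressive objective (textbook formulation; nothing here is specific to ChatGPT), which only rewards assigning high probability to the next token given its context:

```latex
% Standard autoregressive language-modeling objective:
% maximize the log-likelihood of each training token given its context.
\max_{\theta} \sum_{t=1}^{T} \log P_{\theta}(w_t \mid w_1, \ldots, w_{t-1})
```

Every term scores the plausibility of a next word given the preceding words; no term compares the output to the world. "Plausible-sounding but incorrect" is what this objective optimizes for, by construction.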

>>
And I see that the link at the top of this thread is broken (copy paste error on my part). Here is the article:

wired.com/story/openai-c…

More from @emilymbender

Dec 1
Just so everyone is clear: ChatGPT is still just a language model: just a text synthesis machine/random BS generator. Its training has honed the form of that BS a bit further, including training to avoid things that *look like* certain topics, but there's still no there there. [Screencap: the "Limitations" section of the ChatGPT announcement]
That "Limitations" section has it wrong though. ChatGPT generates strings based on combinations of words from its training data. When it sometimes appears to say things that are correct and sensible when a human makes sense of them, that's only by chance.

>>
Also, the link under "depends on what the model knows" in that screencap points to the "AI Alignment Forum", which looks like one of the message boards of the EA/longtermist cult. For more on what that is and the damage it's doing, see @timnitgebru's:

wired.com/story/effectiv…
Nov 28
🪜 Building taller and taller ladders won't get you to the moon -- ?
🏃‍♀️ Running faster doesn't get you closer to teleportation -- me
⏱️ "dramatically improving the precision or efficiency of clock technology does not lead to a time travel device" -- @fchollet
@fchollet All helpful metaphors, I think, for explaining why it's foolish to believe that deep learning (useful as it may be) is a path towards what @fchollet calls "cognitive autonomy".

[I couldn't quickly turn up the source for the ladder one, and would be grateful for leads.]

>>
@fchollet Somehow, the current conversation & economy around #AI have left us in a place where the people who claim the opposite don't carry the burden of proof and/or try to discharge it with cherry-picked examples.
Nov 16
Facebook (sorry: Meta) AI: Check out our "AI" that lets you access all of humanity's knowledge.
Also Facebook AI: Be careful though, it just makes shit up.

This isn't even "they were so busy asking if they could" -- but rather they failed to spend 5 minutes asking if they should.
>> [Screencaps from the Galactica site]
Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, from a social media company. Fortunately, @chirag_shah and I already wrote the paper laying that all out:

dl.acm.org/doi/10.1145/34…

>>
Coverage in the popular press, plus a general-public-facing Q&A about our paper:

technologyreview.com/2022/03/29/104…

washington.edu/news/2022/03/1…

>>
Oct 29
Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician".
vice.com/en/article/jgp…

/1
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.

/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.

/3
Oct 28
I guess it's a milestone for "AI" startups when they get their puff pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...

…mail-com.offcampus.lib.washington.edu/business/rob-m…

>>
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except the reporter had talked to me so I was (increasingly morbidly) curious how my words were being used.

>>
The second paragraph (and several others) is actually the output of their LLM. This is flagged in the subhead and in the third paragraph. I still think it's terrible journalistic practice.

>> Screencap: "Before Aid...Screecap: "could becom...
Oct 16
Hi folks -- time for another #AIhype take-down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:

fiercebiotech.com/medtech/ai-spo…

/1 [Screencap of the article's headline]
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).

/2
Let's pause to once again note that the use of "AI" in this way suggests that "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.

/3
