@willknight The first issue is somewhat subtle. Saying this ability has been "unlocked" paints a picture in which there is a pathway to some "AI", and what technologists are doing is figuring out how to follow that path (with LMs, no less!). Sci-fi movies are not, in fact, documentaries from the future. >>
@willknight Far more problematic is the closing quote, wherein Knight returns to the interviewee he opened with (CEO of a coding tools company) and platforms her opinions about "AI" therapists.
>>
@willknight A tech CEO is not the source to interview about whether chatbots could be effective therapists. What you need is someone who studies such therapy AND understands that the chatbot has no actual understanding. Then you could get an accurate appraisal. My guess: *shudder*
>>
Anyway, longer version of what I said to Will:
OpenAI was more cautious in deploying ChatGPT than Meta was with Galactica, but if you look at the examples in their blog post announcing ChatGPT, they are clearly suggesting that it should be used to answer questions.
>>
Furthermore, they situate it as "the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems." --- as if this were an "AI system" (with all that suggests) rather than a text-synthesis machine.
>>
Re difference to other chatbots:
The only possible difference I see is that the training regimen they developed led to a system that might seem more trustworthy, despite still being completely unsuited to the use cases they are (implicitly) suggesting.
>>
They give this disclaimer "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging [...]"
>>
It's frustrating that they don't seem to consider that a language model is not fit for purpose here. This isn't something that can be fixed (even if doing so is challenging). It's a fundamental design flaw.
>>
And I see that the link at the top of this thread is broken (copy-paste error on my part). Here is the article:
Just so everyone is clear: ChatGPT is still just a language model: a text-synthesis machine/random BS generator. Its training has honed the form of that BS a bit further, including training to avoid things that *look like* certain topics, but there's still no there there.
That "Limitations" section has it wrong though. ChatGPT generates strings based on combinations of words from its training data. When it sometimes appears to say things that are correct and sensible when a human makes sense of them, that's only by chance.
>>
Also, the link under "depends on what the model knows" in that screencap points to the "AI Alignment Forum", which looks like one of the message boards from the EA/Longtermist cult. For more on what that is and the damage it's doing, see @timnitgebru's:
🪜 Building taller and taller ladders won't get you to the moon -- ?
🏃‍♀️ Running faster doesn't get you closer to teleportation -- me
⏱️ "dramatically improving the precision or efficiency of clock technology does not lead to a time travel device" -- @fchollet
@fchollet All helpful metaphors, I think, for explaining why it's foolish to believe that deep learning (useful as it may be) is a path towards what @fchollet calls "cognitive autonomy".
[I couldn't quickly turn up the source for the ladder one, and would be grateful for leads.]
>>
@fchollet Somehow, the current conversation & economy around #AI have left us in a place where the people who claim the opposite don't carry the burden of proof, and/or try to discharge it with cherry-picked examples.
Facebook (sorry: Meta) AI: Check out our "AI" that lets you access all of humanity's knowledge.
Also Facebook AI: Be careful though, it just makes shit up.
This isn't even "they were so busy asking if they could, they didn't stop to ask if they should"; rather, they failed to spend 5 minutes asking if they could.
>>
Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, from a social media company. Fortunately, @chirag_shah and I already wrote the paper laying that all out:
Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician". vice.com/en/article/jgp…
/1
Judging from the reporting by @chloexiang at @motherboard, this appears to be some sort of performance art, except that the project is (or purports to be?) interacting with the actual Danish political system.
/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.
/3
I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except that the reporter had talked to me, so I was (increasingly morbidly) curious about how my words were being used.
>>
The second paragraph (along with several others) is actually output from their LLM. This is flagged in the subhead and in the third paragraph; I still think it's terrible journalistic practice.
Hi folks -- time for another #AIhype take-down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).
/2
Let's pause to once again note that the use of "AI" in this way suggests "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.
/3