@60Minutes @timnitGebru Meanwhile, the way that MSFT's Brad Smith is grinning as Stahl describes the horrific things that the Bing chatbot was saying. And then he breezily said: "We fixed it in 24 hours! How many problems are fixable in 24 hours?"
@60Minutes @timnitGebru But the fix wasn't anything internal to their chatbot. Rather, it was a change to the UI, i.e. a change to the ways in which people can interact with the system (limits on the length of conversations).
@60Minutes @timnitGebru And the MSFT folks are just platformed with their unmitigated #AIhype ("It goes out onto the internet, using the power of AI and it reads those links...")
And LOL @ Brad Smith bragging "It has been the case that with each passing day and week we have been able to improve the accuracy of the results" (from BingGPT).
>>
MSFT and OpenAI (and Google with Bard) are doing the equivalent of an oil spill into our information ecosystem. And then trying to get credit for cleaning up bits of the pollution here and there.
I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice that your field was invaded by the Altmans of the world, but sitting by quietly while they spew nonsense is a choice.
>>
Likewise, describing your own work in terms of unmotivated and aspirational analogies to human cognitive abilities is also a choice.
>>
If you feel like it wouldn't be interesting without that window dressing, it's time to take a good hard look at the scientific validity of what you are doing, for sure.
But one thing is for sure: ["AI" is] a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.
The reactions to this thread have been an interesting mix --- mostly folks are in agreement and supportive. However, there are a few patterns in the negative responses that I think are worth summarizing:
Some folks are very upset with my tone and really feel like I should be more gentle with the poor poor billionaire.
¯\_(ツ)_/¯
>>
A variant of this seems to be the assumption that I'm trying to get OpenAI to actually change their ways and that I'd be more likely to succeed if I just talked to them more nicely.
@OpenAI @sama From the get-go this is just gross. They think they are really in the business of developing/shaping "AGI". And they think they are positioned to decide what "benefits all of humanity".
Then @sama invites the reader to imagine that AGI ("if successfully created") is literally magic. Also, what does "turbocharging the economy" mean, if there is already abundance? It has to be more $$$ for the super rich.
Blackman uses Microsoft's own AI Principles to clearly explain why BingGPT shouldn't be released into the world. He's right to praise Microsoft's principles and also spot on in his analysis of how the development of BingGPT violates them.
>>
And, as Blackman argues, this whole episode shows how self-regulation isn't going to suffice. Without regulation providing guardrails, the profit motive incentivizes a race to the bottom --- even in cases of clear risk to longer term reputation (and profit).
The @nytimes, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.
>>
@nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯
>>
@nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.