Blackman uses Microsoft's own AI Principles to clearly explain why BingGPT shouldn't be released into the world. He's right to praise Microsoft's principles and also spot on in his analysis of how the development of BingGPT violates them.
>>
And, as Blackman argues, this whole episode shows how self-regulation isn't going to suffice. Without regulation providing guardrails, the profit motive incentivizes a race to the bottom --- even in cases of clear risk to longer-term reputation (and profit).
>>
There's still a bit of #AIhype in the piece though. This phrasing suggests that BingGPT is somehow an agent acting on the world (one that needs to be controlled) rather than a piece of technology that is not fit for purpose and was insufficiently tested.
>>
That aside, it's a really good piece (and definitely not par for the course for NYT tech coverage). It shows the value of the work done by the FATE team at Microsoft, as well as how problematic it is that their ability to influence corporate decision-making is limited.
@OpenAI @sama From the get-go this is just gross. They think they are really in the business of developing/shaping "AGI". And they think they are positioned to decide what "benefits all of humanity".
Then @sama invites the reader to imagine that AGI ("if successfully created") is literally magic. Also, what does "turbocharging the economy" mean, if there is already abundance? More $$$ for the super-rich, has to be.
The @nytimes, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.
>>
@nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯
>>
@nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.
Hey journalists -- I know your work is extremely hectic and I get it. I understand that you might make plans for something and then have to pivot to an entirely different topic. That's cool.
BUT:
If you ask an expert for their time same day at a specific time, and they say yes, and then you don't reply, even though said expert has made time for you -- that is NOT OK.
Engaging with the media is actually an additional layer of work over everything else that I do (including the work that builds the expertise that you are interviewing me about). I'm willing to do it because I think it's important.
TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models".
That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it.
>>
NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago.
Started listening to an episode about #ChatGPT on one of my favorite podcasts --- great hosts, usually great guests --- and I was floored by how awful it was.
>>
The guest blithely claims that large language models learn language like kids do (and also had really uninformed opinions about child language acquisition) ... and that they end up "understanding" language.
>>
The guest also asserted that the robots.txt "soft standard" is an effective way to prevent pages from being crawled (as if all crawlers respect it) & that surely something similar is already available to block creative content from being appropriated as training data.
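>>

For anyone unfamiliar: robots.txt is just a plain-text file of advisory directives served from a site's root. A minimal sketch (the paths here are illustrative):

    # Nothing enforces these rules: a polite crawler reads them and
    # skips the listed paths; a scraper can simply ignore them.
    User-agent: *
    Disallow: /private/

Compliance is entirely voluntary on the crawler's side --- which is exactly why it can't guarantee anything, for crawling or for training data.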
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!
There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".
>>
And then another instance of "standing in awe of scale". The subtext here is: it's getting bigger so fast --- look at all that progress! But progress towards what, and measured how?