The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers"
@ZoeSchiffer @CaseyNewton There is no urgency to build "AI". There is no urgency to use "AI". There is no benefit to this race aside from (perceived) short-term profit gains.
>>
@ZoeSchiffer @CaseyNewton It is very telling that when push comes to shove, despite having attracted some very talented, thoughtful, proactive researchers, the tech cos decide they're better off without ethics/responsible AI teams.
>>
@ZoeSchiffer @CaseyNewton Self-regulation was never going to be sufficient, but I believe that internal teams working in concert with external regulation could have been a really beneficial combination.
And they will tell us: You can't possibly regulate effectively anyway, because the tech is moving too fast.
But (channeling @rcalo here): The point of regulation isn't to micromanage specific technologies but rather to establish and protect rights. And those are enduring.
>>
I call on everyone who is close to this tech: we have a job to do here. The tech cos where the $, data and power have accumulated are abandoning even the pretense of "responsible" development, in a race to the bottom.
>>
At the very least, we should be working to educate those around us not to fall for the hype---to never accept "AI" medical advice, legal advice, psychotherapy, etc.
Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole.
>>
Things they aren't telling us: 1) What data it's trained on 2) What the carbon footprint was 3) Architecture 4) Training method
>>
But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9).
A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said:
"One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on.
>>
Since at least 2017 there have been multiple proposals for how to do this documentation, each accompanied by arguments for its importance."
I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice that your field was invaded by the Altmans of the world, but sitting by quietly while they spew nonsense is a choice.
>>
Likewise, describing your own work in terms of unmotivated and aspirational analogies to human cognitive abilities is also a choice.
>>
If you feel like it wouldn't be interesting without that window dressing, it's time to take a good hard look at the scientific validity of what you are doing, for sure.
@60Minutes @timnitGebru Meanwhile, the way that MSFT's Brad Smith is grinning as Stahl describes the horrific things that the Bing chatbot was saying. And then he breezily says: "We fixed it in 24 hours! How many problems are fixable in 24 hours?"
@60Minutes @timnitGebru But the fix wasn't anything internal to their chatbot. Rather, it was a change to the UI, i.e. a change to the ways in which people can interact with the system (limits on the length of conversations).
But one thing is for sure: ["AI" is] a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.
The reactions to this thread have been an interesting mix --- mostly folks are in agreement and supportive. However, there are a few patterns in the negative responses that I think are worth summarizing:
Some folks are very upset with my tone and really feel like I should be more gentle with the poor poor billionaire.
¯\_(ツ)_/¯
>>
A variant of this seems to be the assumption that I'm trying to get OpenAI to actually change their ways and that I'd be more likely to succeed if I just talked to them more nicely.