Thread by Jerome Pesenti
Last week I raised concerns about using #gpt3 in production because it can easily output toxic language that propagates harmful biases. I thought it was a pretty uncontroversial stance, but the responses ranged from complete misunderstanding of AI to total irresponsibility. 1/13
I am a big fan of @OpenAI’s research. It is often very original in ways that more traditional research labs, like my own team, tend to ignore. While #gpt3 doesn’t bring any algorithmic innovation, the zero-to-few-shot approach as a universal language API is groundbreaking. 2/13
I do take exception to some of @OpenAI’s PR, though. In particular, I don’t understand how we went from #gpt2 being too big a threat to humanity to be released openly to #gpt3 being ready to tweet, support customers or execute shell commands (beta.openai.com). 3/13
Instead, I wish @OpenAI had been more open and less sensationalistic: open-source both models for research, especially on #responsibleAI aspects, while acknowledging that neither is ready for production and discouraging services like thoughts.sushant-kumar.com 4/13
One criticism I got was that I cherry-picked my examples. Ignoring the fact that 100% of the examples touting #gpt3 on Twitter are cherry-picked, greatly inflating its perceived performance, cherry-picking is a valid approach when highlighting harmful outputs. 5/13
This is a challenge with our current AI benchmarks, which do not properly weigh harmful outputs. Even one very bad output in a million in a prod app (e.g., customer service) can be unacceptable, as shown by the deserved backlash my team got for bad machine translations on FB. 6/13
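To make that concrete, here is a minimal sketch of what weighing harmful outputs could look like, assuming each output gets a harm severity score (e.g., from a toxicity classifier). The function names and the weight are hypothetical illustrations, not any existing benchmark:

```python
# Sketch: a severity-weighted evaluation vs. a plain average score.
# All names here are hypothetical, not a real benchmark.

def average_quality(scores):
    """Typical benchmark: mean quality over all outputs."""
    return sum(scores) / len(scores)

def worst_case_penalized(scores, harms, harm_weight=1e6):
    """Weigh harmful outputs so even one severe harm dominates the score.

    scores: per-output quality in [0, 1]
    harms:  per-output harm severity in [0, 1]
            (e.g., from a toxicity classifier)
    """
    mean_quality = sum(scores) / len(scores)
    worst_harm = max(harms)
    # A single severe harm (worst_harm near 1) wipes out the score,
    # mirroring "one very bad output in a million can be unacceptable".
    return mean_quality - harm_weight * worst_harm

# Example: 999,999 good outputs and a single severely harmful one.
scores = [0.9] * 1_000_000
harms = [0.0] * 999_999 + [1.0]
print(average_quality(scores))              # ~0.9: looks fine
print(worst_case_penalized(scores, harms))  # hugely negative: unacceptable
```

The point of the max-based penalty is that averaging hides rare catastrophic failures, while a worst-case term surfaces them.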
In this case, it just took a handful of tries to generate toxic #gpt3 outputs from neutral, not even adversarial, inputs. AI algorithms need to be a lot more robust to be productized. The ease of generating these toxic outputs is what prompted my decision to share them. 7/13
Another criticism was that #gpt3 was just reiterating what humans think. Yes, AI algorithms do learn from humans, but a deliberate choice can be made about which humans they learn from and which voices are amplified. 8/13
Just culling whatever data is available from the web or Reddit is not a responsible training strategy. It will lead to amplifying unchecked biases, some very harmful. And we need objective functions that discourage toxic speech, in the same way we do in real life. 9/13
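One possible shape for such an objective is sketched below, under the assumption of a differentiable toxicity scorer on the model's generations. The names and the lambda_tox weight are illustrative, not anything @OpenAI or FB is known to use:

```python
# Sketch (not any lab's actual setup): an objective that penalizes
# predicted toxicity alongside the usual language-modeling loss.
# `toxicity_score` is a hypothetical differentiable classifier head.

import torch

def training_loss(lm_logits, targets, generated_embeddings,
                  toxicity_score, lambda_tox=5.0):
    """Cross-entropy LM loss plus a penalty on predicted toxicity."""
    lm_loss = torch.nn.functional.cross_entropy(
        lm_logits.view(-1, lm_logits.size(-1)), targets.view(-1))
    # toxicity_score maps generation representations to [0, 1]
    tox_penalty = toxicity_score(generated_embeddings).mean()
    return lm_loss + lambda_tox * tox_penalty

# Dummy-shape example: batch of 2 sequences, length 4, vocab 10.
if __name__ == "__main__":
    lm_logits = torch.randn(2, 4, 10)
    targets = torch.randint(0, 10, (2, 4))
    embeddings = torch.randn(2, 16)
    tox_head = torch.nn.Sequential(torch.nn.Linear(16, 1), torch.nn.Sigmoid())
    print(training_loss(lm_logits, targets, embeddings, tox_head))
```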
Others pointed out that, being at FB, I was badly placed to make this point. FB and my own team do indeed need to do better on this. But FB is also in an arms race against hate speech and misinformation, and AI needs to help rather than make the problem worse spectrum.ieee.org/computing/soft… 10/13
Finally, by far the most disturbing criticism I got was from @paulg, who compared my point to forcing AIs to be politically correct. 11/13
This is a bizarre anthropomorphic view that makes little sense. AIs are not people but algorithms created by humans making deliberate design choices (e.g., model, objective, training data). When AIs make sexist or racist statements, these humans should be held responsible for them. 12/13
We need to make AI developers and researchers responsible for what they create. Claiming “unintended consequences” is what led to the current distrust in the tech industry. We can’t let AI become the poster child of that irresponsibility. We need more #responsibleAI now. 13/13