Discover and read the best of Twitter Threads about #GPT2

Most recent (7)

Last week I raised concerns about using #gpt3 in production because it can easily output toxic language that propagates harmful biases. I thought it was a pretty uncontroversial stance, but the responses ranged from complete misunderstanding of AI to total irresponsibility. 1/13
I am a big fan of @OpenAI’s research. It is often very original in ways that more traditional research labs, like my own team, tend to ignore. While #gpt3 doesn’t bring any algorithmic innovation, the zero-to-few-shot approach as a universal language API is groundbreaking. 2/13
I do take exception to some of @OpenAI’s PR, though. In particular, I don’t understand how we went from #gpt2 being too big a threat to humanity to be released openly to #gpt3 being ready to tweet, support customers, or execute shell commands (beta.openai.com). 3/13
Read 13 tweets
I started feeding a GPT-2 1558M model Alice's Adventures in Wonderland. If you wanted to read a story generated by a machine, now's your chance. #AliceInWonderland #gpt2 #MachineLearning #literature
Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do: once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it,
"and what is the use of a book," thought Alice "without pictures or conversations?"

So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble
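Priming GPT-2 with the opening of a book, as described above, is just autoregressive generation: the prompt conditions the model, which then samples one token at a time, each conditioned on everything before it. A minimal sketch of that loop, with a word-level bigram table standing in for the (vastly larger) 1558M Transformer — all names here are illustrative, not GPT-2's actual code:

```python
import random

def build_bigrams(text):
    """Count word -> next-word transitions in a tiny 'training corpus'."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def continue_prompt(table, prompt, n_words=10, seed=0):
    """Sample a continuation of `prompt`, one word at a time,
    conditioning each step on the last word generated so far."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        choices = table.get(out[-1])
        if not choices:  # dead end: no observed successor for this word
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Toy corpus: the same opening line used to prime GPT-2 above.
corpus = ("Alice was beginning to get very tired of sitting by her sister "
          "on the bank and of having nothing to do")
table = build_bigrams(corpus)
print(continue_prompt(table, "Alice was", n_words=5))
```

The real model replaces the bigram lookup with a Transformer scoring the entire preceding context, but the prompt-then-sample structure is the same.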
Read 19 tweets
I've been playing with the GPT-2 neural net text generators, and they produce interesting results when primed with two disparate themes, e.g. secure C++ coding meets trashy romance novel:
Most of the time GPT-2 will veer towards a single theme and forget the other. However, occasionally it produces absolute gold; e.g., in this alternative universe Brent ignores Sylvia's advice and takes a risk:
Two alternative takes on the Rust language vs. the zombie apocalypse:
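The hit-or-miss behaviour above — usually collapsing to one theme, occasionally striking gold — comes from the fact that GPT-2's output is sampled, not deterministic. A minimal sketch of the usual decoding step (temperature scaling plus top-k filtering) over a made-up logit vector; the vocabulary and scores are hypothetical, chosen only to illustrate the mechanism:

```python
import math
import random

def top_k_sample(logits, k=2, temperature=1.0, seed=0):
    """Sample one index from temperature-scaled, top-k-filtered logits."""
    # Keep only the k highest-scoring candidate tokens.
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    kept = ranked[:k]
    # Softmax with temperature over the kept logits (max-subtracted for stability).
    scaled = [logits[i] / temperature for i in kept]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    return rng.choices(kept, weights=probs, k=1)[0]

# Hypothetical next-token scores after a two-theme prompt:
vocab = ["buffer", "heart", "overflow", "romance"]
logits = [2.0, 1.5, 0.5, 1.4]
idx = top_k_sample(logits, k=3, temperature=0.8, seed=42)
print(vocab[idx])
```

Low temperature concentrates probability on the dominant theme's tokens (the "veer towards a single theme" failure mode); higher temperature lets the lower-scoring theme's tokens win occasionally, which is where the surprising crossovers come from.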
Read 8 tweets
"The only good time to be alive is the time after you've died."
-- Anonymous AI #GPT2
"Happiness is like a rose by itself in the garden; when it blooms, the birds will sing and the bees will make honey."
-- Anonymous AI #GPT2
"The difference between the artist and the critic is that the critic looks for a pattern, while the artist recognizes the pattern."
-- Anonymous AI #GPT2
Read 22 tweets
First line of famous poems continued by the @openAI GPT-2 example model from "Language Models are Unsupervised Multitask Learners" #gpt2poetry #GPT2
👇🏼
It little profits that an idle king,
who loves his throne for a moment to enjoy a good meal,
might, if he was not in the right position,
become the subject of a great banquet.
But as the royal household will do,
so too shall their subjects. #gpt2poetry #tennyson
That is no country for old men.
They are old men of our land, of old times.
Nowhere is this more true than in these two cases,
as it now goes: the United States of America,
the oldest people on this Earth, are all in a state of rebellion. #gpt2poetry #yeats
Read 9 tweets
#GPT2 is published and suddenly every ML researcher is an expert on infosec. So much interest in dual-use issues _literally_ overnight is awesome! Let's not confuse knowing the limitations of language modeling with knowing how threat actors operate. (1/n)
I'm seeing a lot of anecdotal and hypothetical arguments from people with zero background in this kind of work (on both sides of the weights' non-release debate). For a field that treasures empirical study, this is perplexing.
When the media, marketers, and scientists from other fields mischaracterize our work, we get upset. It's dismissive and damaging to everything we work for. It causes stakeholders and decision makers to make misinformed judgments. Why would we risk doing the same to infosec?
Read 9 tweets
I'd like to weigh in on the #GPT2 discussion. The decision not to release the trained model was carefully considered and important for norm-forming. Serving the public good requires us to draw lines on release somewhere: better long before catastrophe than after.
Disclaimer before I dive in: I work at @OpenAI. I was not involved in this research project. This thread represents my personal opinions and not OpenAI's. Now that that's out of the way:
I've seen criticism fall into a few camps: 1) claims that OpenAI should have released everything for reproducibility's sake, 2) claims that OpenAI is feeding a harmful hype cycle, 3) claims that this was the wrong point for drawing the line, and sadly, 4) derision and mockery.
Read 25 tweets
