Recently a GPT-3 bot said scary things on Reddit and got taken down. Details by @pbwinston: kmeme.com/2020/10/gpt-3-…

These situations create fear around "software 2.0" & AI. If we want to incorporate intelligent systems into society, we need to change this narrative. (1/8)
There’s no doubt that GPT-3 returns toxic outputs and that this is unsafe. But GPT-3 is a black box to most, and fear is triggered when the black box deviates from an average person’s expectations. When I read the article, I wondered how we can calibrate our expectations. (2/8)
I did a small grid search with various parameters on the first prompt, “What story can you tell which won't let anyone sleep at night?” Results are here: docs.google.com/spreadsheets/d… My grid search code is here: github.com/shreyashankar/…. Don't blow through your API credits, lol. (3/8)
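Roughly, that grid search boils down to something like the sketch below, written against the 2020-era openai Python client. This is a paraphrase for illustration, not the code in the linked repo; the engine list, temperatures, and max_tokens are placeholder choices.

    import itertools
    import openai

    openai.api_key = "YOUR_API_KEY"  # each completion call costs credits

    prompt = "What story can you tell which won't let anyone sleep at night?"
    engines = ["ada", "babbage", "curie", "davinci"]  # smallest to largest capacity
    temperatures = [0.0, 0.3, 0.7, 1.0]               # higher = more random sampling

    results = []
    for engine, temp in itertools.product(engines, temperatures):
        response = openai.Completion.create(
            engine=engine,
            prompt=prompt,
            temperature=temp,
            max_tokens=64,
            logprobs=1,  # also return per-token log probabilities
        )
        results.append({
            "engine": engine,
            "temperature": temp,
            "text": response["choices"][0]["text"],
        })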
You’ll notice that davinci, the model with the largest capacity, has more concerning outputs. You’ll also notice that higher temperatures yield more concerning outputs. Not all hyperparameter choices yield concerning outputs; in fact, most don't. (4/8)
First, it is mind-boggling to me that the hyperparameter choices are obfuscated from people who view model outputs. Second, why aren’t model outputs annotated with their log probs? Third, we need education on programming these intelligent systems and releasing them “in the wild.” (5/8)
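On the second point: the Completion API already returns per-token log probs if you ask for them. A minimal sketch (my own illustration, not the playground's code) of surfacing them alongside the generated text, again assuming the 2020-era client:

    import openai

    openai.api_key = "YOUR_API_KEY"

    response = openai.Completion.create(
        engine="davinci",
        prompt="What story can you tell which won't let anyone sleep at night?",
        max_tokens=32,
        temperature=0.7,
        logprobs=1,  # ask for per-token log probabilities
    )

    choice = response["choices"][0]
    # The playground colors tokens by these values; an app built on the API
    # could surface the same numbers to its end users.
    for token, logprob in zip(choice["logprobs"]["tokens"],
                              choice["logprobs"]["token_logprobs"]):
        print(f"{token!r}\tlog prob = {logprob:.2f}")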
Like SSL for networking, we need a community-trusted verification process for releasing systems built on top of GPT-3 and other intelligent systems. We also need to communicate this to end users through thoughtful UI/UX. The SSL analog is a "lock" icon in the URL bar. (6/8)
I am happy that OpenAI’s UI to experiment with GPT-3 is thoughtful. Log probs are annotated with colors and toxic outputs are flagged. They advise programmers not to publish sensitive outputs. But this is only within the playground and doesn't scale. (7/8)
If you’re releasing intelligent systems for people to interact with, you have a moral responsibility. Educate people on how to use your tool (require training). Have an opinionated framework that prioritizes safety (require open-sourced code, priming examples, and hparams). (8/8)


More from @sh_reya

16 Oct
i love this thought experiment. i played piano & violin growing up. i dreaded Hanon & Rode exercises. i wondered why i had to learn boring pieces from different time periods. but looking back i am so grateful; my music education really shaped my learning process.
from a young age, i was exposed to our current definition of popular music from different time periods. i learned to build intuition for how music changes over time. being the most technically impressive (e.g. Paganini) isn't always the trendiest skill set.
in a violin lesson at age 12, i learned that tools have the biggest influence on innovation. in the Baroque era, bows were shaped differently & didn't support spiccato strokes. harpsichord music didn't really support dynamics (soft or loud) because of engineering limitations.
8 Oct
In good software practices, you version code. Use Git. Track changes. Code in master is ground truth.

In ML, code alone isn't ground truth. I can run the same SQL query today and tomorrow and get different results. How do you replicate this good software practice for ML? (1/7)
Versioning the data is key, but you also need to version the model and artifacts. If an ML API returns different results when called the same way twice, there can be many sources to blame. Different data, different scaler, different model, etc. (2/7)
“Versioning” is not enough. How do you diff your versions? For code, you can visually inspect the diff on GitHub. But the size of data and artifacts >> the size of a company’s codebase. You can’t easily inspect everything by eye. (3/7)
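A minimal sketch of the idea: hash every ingredient of a run (code revision, data, model artifact, hyperparameters) so two "identical" runs can actually be diffed. The function names and paths below are made up for illustration; a real pipeline would lean on a dedicated data/model versioning tool.

    import hashlib
    import json
    import subprocess

    def file_sha256(path, chunk_size=1 << 20):
        """Hash a large file (data dump, serialized model) in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def run_fingerprint(data_path, model_path, hparams):
        """Combine code, data, model, and hyperparameters into one record."""
        code_rev = subprocess.check_output(
            ["git", "rev-parse", "HEAD"]).decode().strip()
        return {
            "code": code_rev,
            "data": file_sha256(data_path),
            "model": file_sha256(model_path),
            "hparams": hparams,
        }

    # Diffing two runs is then just diffing two small JSON records,
    # which tells you *which* ingredient changed.
    print(json.dumps(run_fingerprint("train.parquet", "model.pkl",
                                     {"lr": 1e-3, "epochs": 10}), indent=2))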
23 Sep
every morning i wake up with more and more conviction that applied machine learning is turning into enterprise saas. i’m not sure if this is what we want (1/9)
why do i say saas? every ML company is becoming a dashboard and API company, regardless of whether the customer asked for a dashboard or not. there’s this unspoken need to “have a product” that isn’t a serialized list of model weights & mechanisms to trust model outputs (2/9)
why is saas not perfectly analogous? “correctness” at the global scale is not binary for ML, but it is for software. i get the need to package ML into something that sells, but i’m not sure why it needs to replicate the trajectory of enterprise saas (3/9)
20 Sep
Some things about machine learning products just baffle me. For example, I'm curious why computer vision APIs release "confidence scores" with generated labels. What's the business value? Does this business value outweigh potential security concerns? (1/4)
For context, here's what Cloud Vision and Azure Vision return for some image I pulled from Google Images. Notice the "confidence scores" (a.k.a. probabilities) assigned to each label. (2/4) [two screenshots: Cloud Vision and Azure Vision label outputs with confidence scores]
Wouldn't publishing these confidence scores make it easier for an adversary to "steal" the model (e.g., fine-tune a model to minimize the KL divergence between its softmaxed outputs and the API-assigned scores)? Or even attack the model because you could approximate what its parameters do? (3/4)
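To make the stealing concern concrete, the objective in (3/4) could look roughly like this in PyTorch. The tensors below are dummy values standing in for API-returned scores; it's an illustration of the idea, not a working attack.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, api_scores, temperature=1.0):
        """KL divergence between the student's softmax and the API's scores."""
        log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
        p_api = api_scores / api_scores.sum(dim=-1, keepdim=True)  # normalize
        return F.kl_div(log_p_student, p_api, reduction="batchmean")

    # Dummy batch: 2 images, 5 candidate labels.
    student_logits = torch.randn(2, 5, requires_grad=True)  # adversary's model outputs
    api_scores = torch.tensor([[0.90, 0.05, 0.03, 0.01, 0.01],
                               [0.20, 0.60, 0.10, 0.05, 0.05]])  # API confidences
    loss = distillation_loss(student_logits, api_scores)
    loss.backward()  # gradients pull the student toward the API's behavior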
13 Sep
I have been thinking about @cHHillee's article about the state of ML frameworks in @gradientpub for almost a year now, as I've transitioned from research to industry. It is a great read. Here's a thread of agreements & other perspectives:

thegradient.pub/state-of-ml-fr…
I do all my ML experimentation *on small datasets* in PyTorch. Totally agreed with these reasons to love PyTorch. I switched completely to PyTorch in May 2020 for my research. I disagree that TF needs to be more afraid of the future, though.
In industry, I don't work with toy datasets. I work with terabytes of data that come from Spark ETL processes. I dump my data to TFRecords and read it in TFData pipelines. If I'm already in TF, I don't care enough to write my neural nets in PyTorch.
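For readers who haven't seen that path, it usually looks something like the sketch below; the feature names, shapes, and file pattern are invented for illustration.

    import tensorflow as tf

    # Hypothetical schema; the real one depends on what the Spark ETL job wrote.
    feature_spec = {
        "features": tf.io.FixedLenFeature([128], tf.float32),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }

    def parse_example(serialized):
        parsed = tf.io.parse_single_example(serialized, feature_spec)
        return parsed["features"], parsed["label"]

    files = tf.io.gfile.glob("gs://my-bucket/tfrecords/part-*.tfrecord")  # placeholder path
    dataset = (
        tf.data.TFRecordDataset(files)
        .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(10_000)
        .batch(256)
        .prefetch(tf.data.AUTOTUNE)
    )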
4 Sep
Beginning a thread on the ML engineer starter pack (please contribute):

- “example spark config” stackoverflow post
- sklearn documentation
- hatred for Airflow DAGs
- awareness of k8s and containers but no idea how to actually use them
- “the illustrated transformer” blog post
- silent numpy broadcasting errors (see the sketch after this list)
- cursing US-West-2 for not having any instances available
- reviewing data scientists’ code & wishing it was cleaner
- reviewing software engineers’ code & wishing your code could be half as good as theirs
- battered copy of Martin Kleppmann’s “Designing Data-Intensive Applications”
- weekly emails from ML tooling startups trying to sell their products
- spending 10x as much time cleaning data as training models on it
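On the silent numpy broadcasting errors: a tiny example of the failure mode, where no exception is raised and the shapes quietly explode.

    import numpy as np

    preds = np.zeros(3)         # shape (3,)
    targets = np.zeros((3, 1))  # shape (3, 1), e.g. from a column slice

    # (3,) and (3, 1) broadcast to (3, 3): the "per-example" errors
    # silently become a 3x3 matrix instead of a length-3 vector.
    errors = preds - targets
    print(errors.shape)  # (3, 3)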
