When and why does king - man + woman = queen? In my #ACL2019 paper with @DavidDuvenaud and Graeme Hirst, we explain what conditions need to be satisfied by a training corpus for word analogies to hold in a GloVe or skipgram embedding space. 1/4
In turn, our theory provides: 1. An information-theoretic interpretation of Euclidean distance in skipgram and GloVe embedding spaces. 2. A novel justification for the surprising effectiveness of using addition to compose word vectors. 2/4
3. A formal proof of the intuitive explanation of word analogies, as proposed by Pennington, @RichardSocher, and @chrmanning in the GloVe paper.
Most importantly, we provide empirical evidence in support of our theory, making it much more tenable than past explanations. 3/4
Special thanks to @omerlevy_ and @yoavgo for their helpful comments on an early draft of this paper, as well as @chloepouprom, KP, and @Allen_A_N for their comments on the blog post! 4/4
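As an illustration of the analogy arithmetic the thread refers to, here is a minimal sketch using gensim and a pretrained embedding; the library and the "glove-wiki-gigaword-100" vectors are my own illustrative choices, not something specified in the thread.

```python
# Minimal sketch: king - man + woman ≈ queen via nearest-neighbor search in
# a pretrained GloVe space (gensim and these vectors are illustrative choices).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # any skipgram/GloVe vectors work

# Vector arithmetic + cosine similarity; holds when the training corpus
# satisfies the conditions laid out in the paper.
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # typically [('queen', <similarity>)] on these vectors
```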
📢The problem in model alignment no one talks about — the need for preference data, which costs $$$ and time!
Enter Kahneman-Tversky Optimization (KTO), which matches or exceeds DPO without paired preferences.
And with it, the largest-ever suite of feedback-aligned LLMs. 🧵
But first, what makes alignment work? Among methods that directly optimize preferences, the majority of gains at the <30B scale come from SFT.
Even a dummy one-step PPO that uses +1/-1 rewards works very well.
DPO is uniquely good at the 30B scale, however. 2/
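For reference, here is a sketch of the DPO objective being compared against (Rafailov et al., 2023), written against per-example log-probabilities that are assumed to be computed elsewhere; the function and variable names are mine, not from the thread.

```python
# Sketch of the DPO loss: negative log-sigmoid of the gap between the policy's
# implicit rewards for the chosen and rejected responses, relative to a
# reference model, scaled by beta.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```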
But *why* do they work?
We find that alignment methods impute a utility function to humans.
These imputed functions have many qualities of those empirically derived by Kahneman & Tversky in their Nobel Prize-winning work on how humans make decisions about uncertain outcomes. 3/
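To make the Kahneman & Tversky connection concrete, here is the classic prospect-theory value function from Tversky & Kahneman (1992), with their estimated parameters; this illustrates the qualities being referred to (loss aversion, diminishing sensitivity), not the exact utility function KTO itself imputes.

```python
# Prospect-theory value of an outcome z relative to a reference point of 0:
# concave in gains, convex in losses, and losses weighted more heavily.
def kt_value(z, alpha=0.88, lam=2.25):
    if z >= 0:
        return z ** alpha           # diminishing sensitivity to gains
    return -lam * (-z) ** alpha     # loss aversion: losses loom larger

print(kt_value(10), kt_value(-10))  # the loss outweighs the equal-sized gain
```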
📢 Models like #ChatGPT are trained on tons of human feedback. But collecting this costs $$$!
That's why we're releasing the Stanford Human Preferences Dataset (🚢SHP), a collection of 385K *naturally occurring* *collective* human preferences over text. huggingface.co/datasets/stanf…
Given some context and two possible responses, SHP preferences reflect the helpfulness of one response over another.
The preferences are over responses to questions/instructions in 18 domains, from cooking to legal advice, drawn from Reddit.
They were inferred from the simple observation that if comment A was written after B but has a higher score despite getting less visibility, then ostensibly A > B.
If A was written before B, then we can't conclude this -- the higher score could have come from more visibility!
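The inference rule above translates directly into a filter over comment pairs. This is a sketch of that rule only; the comment fields used here are hypothetical and not SHP's actual schema.

```python
# Infer (preferred, dispreferred) pairs from comments on a single post:
# A > B only if A was posted *after* B yet has a higher score, since A's
# higher score then cannot be explained by greater visibility.
def infer_preferences(comments):
    pairs = []
    for a in comments:
        for b in comments:
            if a["created_at"] > b["created_at"] and a["score"] > b["score"]:
                pairs.append((a["text"], b["text"]))
    return pairs
```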
Shapley Values are a solution to the credit assignment problem in cooperative games -- if 10 people work together to win some reward, how can it be equitably distributed?
For this reason, they've become a popular kind of explanation in ML. 2/
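For the small cooperative-game setting described above, Shapley values can be computed exactly by averaging each player's marginal contribution over all coalitions; the three-player example game at the end is my own illustration.

```python
# Exact Shapley values for a small game; `value` maps a coalition (a set of
# players) to its payoff.
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                # weight = probability that p joins exactly this coalition
                # in a uniformly random ordering of the n players
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                shapley[p] += weight * (value(s | {p}) - value(s))
    return shapley

# Example: any pair (or more) of 3 players wins a reward of 1;
# by symmetry each player is credited 1/3.
print(shapley_values(["a", "b", "c"], lambda s: 1.0 if len(s) >= 2 else 0.0))
```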
Shapley Values have been used to explain the importance of individual features, embeddings, and neurons.
@GhorbaniAmirata and @james_y_zou have even used them to value training data points.
In NLP though, attention-based explanations and leave-one-out still predominate. 3/
There's been some confusion over what Microsoft's "exclusive license" really means here.
While I can't speak for OpenAI, exclusive licenses generally grant exclusivity *within some specific context*. So no, Microsoft won't be the only one able to use GPT3. That said ...
My guess is that only MS will have access to the underlying model, while everyone else will have to go through the API and be at the whims of whatever terms are set by OpenAI.
This is big -- if you build a product on top of GPT3, your ability to scale will depend on OpenAI's willingness to increase your throughput, which in turn will depend on the terms of their agreement with MS. Not a great situation to be in if you're directly competing with MS.
Background: Large NLP datasets don't come with annotations for protected attributes (e.g., gender). To test for classification bias, one typically annotates a small sample of data (often < 5K examples). WinoBias and WinoGender are great examples of these bias-specific datasets. 2/
Intuitively, the less data we annotate, the less certain we are that our estimate is close to the true bias. But how can we quantify this uncertainty? 3/
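One generic way to quantify that uncertainty is a bootstrap confidence interval on the measured bias gap; this is an illustrative sketch of the general idea, not necessarily the method proposed in this thread, and the bias metric (error-rate gap between groups) is my own simplifying choice.

```python
# Bootstrap CI for the gap in error rate between two groups, estimated from a
# small annotated sample; fewer annotations -> wider interval -> more uncertainty.
import numpy as np

def bootstrap_bias_ci(group, error, n_boot=10_000, alpha=0.05, seed=0):
    """group: 0/1 protected-attribute labels; error: 0/1 per-example errors."""
    rng = np.random.default_rng(seed)
    group, error = np.asarray(group), np.asarray(error)
    gaps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(group), len(group))  # resample with replacement
        g, e = group[idx], error[idx]
        if g.min() == g.max():   # resample missed one group entirely; skip it
            continue
        gaps.append(e[g == 1].mean() - e[g == 0].mean())
    lo, hi = np.quantile(gaps, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```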