Nora Belrose
Jun 7, 2023 · 7 tweets · 4 min read
Ever wanted to mindwipe an LLM?

(1/7) Our method, LEAst-squares Concept Erasure (LEACE), provably erases all linearly encoded information about a concept from neural net activations. It does so surgically, inflicting minimal damage to other concepts. 🧵
arxiv.org/abs/2306.03819
(2/7) Concept erasure is an important tool for fairness, letting us prevent features like race or gender from being used by classifiers when that's inappropriate, and for interpretability, letting us study the causal impact of a feature on a model's behavior.
(3/7) We also introduce a procedure called “concept scrubbing,” which applies LEACE to all layers of a deep network simultaneously. We find LLM performance depends heavily on linear part-of-speech information, while erasing a random feature has little to no effect.
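The scrubbing idea can be sketched in a few lines — a toy illustration, not the paper's implementation. A small stack of layers runs forward, and at each layer we fit an eraser on that layer's activations and apply it before they flow onward. For brevity this sketch projects out only the class-mean-difference direction at each layer, a much cruder eraser than LEACE itself; all names here are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_mean_diff_eraser(H, z):
    """Fit a crude linear eraser: the unit direction separating the two
    class means of activations H. (LEACE uses a least-squares-optimal
    affine edit instead; this is a simplified stand-in.)"""
    d = H[z == 1].mean(0) - H[z == 0].mean(0)
    return d / np.linalg.norm(d)

def scrubbed_forward(X, weights, z):
    """Forward pass that erases the concept z at every layer."""
    H = X
    for W in weights:
        H = np.maximum(H @ W, 0.0)        # ReLU layer
        d = fit_mean_diff_eraser(H, z)    # fit an eraser on this layer...
        H = H - np.outer(H @ d, d)        # ...and apply it before moving on
    return H

# Toy data: a binary concept leaks linearly into the input features
n = 2000
z = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 16))
X[:, 0] += 3.0 * z
weights = [rng.normal(size=(16, 16)) / 4.0 for _ in range(3)]

H = scrubbed_forward(X, weights, z)
gap = H[z == 1].mean(0) - H[z == 0].mean(0)   # class means coincide at the output
```

Measuring how much task performance drops under scrubbing (versus scrubbing a random feature) is what lets the paper quantify how much the model relies on linearly encoded part-of-speech information.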
(4/7) We prove that LEACE is the smallest possible linear edit, in the least-squares sense, needed to erase a concept; all previous concept erasure methods have been suboptimal. We also show empirically that it’s less destructive to model performance than previous methods.
(5/7) LEACE has a closed-form solution that fits on a T-shirt. This makes it orders of magnitude faster than popular concept erasure methods like INLP and R-LACE, which require gradient-based optimization. And the solution can be efficiently updated to accommodate new data.
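For concreteness, here's a NumPy sketch of that closed-form construction as I read it from the paper: whiten the features, orthogonally project out the whitened feature–concept cross-covariance, then unwhiten. Treat it as an illustration under my reading, not a substitute for the released implementation; variable names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_leace(X, Z, tol=1e-8):
    """Fit a LEACE-style affine eraser from features X (n, d) and concept
    labels Z (n, k). Returns (mu, P_edit) such that
    erase(x) = x - P_edit @ (x - mu) has zero cross-covariance with Z."""
    n = len(X)
    mu_x = X.mean(0)
    Xc, Zc = X - mu_x, Z - Z.mean(0)
    sigma_xx = Xc.T @ Xc / n          # feature covariance
    sigma_xz = Xc.T @ Zc / n          # feature/concept cross-covariance

    # Whitening map W = sigma_xx^{-1/2} and its pseudo-inverse
    vals, vecs = np.linalg.eigh(sigma_xx)
    V, s = vecs[:, vals > tol], np.sqrt(vals[vals > tol])
    W, W_pinv = (V / s) @ V.T, (V * s) @ V.T

    # Orthogonal projection onto the column space of the whitened cross-covariance
    U, sv, _ = np.linalg.svd(W @ sigma_xz, full_matrices=False)
    U = U[:, sv > tol]

    return mu_x, W_pinv @ (U @ U.T) @ W

# Demo: a binary concept leaks linearly into two coordinates of X
n, d = 4000, 32
z = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * z
X[:, 3] -= 0.5 * z
Z = z[:, None]

mu, P_edit = fit_leace(X, Z)
X_erased = X - (X - mu) @ P_edit.T

# Cross-covariance of the erased features with the concept is numerically zero,
# so no linear probe can recover z better than chance.
cross = (X_erased - X_erased.mean(0)).T @ (Z - Z.mean(0)) / n
```

Everything here is plain matrix algebra on two covariance matrices, which is why fitting is so much cheaper than the gradient-based optimization INLP and R-LACE require; in practice you'd use the released package, which handles the numerics (and streaming updates) carefully.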
(6/7) We’ve released all code needed to reproduce our results at github.com/EleutherAI/con…! You can also `pip install concept-erasure` to get the PyPI package.
(7/7) LEACE wouldn’t be possible without @TheDavidSJ, who proved the theorem that led to this paper. I'd also like to thank our other coauthors @ravfogel @ryandcotterell @EdwardRaffML @BlancheMinerva!

