Yesterday, I ended up in a debate where the position was "algorithmic bias is a data problem".

I thought this had already been well refuted within our research community but clearly not.

So, to say it yet again -- it is not just the data. The model matters.

1/n
We show this in our work on compression.

Pruning and quantizing deep neural networks amplify algorithmic bias.

arxiv.org/abs/2010.03058 and arxiv.org/abs/1911.05248
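
As a concrete illustration (a minimal sketch, not the protocol from these papers): magnitude-prune a trained network and compare per-class accuracy against the dense baseline. `model` and `test_loader` are assumed placeholders, and the 90% sparsity level is purely illustrative.

```python
# Minimal sketch: per-class accuracy of a dense model vs. a magnitude-pruned
# copy. `model` and `test_loader` are assumed to exist; the sparsity level
# is illustrative, not taken from the papers above.
import copy
import torch
import torch.nn.utils.prune as prune

def per_class_accuracy(model, loader, num_classes, device="cpu"):
    """Accuracy broken down by class label."""
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            preds = model(x.to(device)).argmax(dim=1).cpu()
            for c in range(num_classes):
                mask = y == c
                total[c] += mask.sum()
                correct[c] += (preds[mask] == c).sum()
    return correct / total.clamp(min=1)

def prune_model(model, sparsity=0.9):
    """Global L1 magnitude pruning over all conv/linear weights."""
    pruned = copy.deepcopy(model)
    params = [(m, "weight") for m in pruned.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)
    return pruned

# The gap between these two vectors is what "the model matters" looks like:
# top-1 accuracy may barely move while a few classes lose badly.
acc_dense = per_class_accuracy(model, test_loader, num_classes=10)
acc_sparse = per_class_accuracy(prune_model(model), test_loader, num_classes=10)
print("largest per-class drop:", (acc_dense - acc_sparse).max().item())
```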
Work on memorization and variance of gradients (VoG) shows that hard examples are learnt later in training, and that learning rates impact what is learnt.

bit.ly/2N9mW2r, arxiv.org/abs/2008.11600

So, early stopping disproportionately impacts certain examples.
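
Here is a rough sketch of a VoG-style difficulty score, assuming you have saved a list of model `checkpoints` during training: take the gradient of the true-class logit with respect to the input at each checkpoint and measure its variance across checkpoints. Hard, late-learnt examples tend to score high, which is exactly why cutting training short hits them hardest.

```python
# Hedged sketch of a VoG-style difficulty score: variance, across training
# checkpoints, of the input gradient for the true class. `checkpoints` is
# an assumed list of model snapshots saved during training.
import torch

def vog_score(checkpoints, x, y):
    """Higher variance across checkpoints ~ harder, later-learnt example."""
    grads = []
    for model in checkpoints:
        model.eval()
        x_req = x.clone().requires_grad_(True)
        logit = model(x_req.unsqueeze(0))[0, y]   # pre-softmax true-class score
        grad, = torch.autograd.grad(logit, x_req)
        grads.append(grad)
    g = torch.stack(grads)                        # (num_checkpoints, *x.shape)
    return g.var(dim=0, unbiased=False).mean().item()
```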
Models trained with differential privacy guarantees exhibit disparate impact on model accuracy.

arxiv.org/pdf/1905.12101… and
bit.ly/3dgOfCs
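
To see how the disparate impact arises mechanically, here is a hedged sketch using Opacus's DP-SGD wrapper (argument names may differ across versions; `model`, `optimizer`, and `train_loader` are assumed placeholders): per-sample gradient clipping plus noise is applied uniformly, which tends to cost underrepresented groups the most accuracy.

```python
# Hedged sketch: train the same architecture with and without DP-SGD
# (Opacus) and compare per-group accuracy afterwards. `model`, `optimizer`,
# and `train_loader` are assumed placeholders.
from opacus import PrivacyEngine

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,   # more noise -> stronger privacy, larger disparity
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)
# ... train as usual, then evaluate per group/class as in the pruning
# sketch above: the clipping + noise typically costs the rarest
# subgroups the most accuracy.
```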
One of the reasons the model matters is that notions of fairness often coincide with how underrepresented features are treated.

Treatment of the long tail appears to depend on many factors, including memorization (bit.ly/3qnru3v), capacity, and objective.
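
One way to probe this in your own setup (a sketch, assuming you have training labels `train_labels` and a per-class accuracy vector `class_acc`): bucket classes by training-set frequency and compare tail vs. head accuracy across architectures, capacities, or objectives.

```python
# Sketch: bucket classes by training-set frequency and report mean accuracy
# per bucket. `train_labels` and `class_acc` are assumed inputs; how the
# tail bucket behaves shifts with capacity, objective, and training choices.
import numpy as np

def head_vs_tail(train_labels, class_acc, tail_frac=0.2):
    """Mean accuracy on the rarest `tail_frac` of classes vs. the rest."""
    counts = np.bincount(train_labels, minlength=len(class_acc))
    order = np.argsort(counts)             # classes sorted rarest-first
    k = max(1, int(tail_frac * len(order)))
    return class_acc[order[:k]].mean(), class_acc[order[k:]].mean()

tail_acc, head_acc = head_vs_tail(train_labels, class_acc)
print(f"tail acc {tail_acc:.3f} vs head acc {head_acc:.3f}")
```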
So, let's disabuse ourselves of the notion that the model is independent of considerations of algorithmic bias.

This simply isn't the case. Our choices around model architecture, hyper-parameters and objective functions all inform considerations of algorithmic bias.
These were a few quick examples -- there is plenty of important scholarship I have not included in this thread, including work on the relationship between robustness and fairness. Consider this an open invitation to add work that considers these important trade-offs.


