Amit Sharma
Jul 20 · 9 tweets · 6 min read
There is a lot of excitement about causal machine learning, but in what ways exactly can causality help with ML tasks?

In my work, I've seen four: enforcing domain knowledge, invariant regularizers, "counterfactual" augmentation & better framing for fairness & explanation. 🧵👇🏻
1) Enforcing domain knowledge: ML models can learn spurious correlations. Can we avoid this by using causal knowledge from experts?
Rather than eliciting full causal graphs, a more practical approach is to elicit info on key relationships. See the #icml2022 paper on how to enforce them during training: arxiv.org/abs/2111.12490
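To make this concrete, here is a minimal sketch of the general idea (my illustration, not the exact method in the paper): enforce an expert prior that a chosen feature should have a non-negative effect on the prediction by penalizing negative input gradients. The model, feature index, and penalty weight are assumptions.

```python
# Minimal sketch (PyTorch), assuming a tabular model and an expert prior that
# feature `feature_idx` should have a non-negative effect on the output.
# Illustration of the general idea only, not the method in arxiv.org/abs/2111.12490.
import torch

def domain_prior_penalty(model, x, feature_idx):
    """Penalize negative gradients of the prediction w.r.t. a feature
    that domain experts say should have a non-negative effect."""
    x = x.clone().requires_grad_(True)
    y_hat = model(x).sum()
    grads = torch.autograd.grad(y_hat, x, create_graph=True)[0]
    return torch.relu(-grads[:, feature_idx]).mean()

# In the training loop (sketch):
#   loss = task_loss(model(x), y) + lam * domain_prior_penalty(model, x, feature_idx=3)
```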
2) Invariant regularizers: For out-of-distribution generalization, another way is to add regularization constraints.

Causality can help us find the correct constraint for a given dataset. It is also easy to show that no single constraint works everywhere. Algorithm: arxiv.org/abs/2206.07837
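For intuition, one well-known instance of such a constraint is an IRM-style penalty, which asks the classifier to be simultaneously optimal in every training environment. Below is a minimal sketch with assumed names for the model and environment batches; it is not the algorithm from the paper, whose point is precisely that the right constraint depends on the data-generating process.

```python
# Minimal sketch (PyTorch) of one invariance regularizer (IRMv1-style penalty).
# `model` and `env_batches` (a list of (x, y) tensors, one per environment)
# are assumed; this is not the algorithm from arxiv.org/abs/2206.07837.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # Gradient of the risk w.r.t. a dummy scale on the logits; it vanishes
    # when the classifier is already optimal for this environment.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def total_loss(model, env_batches, lam=1.0):
    erm, penalty = 0.0, 0.0
    for x, y in env_batches:
        logits = model(x)
        erm = erm + F.cross_entropy(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    return erm + lam * penalty
```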
3) Data augmentation: Causal models can also help generate better augmentations: the trick is to identify and vary only the spurious features, thus breaking the correlational patterns.

This work shows that such samples are optimal for OOD generalization: arxiv.org/abs/2006.07500
And to create such augmentations, we extend the GAN architecture to support a causal graph and provide an algorithm to construct counterfactuals for any image. Here's an example for the CelebA dataset. Paper: arxiv.org/abs/2009.08270
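Roughly, such augmentations can be plugged into training as follows. The `counterfactual(x, attr)` generator here is a hypothetical stand-in for a causal-graph-aware GAN like the one in the paper; all names are assumptions, not the papers' API.

```python
# Minimal sketch (PyTorch): training with counterfactual augmentations.
# `counterfactual(x, attr)` is assumed to change only a spurious attribute
# (e.g., background or hair color) while keeping label-relevant features fixed.
import torch
import torch.nn.functional as F

def augmented_loss(model, counterfactual, x, y, spurious_attr, lam=1.0):
    x_cf = counterfactual(x, spurious_attr)   # same label, spurious feature changed
    logits, logits_cf = model(x), model(x_cf)
    task = F.cross_entropy(logits, y) + F.cross_entropy(logits_cf, y)
    # Consistency term: an image and its counterfactual should get matching
    # outputs, so the classifier cannot lean on the spurious feature.
    consistency = F.mse_loss(logits, logits_cf)
    return task + lam * consistency
```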
4) Fairness: Reasoning with causal graphs helps reveal blind spots in current work.

E.g., the majority of the fairness literature ignores missing data and selection bias (e.g., training only on applicants who were granted the loan). Such models will provably fail when deployed. arxiv.org/abs/2012.11448
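A toy simulation of the selection-bias point (the data-generating process below is made up for illustration, not taken from the paper): approvals depended on an unobserved trait that also affects repayment, so a model trained only on approved applicants learns a distorted relationship and does worse on the full applicant pool.

```python
# Toy illustration (NumPy + scikit-learn) of selection bias in lending data.
# All quantities are synthetic assumptions, not from arxiv.org/abs/2012.11448.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
income = rng.normal(size=n)
savings = rng.normal(size=n)
reliability = rng.normal(size=n)            # unobserved by the model
repay = (income + savings + 2 * reliability + 0.5 * rng.normal(size=n) > 0).astype(int)
# Past approvals depended on income AND (indirectly) on reliability.
approved = income + 2 * reliability + 0.5 * rng.normal(size=n) > 1.0

X = np.column_stack([income, savings])
clf_sel = LogisticRegression().fit(X[approved], repay[approved])  # selection-biased data
clf_all = LogisticRegression().fit(X, repay)                      # hypothetical unbiased data

# Conditioning on approval makes income and reliability dependent, so the
# selection-biased model typically scores noticeably lower on everyone.
print("trained on approved only, tested on everyone:", clf_sel.score(X, repay))
print("trained on everyone,      tested on everyone:", clf_all.score(X, repay))
```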
Bonus: Enhancing privacy. It turns out that predictions from causal models are more differentially private than associational models (need less noise for the same epsilon), and as a result are robust to membership inference attacks. arxiv.org/abs/1909.12732
Joint work with @divyat09 @DashSaloni @emrek @jivatneet @ng_goel @shrutitople and Vineeth Balasubramanian.

If you are at #icml2022 and would like to chat about these topics, please DM.
Overall, using causality does not always mean that we need to switch to a graphical model. Instead, it can already improve existing practices around regularization, augmentation, and responsible AI.

Curious to hear: what other applications have people seen?
