Discover and read the best of Twitter Threads about #icml2020

There has been some great work on framing AI ethics issues as ultimately about power.

I want to elaborate on *why* power imbalances are a problem. 1/
*Why* power imbalances are a problem:
- those most impacted often have the least power, yet are the ones who identify risks earliest
- those most impacted best understand what interventions are needed
- there is often no motivation for the powerful to change
- power tends to be insulating 2/
The Participatory Approaches to ML workshop at #ICML2020 was fantastic. The organizers highlighted how even many efforts for fairness or ethics further *centralize power*.

3/
Important work is happening in *Participatory ML* and in recognizing that AI ethics is about the *distribution of power*. I want to create a thread linking to some of this work 1/

Don’t ask if artificial intelligence is good or fair, ask how it shifts power. @radical_ai_'s essay in Nature is a great place to start: 2/

Also check out @radical_ai_'s talk from @QueerinAI #NeurIPS2019 on how ML shifts power, and on the questions we should be asking ourselves: 3/

Datasets (particularly benchmarks) are infrastructure: a foundation for other tools & tech, tending to seep into the background, shaped by specific aims, seeming natural from one perspective but jarring from another.
@cephaloponderer @alexhanna @amironesei
arxiv.org/abs/2007.07399
Focusing on the *transparency* of an ML system, without plausible means of changing aspects of that system, is a Pyrrhic victory. *Contestability*, however, allows us to critically engage with the system and to focus on the contingent, historical, and value-laden work behind it.
Let's move beyond "insufficient training data" as the sole explanation for, and "solution" to, discriminatory outcomes.

Gathering more training data from populations which are already extensively surveilled ignores how data-gathering operations can serve as another form of "predatory inclusion".
My first #ICML2020 was different from my n-th #acl2020nlp, and, perhaps because of that, I did try to look for interesting papers that I could relate to but that might still teach me something new!

Papers, in roughly chronological order---each with a short summary :) [1/42]
“How Good is the Bayes Posterior in Deep Neural Networks Really?” (Florian Wenzel/@flwenz, Kevin Roth, @BasVeeling, Jakub Swiatkowski, Linh Tran, @s_mandt, @JasperSnoek, @TimSalimans, @RJenatton, Sebastian Nowozin)

arxiv.org/abs/2002.02405


#ICML2020 [2/42]
[“How Good is the Bayes Posterior in Deep Neural Networks Really?” cont.]

As shown in @andrewgwils’ awesome tutorial, tempering works, probably because of bad priors?

#ICML2020 [3/42]
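For context on what "tempering" means here: instead of sampling weights from the Bayes posterior p(θ|D), you target p(θ|D)^(1/T), and the paper's puzzle is that T < 1 (a "cold posterior") often works better than T = 1. A minimal SGLD-style sketch of the idea, with hypothetical names (params, loss_fn, data), not the paper's code:

```python
# Minimal sketch of cold-posterior sampling via SGLD-style updates.
# Illustrative only: params, loss_fn, and data are hypothetical names.
import torch

def sgld_step(params, loss_fn, data, lr=1e-4, T=0.1, prior_std=1.0):
    """One update targeting exp(-U(theta)/T), U = NLL + Gaussian prior energy."""
    nll = loss_fn(params, data)                         # negative log-likelihood
    prior = (params ** 2).sum() / (2 * prior_std ** 2)  # Gaussian prior energy
    energy = (nll + prior) / T                          # T < 1 => "cold" posterior
    grad, = torch.autograd.grad(energy, params)
    noise = torch.randn_like(params) * (2 * lr) ** 0.5  # SGLD injected noise
    return (params - lr * grad + noise).detach().requires_grad_()
```

Dividing the energy by T < 1 sharpens the distribution toward the MAP solution; if that helps, one plausible culprit (per the tweet above) is a misspecified prior.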
Power imbalances in Machine Learning. The technical community has control over:
- what data is collected
- what data is used
- how much to reveal about training dataset
- choosing models
- interpreting outputs
- assessment & verification
- deployment of models @JamelleWD #PAML2020
Beyond Fairness & Ethics: Towards Agency & Shifting Power by @JamelleWD of @Data4BlackLives

Watch here: slideslive.com/38930952/beyon… #ICML2020 #PAML2020
Reclaim data narratives with intention

Ex: covid-19 racial disparities

False, harmful narrative: due to genetics or Black people being uncareful or unhealthy

Reclaimed, accurate narrative: racism, NOT race, is the root: unequal access, lower quality of care, structural barriers.
Want to generate black box explanations that are more stable and are robust to distribution shifts? Our latest #ICML2020 paper provides a generic framework that can be used to generate robust local/global linear/rule-based explanations.
Paper: proceedings.icml.cc/static/paper_f…. Thread ↓
Many existing explanation techniques are highly sensitive even to small changes in the data. This results in: (i) incorrect and unstable explanations; (ii) explanations of the same model that differ depending on the dataset used to construct them.
To address these shortcomings, we propose a framework based on adversarial training: we optimize a minimax objective that constructs explanations with the highest fidelity over a set of possible distribution shifts.
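As a rough sketch of what that minimax objective can look like (my paraphrase under assumed notation, not the authors' released code): the inner maximization finds the shift where the current linear explanation is least faithful to the black box f, and the outer step fits the explanation against that worst case.

```python
# Sketch: fit a linear explanation g(x) = w.x + b minimizing the *worst-case*
# fidelity loss to a black box f over candidate distribution shifts.
# Hypothetical interface: f maps an (n, d) array to n predictions;
# `shifts` is a list of perturbation functions X -> X'.
import numpy as np

def fit_robust_linear_explanation(f, X, shifts, steps=500, lr=0.05):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        # Inner max: evaluate the fidelity loss under each candidate shift.
        losses = [(np.mean((Xs @ w + b - f(Xs)) ** 2), Xs)
                  for Xs in (shift(X) for shift in shifts)]
        _, X_worst = max(losses, key=lambda t: t[0])
        # Outer min: gradient step on the worst-case squared fidelity loss.
        resid = X_worst @ w + b - f(X_worst)
        w -= lr * 2 * X_worst.T @ resid / len(X_worst)
        b -= lr * 2 * resid.mean()
    return w, b
```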
Compared to virtual #ICLR2020, I found that virtual #ICML2020 lacks a few features.

1. Papers are scheduled for 2 days instead of 1. I prefer to collect all papers of interest for the day and then attend only those. Now I need to keep in mind whether I attended a poster previously.

1/n
2. Videos are 10-15 minutes long. This forces me to *really* want to attend the poster. Having a tl;dr version (<5 min) or short/long videos as in ICLR would be preferable.
3. One hour per poster session is often not enough for the amount of content. In session 1, I have 5+ posters at 15 minutes each; it physically doesn't fit into the slot.
Excited to share our #ICML2020 paper on fair generative modeling! We present a scalable approach for mitigating dataset bias in models learned on various datasets without explicit annotations. 👇

w/ @adityagrover_ @_smileyball Trisha Singh @StefanoErmon
arxiv.org/abs/1910.12008
Generative models can be trained on large, unlabeled data sources.

If we naively mix all datasets, a trained model will propagate or amplify the bias in this mixture. On the other hand, labeling all attributes of interest may be impossible or super expensive. (2/7)
We use one dataset as a reference (chosen using external prior knowledge) and let all other datasets be biased w.r.t. this reference. Our idea is to construct an *importance weighted* dataset for learning, where the weights are the density ratio between the biased and reference distributions.
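A minimal sketch of the density-ratio step, assuming the common trick of training a binary classifier between the two pools (function and variable names here are mine, not the paper's):

```python
# Estimate importance weights p_ref(x)/p_bias(x) with a probabilistic
# classifier c(x) = P(x came from the reference pool | x).
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(x_ref, x_bias):
    X = np.vstack([x_ref, x_bias])
    y = np.concatenate([np.ones(len(x_ref)), np.zeros(len(x_bias))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = np.clip(clf.predict_proba(x_bias)[:, 1], 1e-6, 1 - 1e-6)
    # Bayes' rule: p_ref/p_bias = (c/(1-c)) * (n_bias/n_ref).
    return (len(x_bias) / len(x_ref)) * p / (1.0 - p)
```

The resulting weights then multiply each biased example's loss when training the generative model, down-weighting over-represented groups.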
Humans have a remarkable understanding of which states afford which behaviors. We provide a framework that enables RL agents to represent and reason about their environment through the lens of affordances bit.ly/31qjlSv

#ICML2020 paper from my internship @DeepMind 1/4
In this work, we develop a theory of affordances for agents who learn and plan in Markov Decision Processes. Affordances play a dual role. On one hand, they allow faster planning. On the other hand, they facilitate more efficient learning of transition models from data. 2/4
We establish these properties through theoretical results as well as illustrative examples. We also propose an approach to learn affordances from data and use it to estimate partial models that are simpler and generalize better. 3/4
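To make the affordance idea concrete, here is a toy sketch (my illustration of the general idea, not the paper's exact formulation): treat an affordance as a predicate over (state, action) pairs and let value iteration back up only over afforded actions, shrinking the effective branching factor.

```python
# Toy affordance-aware value iteration over a tabular MDP.
# Hypothetical interface: P[s][a] is a list of (next_state, prob) pairs,
# R[s][a] is the expected reward, afforded(s, a) -> bool is the affordance.
def value_iteration(states, actions, P, R, afforded, gamma=0.9, iters=100):
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            q = [R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions if afforded(s, a)]   # only afforded actions
            V[s] = max(q) if q else 0.0
    return V
```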
Weakly weekly quiz, new installment! I'm assuming everyone is very busy with either the #FOCS2020 deadline, the #ICML2020 reviews, or the current global health crisis and juggling 5 toddlers & 7 Zoom online classes, so I'll keep it short.

Adaptivity 🗘 and testing 🔎.

1/7
Recall testing 🔎: you have a notion of distance d(x,y), a parameter ε, and "access" to an object x (function f, graph G, proba. distribution p...); and in mind, a property ℘. Goal: does x have ℘, or do we have d(x,y) > ε for all y in ℘?
[x has ℘, or is ε-far from it]

2/7
Now, "adaptivity"? Well, to decide the above question, you have to access your object x by making 'queries' (fct eval, edge lookups, samples, etc.). If the queries are decided in advance: non-adaptive algo; if queries depends on the answers to previous ones: adaptive algo.

3/7
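Here's a toy contrast of the two query models in code (hypothetical access model: evaluating a 0/1-valued function f at indices). The non-adaptive tester commits to all its queries up front; the adaptive one lets each query depend on earlier answers:

```python
import random

def nonadaptive_tester(f, n, q):
    points = random.sample(range(n), q)   # all queries fixed in advance
    return [f(i) for i in points]

def adaptive_tester(f, n, q):
    # Binary-search flavor: each next query depends on previous answers.
    answers, lo, hi = [], 0, n - 1
    for _ in range(q):
        mid = (lo + hi) // 2
        ans = f(mid)
        answers.append(ans)
        lo, hi = (mid + 1, hi) if ans == 0 else (lo, mid - 1)
    return answers
```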
