Discover and read the best of Twitter Threads about #neurips2021

Algorithmic mechanism design (e.g., ad-tech auctions such as Google's, multi-sided platforms such as Uber) contains an inherent contradiction:
- theory of preserving autonomy & rationality of participants
- reality of opaquely generating & exploiting information asymmetries
@jakusg 1/
The point of an auction is to incentivize truthful disclosures of preference & valuation; by contrast, Google, acting as an auction platform & multi-sided mechanism designer, is effectively profiling market actors so that it can extract more surplus for itself. 2/ Google's technical designs and business strategies defy assu…
In privately controlled & highly automated digital platforms, algorithmic market-like mechanisms simulate how a market might behave without necessarily including any of the features necessary to constitute a market, such as freedom to deal or knowable information rules. 3/ Using platformed applications of mechanism design as case st…
Read 7 tweets
We'll be live-tweeting from our launch at #NeurIPS2021 today

Follow along here!
@ericschmidt – Former CEO of Google, technologist, entrepreneur, and philanthropist:

“Nightingale is incredibly important. It is the first large database of images that is being organized around healthcare. We saw how well this worked with ImageNet in 2011.”
“I believe, with the Nightingale team, this repository of never-before-seen images, tied to outcomes with labeled data, will lead to revolutionary new approaches.” - @ericschmidt
Read 39 tweets
How can we think about building systems not for individual end users, but for groups and communities? So much of computer science is limited by the assumption that there is an individual end-user who should be the beneficiary of what we build.
- @marylgray keynote #NeurIPS2021 1/
ML gravitates to large scale data, even though it has rarely had a robust account of where that data comes from & under what conditions, and is almost always deeply disconnected from the social relationships that produced it. 2/
Data is power. Because data has become so powerful, we must transfer the tools of data collection, aggregation, & sharing from engineers to the communities in society that carry the risks. 3/
Read 9 tweets
The first-ever #NeurIPS2021 workshop on the Political Economy of Reinforcement Learning Systems starts in just under 12hrs! Come join me, @sociotiose @FrankPasquale @mireillemoret @salome_viljoen_ @natashajaques @jakusg @FinaleDoshi @mlittmancs @math_rachel @jonathanstray ...
...@ivanadusparic @in4dmatics @NCPtarmigan and others for a discussion of how #ReinforcementLearning reshapes societal institutions and disrupts power, money, and political forces. perls-workshop.github.io ...
... P.S. contact me or @sociotiose if you are interested in attending!
Read 4 tweets
Tomorrow at #NeurIPS we’re launching Nightingale Open Science, a computing platform giving researchers access to massive new health imaging datasets

We hope Nightingale will help solve some of the biggest medical problems of our time

What makes these datasets special? (1/8)
Our datasets are curated around medical mysteries—heart attack, cancer metastasis, cardiac arrest, bone aging, Covid-19—where machine learning can be transformative

We designed these datasets with four key principles in mind: (2/8)
1. Each dataset begins with a large collection of medical images: x-rays, ECG waveforms, digital pathology (and more to come)

These rich, high-dimensional signals are too complex for humans to see or fully process—so machine vision can add huge value (3/8)
Read 8 tweets
Attending @NeurIPSConf #NeurIPS2021 today?
Interested in algorithmic reasoning, implicit planning, knowledge transfer or bioinformatics?
We have 3 posters (1 spotlight!) in the poster session (4:30–6pm UK time) you might find interesting; consider stopping by! Details below: 🧵
(1) "Neural Algorithmic Reasoners are Implicit Planners" (Spotlight); with @andreeadeac22, Ognjen Milinković, @pierrelux, @tangjianpku & Mladen Nikolić.

XLVIN is a Value Iteration-based implicit planner that successfully breaks the algorithmic bottleneck & yields gains in the low-data regime.
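
For context, here is the classic algorithm XLVIN embeds, as a minimal sketch: tabular Value Iteration in plain NumPy. The toy MDP at the bottom is illustrative, not from the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Tabular Value Iteration.

    P: (A, S, S) transition probabilities; R: (S, A) rewards.
    Returns optimal state values V and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = R[s, a] + gamma * E_{s' ~ P}[V(s')]
        Q = R + gamma * np.einsum("asx,x->sa", P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(2, 4))        # 2 actions, 4 states
V, policy = value_iteration(P, rng.normal(size=(4, 2)))
```

XLVIN's contribution, per the thread, is executing this style of planning implicitly, via a neural executor aligned with VI, so no explicit P or R is needed; the snippet is only the explicit algorithm it mirrors.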
(2) "How to transfer algorithmic reasoning knowledge to learn new algorithms?"; with Louis-Pascal Xhonneux, @andreeadeac22 & @tangjianpku.

Studying how to successfully transfer algorithmic reasoning knowledge when intermediate supervision traces are missing.
Read 4 tweets
(1/6) Excited to share our #NVIDIAAI #NeurIPS2021 paper “EditGAN: High-Precision Semantic Image Editing” (nv-tlabs.github.io/editGAN/), achieving an unprecedented level of detail in GAN-based image editing!
w/ @karsten_kreis , @lidaiqing , @seungkim0123 , @abtorralba , @FidlerSanja
(2/6) EditGAN builds on a GAN framework that jointly models images and their semantic segmentation (nv-tlabs.github.io/datasetGAN/). Manually modifying segmentations is easy. This allows us to find editing vectors in latent space that enable high-precision image editing.
(3/6) EditGAN allows us to learn an arbitrary number of editing vectors, which can be directly applied to other images at interactive rates. We show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality.
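
A rough sketch of the editing-vector idea described above, with a stand-in stub generator (not the EditGAN architecture): optimize an offset dw so the segmentation branch matches a user-edited mask while the image stays close to the original.

```python
import torch
import torch.nn as nn

class TinyJointGenerator(nn.Module):
    """Stand-in for a GAN that jointly outputs an image and its segmentation."""
    def __init__(self, latent_dim=64, n_pixels=256, n_classes=4):
        super().__init__()
        self.img_head = nn.Linear(latent_dim, n_pixels * 3)
        self.seg_head = nn.Linear(latent_dim, n_pixels * n_classes)
        self.n_pixels, self.n_classes = n_pixels, n_classes

    def forward(self, w):
        img = self.img_head(w)
        seg = self.seg_head(w).view(-1, self.n_pixels, self.n_classes)
        return img, seg

def find_edit_vector(G, w, target_mask, steps=200, lr=0.05, lam=10.0):
    """Optimize an editing vector dw in latent space."""
    with torch.no_grad():
        img0, _ = G(w)                       # original image, kept fixed
    dw = torch.zeros_like(w, requires_grad=True)
    opt = torch.optim.Adam([dw], lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        img, seg = G(w + dw)
        # match the edited mask; penalize drift of the rest of the image
        loss = ce(seg.flatten(0, 1), target_mask.flatten()) \
             + lam * (img - img0).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return dw.detach()  # per the thread, reusable on other latents

G = TinyJointGenerator()
w = torch.randn(1, 64)
mask = torch.randint(0, 4, (1, 256))          # a hypothetical user edit
dw = find_edit_vector(G, w, mask)
```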
Read 6 tweets
DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning

SSL is a promising technology, but current methods are field-specific. Can we find general algorithms that can be applied to any domain?

🌐: dabs.stanford.edu
📄: arxiv.org/abs/2111.12062

🧵👇 #NeurIPS2021

1/
Self-supervised learning (SSL) algorithms can drastically reduce the need for labeling by pretraining on unlabeled data

But designing SSL methods is hard and can require lots of domain-specific intuition and trial and error

2/
We designed DABS to drive progress in domain-agnostic SSL

Our benchmark addresses three core modeling components in SSL algorithms:

(1) architectures
(2) pretraining objectives
(3) transfer methods

3/
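
To give a flavor of "domain-agnostic", here is a minimal masked-prediction pretraining step that assumes nothing about the input beyond "a sequence of token vectors". The objective is illustrative, not DABS's own baseline.

```python
import torch
import torch.nn as nn

class MaskedPredictor(nn.Module):
    """Domain-agnostic SSL sketch: mask random tokens, reconstruct them."""
    def __init__(self, input_dim, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(input_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.decode = nn.Linear(d_model, input_dim)

    def forward(self, x, mask_frac=0.15):
        # x: (batch, seq, input_dim), tokens from ANY domain
        h = self.embed(x)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_frac
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        recon = self.decode(self.encoder(h))
        return ((recon - x) ** 2)[mask].mean()  # loss on masked positions only

model = MaskedPredictor(input_dim=16)
loss = model(torch.randn(8, 32, 16))  # same code for pixels, waveforms, text...
loss.backward()
```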
Read 13 tweets
One of my favorite parts of grad school is learning about all the awesome work my friends are doing. I thought I'd make a thread of some of it (most of them the first paper of a PhD!) that's coming out this week at #NeurIPS2021. Apologies in advance if I forgot some:
First up: An elegant regularization technique for stabilizing Q-functions by @alexlioralexli: proceedings.neurips.cc/paper/2021/fil…. I really like the idea of Fourier features and it was neat to see them applied to RL. The NTK-based analysis taught me a bunch as well.
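
I haven't reproduced the paper's exact scheme, but the core ingredient, a Fourier feature mapping placed in front of a Q-value head, looks roughly like this:

```python
import numpy as np

def fourier_features(state, B):
    """Map a raw state to Fourier features [cos(2*pi*B@s), sin(2*pi*B@s)].

    B is a (num_features, state_dim) frequency matrix (random, or a fixed
    low-order basis); the smooth features it induces can help stabilize a
    learned Q-function."""
    proj = 2 * np.pi * B @ state
    return np.concatenate([np.cos(proj), np.sin(proj)])

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 4))                   # 4-dim state -> 128 features
phi = fourier_features(rng.normal(size=4), B)  # feed phi to a linear Q-head
```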
Next, a parallelized training procedure for DEQs and their inputs by @SwaminathanGur3: arxiv.org/abs/2111.13236. Full of solid optimization theory leveraged to provide some really impressive empirical results. Implicit models are getting more impressive every day.
Read 7 tweets
1/6 If you are interested in causal inference & machine learning, I am excited to share some of the latest work of the causal artificial intelligence group that will appear @NeurIPSConf this year. We welcome you to stop by and chat with us at the following times… #NeurIPS2021
2/6 Tue 11:30 am (EST) “The Causal-Neural Connection: Expressiveness, Learnability, and Inference”, with Kevin Xia, Kai-Zhan Lee, and Yoshua Bengio. Link: causalai.net/r80.pdf.
3/6 Tue 11:30 am (EST) + Oral Fri 7:40 pm (EST) “Sequential Causal Imitation Learning with Unobserved Confounders”, with @danielkumor & Junzhe Zhang. Link: causalai.net/r76.pdf.
Read 6 tweets
Want to dive into #NeurIPS2021 but don't know where to start?

Here're some ideas! A thread🧵👇
1. "A 3D Generative Model for Structure-Based Drug Design" is one of the multiple papers at NeurIPS about drug discovery using neural networks.

This model generates molecules that bind to a specific protein binding site.

By Shitong Luo et al.

papers.nips.cc/paper/2021/has…
2. "The Emergence of Objectness: Learning Zero-shot Segmentation from Videos" by Runtao Liu et al.

Leveraging clever self-supervision with videos to segment objects without labels.

papers.nips.cc/paper/2021/has…
Read 12 tweets
#NeurIPS2021 spotlight: Optimal policies tend to seek power.

Consider Pac-Man: Dying traps Pac-Man in one state forever, while staying alive lets him do more things. Our theorems show that for this reason, for most reward functions, it’s optimal for Pac-Man to stay alive. 🧵:
We show this formally through *environment symmetries*. In this MDP, the visualized state permutation ϕ shows an embedding of the “left” subgraph into the “right” subgraph. The upshot: Going “right” leads to more options, and more options -> more ways for “right” to be optimal.
We provide the first formal theory of the statistical incentives of optimal policies, which applies to all MDPs with environment symmetries. Besides showing that keeping options open is more likely to be optimal, we also show it yields more power. Thus, “optimal policies tend to seek power.”
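
The claim has an easy toy analogue (my construction, not the paper's): sample random reward functions for a "die vs. stay alive" choice and count how often staying alive is optimal.

```python
import numpy as np

rng = np.random.default_rng(0)
k, trials = 3, 100_000  # staying alive keeps k futures reachable; dying keeps 1
# each sampled reward function assigns i.i.d. uniform rewards to the single
# "dead" state and to the k states reachable by staying alive
r = rng.uniform(size=(trials, k + 1))
stay_optimal = (r[:, 1:].max(axis=1) > r[:, 0]).mean()
print(f"staying alive is optimal for {stay_optimal:.1%} of reward functions")
# ~75% for k=3: more reachable options means more ways to be optimal
```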
Read 5 tweets
Q. What does Noether’s theorem tell us about the “geometry of deep learning dynamics”?
A. We derive Noether’s Learning Dynamics and show:
"SGD+momentum+BatchNorm+weight decay" = "RMSProp" due to symmetry breaking!

w/ @KuninDaniel
#NeurIPS2021 Paper: bit.ly/3pAEYdk
1/
Geometry of data & representations has been central in the design of modern deepnets.
e.g., #GeometricDeepLearning arxiv.org/abs/2104.13478 by @mmbronstein, @joanbruna, @TacoCohen, @PetarV_93

What are the geometric design principles for “learning dynamics in parameter space”?
2/
We develop Lagrangian mechanics of learning by modeling it as the motion of a particle in high-dimensional parameter space. Just as in physical dynamics, we can model the trajectory of discrete learning dynamics with continuous-time differential equations.
3/
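
As a concrete (and standard) instance of that modeling move, not necessarily the paper's exact Lagrangian: gradient descent with momentum is commonly idealized as a damped particle, with the training loss acting as a potential V.

```latex
% Damped-particle idealization of SGD with momentum (a standard model,
% shown here only to illustrate the "particle in parameter space" view):
\[
  m\,\ddot{\theta} + \gamma\,\dot{\theta} = -\nabla V(\theta),
  \qquad
  \mathcal{L}(\theta, \dot{\theta}, t)
    = e^{\gamma t/m}\Bigl(\tfrac{m}{2}\,\|\dot{\theta}\|^{2} - V(\theta)\Bigr).
\]
% The Euler--Lagrange equations of \mathcal{L} recover the damped dynamics;
% symmetries of \mathcal{L} (e.g., the scale invariance BatchNorm induces)
% then give exact or broken conservation laws via Noether's theorem.
```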
Read 10 tweets
Presenting SEAL: Self-supervised Embodied Active Learning! #NeurIPS2021

SEAL is a self-supervised framework to close the action-perception loop. It improves perception & action models by just moving in the physical world w/o any human supervision.

devendrachaplot.github.io/projects/seal

1/N
SEAL consists of two phases: Action, where we learn an active exploration policy, and Perception, where we train the Perception Model on data gathered using the exploration policy, with labels obtained via spatio-temporal label propagation.

2/N
Learning Action: We define an intrinsic motivation reward called Gainful Curiosity, which trains the active exploration policy to maximize exploration of objects detected with high confidence.

3/N
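
Going only on the description above, a Gainful-Curiosity-style intrinsic reward might look like this sketch; the threshold and the counting scheme are my guesses, not the paper's definition.

```python
def gainful_curiosity_reward(detections, seen_object_ids, conf_thresh=0.9):
    """Intrinsic reward sketch: +1 for each *new* object the perception
    model detects above a confidence threshold in the current view.

    detections: list of (object_id, confidence) pairs from the perception model.
    seen_object_ids: set of already-explored object ids (mutated in place).
    """
    reward = 0.0
    for obj_id, conf in detections:
        if conf >= conf_thresh and obj_id not in seen_object_ids:
            seen_object_ids.add(obj_id)
            reward += 1.0
    return reward

seen = set()
print(gainful_curiosity_reward([("chair_1", 0.95), ("lamp_2", 0.4)], seen))  # 1.0
```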
Read 6 tweets
Congratulations to the authors of “Deep RL at the Edge of the Statistical Precipice”, a #NeurIPS2021 Outstanding Paper (goo.gle/3dhrhtM)! You can learn more about it in the blog post below, and we look forward to sharing more of our research at this year’s @NeurIPSConf.
Additional congratulations to the authors of "Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research" (goo.gle/3Il694n) for being awarded the @neuripsconf Datasets and Benchmarks Best Paper Award
And, last but not least, another congratulations to Matthew Hoffman and co-authors of “Online Learning for Latent Dirichlet Allocation” (goo.gle/3pmpw4l), recipient of the 2021 @neuripsconf Test of Time Award
Read 3 tweets
Paper awards for @NeurIPSConf have been announced!🎉#NeurIPS2021 blog.neurips.cc/2021/11/30/ann…

Congrats to all the winners, I'll link to the Outstanding Paper Awards 🧵

1. A Universal Law of Robustness via Isoperimetry, by @SebastienBubeck & @geoishard.

arxiv.org/abs/2105.12806 (1/n)
Outstanding Paper Award 2. On the Expressivity of Markov Reward, by @dabelcs, @wwdabney, @aharutyu, @Mark_Ho_, @mlittmancs, Doina Precup, and Satinder Singh.

arxiv.org/abs/2111.00876 (2/n)
Outstanding Paper Award 3. Deep Reinforcement Learning at the Edge of the Statistical Precipice, by @agarwl_, @max_a_schwarzer, @pcastr, @AaronCourville, @marcgbellemare.

arxiv.org/abs/2108.13264 (3/n)
Read 6 tweets
Can we make progress on nonlinear blind source separation by drawing inspiration from the field of causal inference?

Introducing our #NeurIPS2021 paper "Independent mechanism analysis, a new concept?", joint work with @JKugelgen, @VStimper, @bschoelkopf and @MichelBesserve

1/9
We consider the "cocktail party problem", i.e. blind source separation (BSS):

In a room with many speakers, recording devices pick up (nonlinear) mixtures of what each person says. Given the recorded mixtures, we would like to reconstruct (separate) the original sources.

2/9
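
In symbols, using standard BSS notation for the setup just described:

```latex
% Nonlinear blind source separation: independent sources s, an unknown
% smooth invertible mixing f, observed mixtures x.
\[
  x = f(s), \qquad p(s) = \prod_{i=1}^{n} p_i(s_i),
\]
% the goal is to recover s (up to permutation and elementwise invertible
% transformations) from samples of x alone.
```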
This can be formalised through nonlinear ICA, assuming the sources are statistically independent.

Problem: Unsupervised nonlinear ICA is not identifiable ➡️ Estimating independent components does not solve BSS! This can be shown by constructing "spurious" solutions.

3/9
Read 9 tweets
After a hiatus, a new series of blog posts. Do differential geometry and algebraic topology sound too exotic for ML? In recent works, we show that tools from these fields bring a new perspective on graph neural networks.

First post in the series:

towardsdatascience.com/graph-neural-n…
Based on recent works with @CristianBodnar @ffabffrasca @kneppkatt @wangyg85 @pl219_Cambridge @guidomontufar @b_p_chamberlain @migorinova @stefan_webb @emaros96 @aittalam James Rowbottom, Jake Topping, Xiaowen Dong, Francesco Di Giovanni
Cool animation of Cora graph evolution by James Rowbottom
Read 6 tweets
Why spiking neural networks?

There are interesting prospects for engineering applications, but let's not forget that spiking neurons are precise models of biological neurons.

In a paper accepted at #NeurIPS2021 we use back-prop in spiking RNNs to fit cortical data 1/8
Given that

(1) the biological network is strongly recurrent and
(2) some neurons are not recorded,

this is a profound statistical problem for which the best existing formalizations are still based on GLMs fitted with maximum likelihood (MLE, @jpillowtime). 2/8
A limitation of MLE training is that it is conditioned on the recorded data only. So when one simulates the fitted network, it explodes as soon as its activity deviates from the data.

This is why back-prop in spiking RNNs is useful: one can now train the model using simulated spikes! 3/8
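
Backprop through a spike needs a trick: the spike is a step function whose derivative is zero almost everywhere. A common recipe in this literature (sketched here; the exact pseudo-derivative varies by paper) is a surrogate gradient:

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate derivative
    in the backward pass so gradients can flow through spiking RNN units."""
    @staticmethod
    def forward(ctx, v):            # v: membrane potential minus threshold
        ctx.save_for_backward(v)
        return (v > 0).float()      # spike where the threshold is crossed
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        pseudo = 1.0 / (1.0 + 10.0 * v.abs()) ** 2  # smooth stand-in derivative
        return grad_out * pseudo

v = torch.randn(5, requires_grad=True)
spikes = SpikeSurrogate.apply(v)
spikes.sum().backward()             # gradients exist despite the step function
```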
Read 11 tweets
Happy to share that our paper “On Calibration and Out-of-domain Generalization” is accepted to #NeurIPS2021!

Congratulations to the wonderful students who came up with the idea and led the work on this paper:
@wald_yoav @amir_feder @d_greenfeld

arxiv.org/abs/2102.10395

1/15
tl;dr: Making a classifier calibrated across multiple training domains is an easy and powerful way to improve generalization to unseen domains
2/15
Say you have patient data from several hospitals which differ in their demographics, imaging machines, etc., and you want to learn a classifier that generalizes to new, unseen hospitals. This is the problem of out-of-domain (OOD) generalization, ubiquitous in NLP, vision & more
3/15
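
A minimal sketch of the recipe in the tl;dr: estimate calibration error separately on each training domain (hospital) and penalize miscalibration. The binned ECE below is one common estimator, not necessarily the paper's exact objective.

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Binned Expected Calibration Error for binary predictions."""
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        in_bin = bins == b
        if in_bin.any():
            # |empirical accuracy - mean confidence|, weighted by bin mass
            err += in_bin.mean() * abs(labels[in_bin].mean() - probs[in_bin].mean())
    return err

def multi_domain_calibration_penalty(domains):
    """domains: list of (probs, labels) pairs, one per hospital.
    Penalize the worst per-domain calibration error."""
    return max(ece(p, y) for p, y in domains)

rng = np.random.default_rng(0)
domains = [(rng.random(500), rng.integers(0, 2, 500)) for _ in range(3)]
print(multi_domain_calibration_penalty(domains))
```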
Read 15 tweets
Delighted to announce two papers we will present at #NeurIPS2021: on XLVIN (spotlight!), and on transferable algorithmic reasoning.

Both summarised in the wonderful linked thread from @andreeadeac22!

I'd like to add a few sentiments on XLVIN specifically... thread time! 🧵1/7
You might have seen XLVIN before -- we'd advertised it a few times, and it also featured at great length in my recent talks.

The catch? The original version of XLVIN has been doubly-rejected, from both ICLR (in spite of all-positive scores) and ICML. 2/7

However, this is one of the cases in which the review system worked as intended! Even AC-level rejections can be a blessing in disguise.

Each review cycle allowed us to deepen our qualitative insight into why exactly XLVIN works as intended... 3/7
Read 7 tweets
Here's why I like ✨graph cellular automata✨:

1. Decentralized / emergent computation on graphs is a fundamental principle of Nature
2. We can control their behavior using GNNs
3. They make oscillating bunnies sometimes 🐰

Soon at #NeurIPS2021

arxiv.org/abs/2110.14237
In the paper, we explore the most general possible setting for CA and show that we can learn arbitrary transition rules with GNNs.
Possible applications of this are in swarm optimization, neuroscience, epidemiology, IoT, traffic routing... you name it.
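
The basic object is simple: each node updates its state from its own state plus an aggregate of its neighbors'. A minimal learnable-transition sketch (the architecture here is illustrative):

```python
import numpy as np

def gnn_ca_step(H, A, W_self, W_neigh):
    """One graph cellular automaton step.

    H: (N, d) node states; A: (N, N) adjacency matrix. Each node's next
    state depends on itself and the mean of its neighbors; with trained
    W_self / W_neigh this is a learnable transition rule."""
    deg = np.maximum(A.sum(1, keepdims=True), 1)  # avoid division by zero
    neigh_mean = (A @ H) / deg
    return np.tanh(H @ W_self + neigh_mean @ W_neigh)

rng = np.random.default_rng(0)
N, d = 8, 4
A = (rng.random((N, N)) < 0.3).astype(float)      # random directed graph
H = rng.normal(size=(N, d))
H = gnn_ca_step(H, A, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```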
I have always been fascinated by CA, and I cannot overstate how excited I am about this paper and the idea of emergence.

Keep an eye out for this topic, because the community is growing larger every day and doing lots of amazing things.
Read 4 tweets
Do we still need SGD/Adam to train neural networks? Based on our #NeurIPS2021 paper, we are one step closer to replacing hand-designed optimizers with a single meta-model. Our meta-model can predict parameters for almost any neural network in just one forward pass. (1/n)
For example, our meta-model can predict all ~25M parameters of a ResNet-50, and this ResNet-50 will achieve ~60% on CIFAR-10 without any training. During its own training, the meta-model never observed any network close to ResNet-50. (2/n)
We can also predict all parameters for ResNet-101, ResNet-152, Wide-ResNets, Visual Transformers, you name it. We use the same meta-model to do that and it works on ImageNet too. (3/n)
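
Mechanically, the meta-model is a network that maps a description of the target network to that network's parameters. The toy below conveys only the input/output contract (layer descriptor in, layer weights out), not the paper's graph-based architecture.

```python
import torch
import torch.nn as nn

class ToyHyperNet(nn.Module):
    """Toy hypernetwork: maps a layer descriptor (just fan-in/fan-out here)
    to a weight matrix for that layer, up to a maximum size."""
    def __init__(self, max_in=64, max_out=64, hidden=256):
        super().__init__()
        self.max_in, self.max_out = max_in, max_out
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, max_in * max_out),
        )

    def forward(self, fan_in, fan_out):
        desc = torch.tensor([[fan_in / self.max_in, fan_out / self.max_out]])
        W = self.net(desc).view(self.max_in, self.max_out)
        return W[:fan_in, :fan_out]  # predicted weights; no target-net training

hyper = ToyHyperNet()
W = hyper(fan_in=10, fan_out=20)     # weights for a 10 -> 20 linear layer
print(W.shape)                       # torch.Size([10, 20])
```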
Read 12 tweets
Introducing DORA, an AI that learns no-press Diplomacy from scratch with no human data! Our #NeurIPS2021 paper shows DORA is superhuman in 1v1 Diplomacy. In 7p Diplomacy, the results are more subtle. Joint work w/ @anton_bakhtin, David Wu, and @adamlerer: arxiv.org/abs/2110.02924
DORA solves Diplomacy's combinatorial action space problem. It also suggests self-play isn't enough to be superhuman in 7-player Diplomacy due to its many equilibria, unlike chess, Go, and 6-player poker. That motivates Diplomacy as an excellent domain for researching multi-agent AI.
You can play against DORA on webdiplomacy.net! Just launch a bot game for the France vs. Austria variant.
Read 3 tweets
