Discover and read the best of Twitter Threads about #AAAI2022

Most recent (7)

There seems to be an almost willful confusion on #AI Twitter about the need for and role of explainability in #AI systems.

Contrary to the often polarizing positions, it is neither the case that we always need explanations nor the case that we never need them. 🧵1/
We look for explanations of high-level decisions in (what for us are) explicit-knowledge tasks, and where contestability and collaboration are important.

We rarely look for explanations of tacit-knowledge/low-level control decisions. 2/
I don't need an explanation of why you see a dog in a picture, why you put your left foot 3 mm ahead of your right, or why Facebook recommends me yet another page.

I do want one if I am denied a loan, or if I need a better model of you so I can coordinate with you. 3/
Read 14 tweets
3rd workshop on Artificial Intelligence Diversity, Belonging, Equity, and Inclusion (AIDBEI) at #AAAI: a livetweet thread by @banazir

#DiversityInAI #DiverseInAI
@RealAAAI @WiMLworkshop @black_in_ai @_LXAI @QueerinAI @AiDisability #IndigenousInAI
@wimlds @BlackWomenInAI

1/🧵 [Screenshot: Zoom attendees]
Welcoming remarks from @banazir:

There will again be a special issue of Proceedings of Machine Learning Research (#PMLR, an imprint of @JmlrOrg's Journal of #MachineLearning #research) on this workshop.

Video recordings of the workshop will be at DiverseInAI.org.

2/🧵
First presentation: "Hello* - A Beginner's Guide to the Conference Galaxy" - Bethany Chamberlain, Dovile Juodelyte and Veronika Cheplygina

@chamberlain_ba @DrVeronikaCH

Great use of @Mentimeter!

3/🧵 [Screenshot of cover slide]
Read 69 tweets
Super excited to announce that our paper “Clustering with UMAP: Why and How Connectivity Matters” has been accepted for presentation at the @GclrW (@AAAI 2022): arxiv.org/abs/2108.05525 #AAAI2022
1/10
What makes a good topological structure for dimensionality reduction? What started off as @suzyahyah and me trying to visualize high-dimensional datasets ended with us finding an improvement to the topologies used in the graph-based UMAP (McInnes, Healy, Melville): a mutual kNN graph!
2/10
In UMAP, a kNN graph is used to generate the initial topological representation of a dataset. However, previous work has shown that a kNN graph can capture noisy links and may not be an accurate representation of a high-dimensional dataset.
3/10
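As a rough illustration of the difference (a sketch using scikit-learn, not the code released with the paper; the random dataset, k, and helper names below are placeholders):

```python
# Sketch: plain kNN connectivity graph vs. mutual kNN graph.
# Illustrative only; not the authors' implementation.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_graph(X, k):
    """Directed kNN graph: edge i -> j if j is among the k nearest neighbours of i."""
    return kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)

def mutual_knn_graph(X, k):
    """Keep an edge only if i and j each list the other among their k nearest neighbours."""
    A = knn_graph(X, k)
    return A.minimum(A.T)  # element-wise min drops one-directional (often noisy) links

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))  # stand-in for a high-dimensional dataset
    print("kNN edges:", knn_graph(X, 10).nnz, "| mutual kNN edges:", mutual_knn_graph(X, 10).nnz)
```

One caveat such a graph raises: dropping one-directional links can leave points isolated, which is presumably why the paper's title stresses why and how connectivity matters.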
Read 10 tweets
Can memory-based meta-learning not only learn adaptive strategies 💭 but also hard-code innate behavior 🦎? In our #AAAI2022 paper @sprekeler & I investigate how lifetime, task complexity & uncertainty shape meta-learned amortized Bayesian inference.

📝: arxiv.org/abs/2010.04466
We analytically derive the optimal amount of exploration for a bandit 🎰 in which we explicitly control task complexity & uncertainty. Not learning is optimal in 2 cases:

1⃣ Optimal behavior across tasks is a priori predictable.
2⃣ There is on average not enough time to integrate info ⌛️
🧑‍🔬 Next, we compared the analytical solution to the amortized Bayesian inference meta-learned by LSTM-based RL^2 agents 🤖

We find that memory-based meta-learning is indeed capable of learning to learn and not to learn (💭/🦎), depending on the meta-train distribution.
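A toy way to see case 2⃣ (this is not the paper's analytical setup or its RL^2 experiments; the prior, noise level, and horizons below are made up): compare a policy that never learns and always pulls the a-priori-favoured arm against a simple explore-then-commit policy, at a short and a long lifetime.

```python
# Toy two-armed Gaussian bandit: when does not learning beat learning?
# Purely illustrative; all constants are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def compare_policies(horizon, explore_steps, prior_gap=0.3, noise=1.0, n_tasks=5000):
    """Expected per-step reward of a never-learn policy vs. explore-then-commit, averaged over tasks."""
    never_learn_total, etc_total = 0.0, 0.0
    for _ in range(n_tasks):
        # Task-specific arm means; the prior favours arm 0 by `prior_gap` on average.
        means = np.array([prior_gap, 0.0]) + rng.normal(scale=0.5, size=2)
        # Policy 1: never learn, always pull the a-priori-favoured arm 0.
        never_learn_total += horizon * means[0]
        # Policy 2: pull each arm `explore_steps` times, then commit to the empirically better one.
        est = [rng.normal(means[a], noise, explore_steps).mean() for a in (0, 1)]
        best = int(np.argmax(est))
        etc_total += explore_steps * means.sum() + (horizon - 2 * explore_steps) * means[best]
    return never_learn_total / (n_tasks * horizon), etc_total / (n_tasks * horizon)

for horizon in (10, 1000):
    never_learn, etc = compare_policies(horizon=horizon, explore_steps=3)
    print(f"horizon={horizon}: never-learn={never_learn:.3f}, explore-then-commit={etc:.3f}")
```

In this toy setting the exploration cost is never repaid over a short lifetime, so the never-learn policy wins; over a long lifetime exploring first pays off, matching case 2⃣.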
Read 5 tweets
Happy to share that our work (w/ @pinyuchenTW) "Vision Transformers are Robust Learners" got accepted to #AAAI2022 (oral).

Paper: arxiv.org/abs/2105.07581
Code: github.com/sayakpaul/robu…

1/
In this work, we investigate the robustness of ViTs and find that they are significantly more robust than CNNs when exposed to distribution shifts, adversarial perturbations, and more.

2/
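As a loose sketch of the kind of comparison involved (not the benchmark suite of distribution shifts and adversarial perturbations evaluated in the paper), one could check how often a pretrained ViT versus a ResNet flips its top-1 prediction when Gaussian noise is added to the input; the timm model names are real, but the random batch and noise level here are placeholders.

```python
# Sketch: prediction stability under a simple distribution shift (additive Gaussian noise).
# Illustrative only; not the paper's evaluation protocol.
import torch
import timm

@torch.no_grad()
def flip_rate(model, images, sigma=0.1):
    """Fraction of inputs whose top-1 class changes after adding Gaussian noise."""
    model.eval()
    clean = model(images).argmax(dim=-1)
    noisy = model(images + sigma * torch.randn_like(images)).argmax(dim=-1)
    return (clean != noisy).float().mean().item()

if __name__ == "__main__":
    images = torch.rand(8, 3, 224, 224)  # stand-in batch; use real, normalised images in practice
    for name in ("vit_base_patch16_224", "resnet50"):
        model = timm.create_model(name, pretrained=True)  # downloads pretrained weights
        print(name, "flip rate under noise:", flip_rate(model, images))
```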
Thanks to the @GoogleDevExpert program for providing a generous amount of @googlecloud credits that supported our experiments.
Read 3 tweets
Excited to share 5 recent papers from our research group (UVA Sigma lab: haifeng-xu.com/sigma/pub.html) accepted to #AAAI2022, all of which revolve around #MachineLearning in multi-agent setups!
(1) In "Learning the Optimal Recommendation from Explorative Users", we show how a recommender system and user can simultaneously and collaboratively learn in order to reach a globally optimal user recommendation.
(2) Worried about attacks on ML algorithms? In "Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification", we show that adversarial attacks on stochastic bandits can be fully defended against by selectively verifying only O(log T) rewards.
Read 6 tweets
Happy to announce that our @RealAAAI #aaai2022 paper titled "Accurate and Scalable Gaussian Processes for Fine-grained Air Quality Inference" with @patel_zeel_, @shivam15sahni, @PalakPurohit18 and @HarshP_ got accepted. Here is the 3-year-old story behind this work. (1/n)
We started the work in 2018 with @deepaknaray12, a third-year undergrad at @cse_iitgn. We dabbled with various baselines for a year without much "external output". (2/n)
In the summer of 2019, @ApoorvAgnihotr2 also joined the team. While exploring other baselines, we started studying Gaussian processes and ended up submitting an article to @distillpub later that summer. (3/n)
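For readers new to the setting, here is a minimal vanilla GP-regression sketch of air quality inference (deliberately much simpler than the accurate-and-scalable model the paper proposes): fit a GP to noisy readings at a handful of monitoring stations and predict the field, with uncertainty, at unmonitored locations. All data below is synthetic.

```python
# Minimal GP regression for spatial inference; synthetic stand-in for air quality data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_field(X):
    """Synthetic smooth 'pollution' surface used to generate station readings."""
    return 50 + 20 * np.sin(X[:, 0]) * np.cos(0.5 * X[:, 1])

stations = rng.uniform(0, 10, size=(30, 2))                       # coordinates of monitoring stations
readings = true_field(stations) + rng.normal(scale=2.0, size=30)  # noisy PM2.5-like measurements

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1.0), normalize_y=True)
gp.fit(stations, readings)

query = rng.uniform(0, 10, size=(5, 2))  # unmonitored locations
mean, std = gp.predict(query, return_std=True)
for q, m, s in zip(query, mean, std):
    print(f"location {q.round(2)}: predicted {m:.1f} ± {s:.1f}")
```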
Read 18 tweets
