Paul Liang
Apr 15 · 4 tweets · 7 min read
Are you working on federated learning over heterogeneous data? Use Vision Transformers as a backbone!
In our upcoming #CVPR2022 paper, we perform extensive experiments demonstrating the effectiveness of ViTs for FL:

paper: arxiv.org/abs/2106.06047
code: github.com/Liangqiong/ViT…
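
A minimal sketch of what this looks like in practice. This is an illustrative toy loop under stated assumptions (timm for a pretrained ViT backbone, synthetic client data), not the paper's exact training recipe:

```python
# Toy FedAvg loop with a ViT backbone (illustrative sketch; assumes `timm`).
import copy
import timm
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Train a copy of the current global model on one client's data."""
    model = copy.deepcopy(global_model).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg(states, sizes):
    """Aggregate client models with a data-size-weighted parameter average."""
    total = float(sum(sizes))
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = sum(n * s[k].float() for s, n in zip(states, sizes)) / total
    return avg

# ImageNet-pretrained ViT backbone; num_classes matches the downstream FL task.
global_model = timm.create_model("vit_small_patch16_224",
                                 pretrained=True, num_classes=10)

# Two synthetic "clients" standing in for real heterogeneous data splits.
client_loaders = [
    DataLoader(TensorDataset(torch.randn(4, 3, 224, 224),
                             torch.randint(0, 10, (4,))), batch_size=2)
    for _ in range(2)
]

for _round in range(3):  # communication rounds
    results = [local_update(global_model, loader) for loader in client_loaders]
    states, sizes = zip(*results)
    global_model.load_state_dict(fedavg(list(states), list(sizes)))
```

Swapping the `timm` model string for a ResNet gives the kind of CNN baseline the paper compares against.
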
We find that ViTs are more robust to distribution shift, reduce catastrophic forgetting across devices, accelerate convergence, and reach better final models.

Using ViTs, we are able to scale FL to the extreme edge case of heterogeneity: 6,000 and 45,000 clients with only 1 sample per client!
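
For a concrete sense of that extreme partition, here is a sketch (the helper below is hypothetical; the released repo has its own data-splitting utilities) that assigns exactly one training example to each simulated client:

```python
# Sketch of the extreme-heterogeneity split: one sample per simulated client.
# `make_client_loaders` is a hypothetical helper, not the repo's API.
from torch.utils.data import DataLoader, Subset

def make_client_loaders(dataset):
    """Return N single-example DataLoaders, one per client, for an N-sample dataset."""
    return [DataLoader(Subset(dataset, [i]), batch_size=1)
            for i in range(len(dataset))]
```

Feeding these loaders into a FedAvg loop like the one sketched above reproduces, in miniature, the one-sample-per-client regime described here.
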
By virtue of their robustness and generalization properties, ViTs also converge faster with fewer communicated parameters, which makes them appealing for efficient FL.

ViTs can also be combined with FL optimization methods (e.g., FedProx, FedAvg-Share) to further improve speed and performance.
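
FedProx, for example, only changes the client objective by adding a proximal penalty that keeps local updates close to the global weights. A minimal sketch of that local loss (illustrative, not the repo's implementation; `mu` is the proximal coefficient and `global_params` is assumed to be the global model's parameters in matching order):

```python
# FedProx-style local loss: task loss plus a proximal penalty toward the
# global weights (illustrative sketch, not the repo's implementation).
import torch.nn.functional as F

def fedprox_loss(model, global_params, x, y, mu=0.01):
    task_loss = F.cross_entropy(model(x), y)
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```
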
Try out our code on your FL tasks! It includes full comparisons between ResNets, ConvNets, and ViTs across many FL tasks: github.com/Liangqiong/ViT…

This was a fun collaboration led by @vickyqu0 and @yuyinzhou_cs, with Yingda, Feifei, @eadeli, @drfeifei, @rubinqilab, @StanfordDBDS, and @StanfordAILab.

More from @pliang279

Apr 14
[11877 Advanced Topics in Multimodal ML] In week 11, the class formalized a taxonomy of dataset and model biases (social bias, annotator bias, shortcuts, spurious correlations) and proposed solutions to mitigate them in multimodal settings.

Notes here: cmu-multicomp-lab.github.io/adv-mmml-cours…
Some suggested papers:
Shortcut learning in deep neural networks nature.com/articles/s4225…
Measuring Social Biases in Grounded Vision and Language Embeddings aclanthology.org/2021.naacl-mai…
Multimodal datasets: misogyny, pornography, and malignant stereotypes arxiv.org/abs/2110.01963
A Case Study of the Shortcut Effects in Visual Commonsense Reasoning aaai.org/AAAI21Papers/A…
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets arxiv.org/abs/1908.07898
Mar 3
[11877 Advanced Topics in Multimodal ML] In week 5’s session, the class aimed to define a taxonomy of multimodal reasoning: the (hierarchical) composition of unimodal and multimodal evidence into higher-level abstract concepts for prediction.
Notes here: cmu-multicomp-lab.github.io/adv-mmml-cours…
Some suggested papers:
CLEVRER: CoLlision Events for Video REpresentation and Reasoning arxiv.org/abs/1910.01442
Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" arxiv.org/abs/2006.11524
Learning to Compose and Reason with Language Tree Structures for Visual Grounding arxiv.org/abs/1906.01784
Heterogeneous Graph Learning for Visual Commonsense Reasoning arxiv.org/abs/1910.11475