Parallel training Recap⬇️

1. The concept of parallel training
2. Impactful research paper
3. Open-source framework for parallel training

Thread👇
A paper proposing one of the first statistical metrics to effectively quantify the right training batch size

3/⬇️
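The tweet doesn't name the paper, but a well-known metric of this kind is the gradient noise scale: the ratio of per-example gradient variance to the squared norm of the mean gradient. A minimal sketch of estimating such a metric (all names and the input layout here are illustrative, not taken from the thread):

```python
import torch

def gradient_noise_scale(per_sample_grads: torch.Tensor) -> float:
    """Estimate a gradient-noise-scale-style batch-size metric.

    per_sample_grads: (num_samples, num_params) tensor with one flattened
    gradient per training example (hypothetical input layout).
    Returns the trace of the gradient covariance divided by |mean gradient|^2;
    larger values suggest larger batches are still cost-effective.
    """
    mean_grad = per_sample_grads.mean(dim=0)                    # average gradient G
    signal = mean_grad.pow(2).sum()                             # |G|^2
    noise = per_sample_grads.var(dim=0, unbiased=True).sum()    # tr(Cov) of per-example grads
    return (noise / signal).item()
```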
Horovod: not the dance, but the parallel training framework created by @UberEng

4/⬇️
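For context, a minimal data-parallel training setup with Horovod and PyTorch looks roughly like the sketch below; the model, learning rate, and sizes are placeholders:

```python
import torch
import horovod.torch as hvd

hvd.init()                                   # start Horovod, discover peer workers
torch.cuda.set_device(hvd.local_rank())      # pin this process to one GPU

model = torch.nn.Linear(128, 10).cuda()                                # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())  # common trick: scale LR with worker count

# Average gradients across workers with ring all-reduce on every step
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())

# Make sure all workers start from identical weights and optimizer state
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
```

Launched with something like `horovodrun -np 4 python train.py`, each process trains on its own data shard while Horovod keeps the gradients in sync.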

More from @TheSequenceAI

21 May
Deep dive into "Scalable Private Learning with PATE" by @NicolasPapernot @_kunal_talwar_ @UlfarEr Shuang Song, Ilya Mironov, Ananth Raghunathan

It presents the Private Aggregation of Teacher Ensembles (PATE) method for ensuring the privacy of training data
Thread👇🏼 🔎
Imagine that two different models, trained on two disjoint datasets, produce similar outputs.

Then their decision does not reveal information about any single training example.

Which is another way of saying that the privacy of the training data is preserved.
2/⬇️
PATE structures the learning process as an ensemble of teacher models that pass their knowledge to a student model through a noise-perturbed aggregation step
3/⬇️
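A minimal sketch of PATE's core aggregation step, noisy-max voting over the teachers' predictions; the Laplace noise scale and all names here are illustrative, not the paper's exact parameters:

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, noise_scale=1.0, rng=None):
    """Label one student query by aggregating teacher predictions with Laplace noise.

    teacher_votes: per-teacher predicted class ids for a single input.
    noise_scale:   Laplace scale; in PATE this is tied to the privacy budget
                   (the 1.0 default here is only illustrative).
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes)          # vote histogram
    noisy_counts = counts + rng.laplace(0.0, noise_scale, num_classes)  # perturb the counts
    return int(np.argmax(noisy_counts))                                 # noisy-max label for the student

# Example: five teachers vote on a three-class problem
label = noisy_aggregate(np.array([0, 0, 1, 0, 2]), num_classes=3)
```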
8 May
AllenNLP @allen_ai = an Important Framework for NLU Researchers
Thread🧵👇
thesequence.substack.com/p/-edge22-mach…
❓AllenNLP:
+includes key building blocks for NLU
+offers state-of-the-art NLU methods
+facilitates the work of researchers
thesequence.substack.com/p/-edge22-mach…
2/
AllenNLP is built on top of @PyTorch and designed with experimentation in mind

Key contribution = maintains implementations of new models:
+text generation
+question answering
+sentiment analysis
+many others
3/
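As an illustration of that workflow, loading a trained model through AllenNLP's predictor API looks roughly like this; the archive path is a placeholder, not a specific released model:

```python
# Rough sketch of the typical AllenNLP inference workflow.
from allennlp.predictors.predictor import Predictor

# "path/to/model.tar.gz" is a placeholder for any trained AllenNLP model archive.
predictor = Predictor.from_path("path/to/model.tar.gz")

# predict_json feeds a JSON-like dict to the model's dataset reader;
# the expected keys and the output format depend on the specific model.
result = predictor.predict_json({"sentence": "AllenNLP makes NLU experiments easier."})
print(result)
```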
7 May
Deep dive into "ZeRO: Memory Optimizations Toward Training Trillion Parameter Models" by Samyam Rajbhandari, Olatunji Ruwase, Yuxiong He & @jeffra45

It proposes an optimizer for building huge pre-trained language models.

Thread👇🏼 🔎
thesequence.substack.com/p/-edge22-mach…
Zero Redundancy Optimizer (ZeRO) is an optimization module that maximizes both memory and scaling efficiency.

2/
It tries to address the limitations of data parallelism and model parallelism while achieving the merits of both

thesequence.substack.com/p/-edge22-mach…

3/
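ZeRO ships as part of Microsoft's DeepSpeed library; a rough sketch of enabling it there is below, with the stage, batch size, and model all illustrative placeholders:

```python
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)   # placeholder model

# Illustrative config: ZeRO stage 2 partitions optimizer state and gradients
# across data-parallel workers to cut per-GPU memory use.
ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},
}

# deepspeed.initialize wraps the model in an engine that applies ZeRO's
# partitioning during forward/backward/step.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```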
