What's the difference between recent works in semi-supervised learning? Key: consistency training (UDA, FixMatch, ReMixMatch, ICT, etc.) and self-training (#NoisyStudent), see photo below and my recent talk (bit.ly/thangluong-tal… with bonus slides on #MeenaBot!).
Our UDA work (arxiv.org/abs/1904.12848) proposes the use of strong augmentation (RandAugment), which subsequent works (FixMatch, NoisyStudent) follow. In its consistency training, UDA uses soft pseudo-labels, whereas FixMatch uses hard pseudo-labels obtained from a "weakly" augmented view.
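To make the soft-vs-hard distinction concrete, here is a minimal PyTorch-style sketch of the two unlabeled-data losses. The function names and hyperparameter values (temperature, confidence threshold) are illustrative assumptions, not the papers' released code; `x_weak` and `x_strong` are the weakly and strongly (e.g. RandAugment) augmented views of the same unlabeled batch.

```python
import torch
import torch.nn.functional as F

def uda_consistency_loss(model, x_weak, x_strong, temperature=0.4):
    """UDA-style loss: soft (sharpened) pseudo-labels from the weak view,
    matched against predictions on the strongly augmented view."""
    with torch.no_grad():
        # Soft target distribution; no gradient flows through the teacher branch.
        targets = F.softmax(model(x_weak) / temperature, dim=-1)
    log_probs_strong = F.log_softmax(model(x_strong), dim=-1)
    return F.kl_div(log_probs_strong, targets, reduction="batchmean")

def fixmatch_consistency_loss(model, x_weak, x_strong, threshold=0.95):
    """FixMatch-style loss: hard pseudo-labels from the weak view,
    kept only when the prediction is confident enough."""
    with torch.no_grad():
        probs_weak = F.softmax(model(x_weak), dim=-1)
        confidence, pseudo_labels = probs_weak.max(dim=-1)
        mask = (confidence >= threshold).float()  # drop low-confidence examples
    per_example = F.cross_entropy(model(x_strong), pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```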
It is also important to note that adding noise to the student, training equal-or-larger students, and iterating the self-training form a novel combination that underlies the success of #NoisyStudent on ImageNet (arxiv.org/abs/1911.04252, to appear at #CVPR).
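The iterative recipe can be summarized in a short sketch. This is only an outline of the loop described above, assuming hypothetical helpers `pseudo_label`, `build_student`, and `train_with_noise` that are passed in by the caller; it is not the paper's released code.

```python
def noisy_student(teacher, labeled_data, unlabeled_data,
                  pseudo_label, build_student, train_with_noise, iterations=3):
    """Iterative self-training loop in the Noisy Student style."""
    for _ in range(iterations):
        # 1) The teacher predicts pseudo-labels on unlabeled images, without noise.
        pseudo_labeled = pseudo_label(teacher, unlabeled_data)
        # 2) An equal-or-larger student is trained on labeled + pseudo-labeled data,
        #    with noise (data augmentation, dropout, stochastic depth) applied to it.
        student = build_student(teacher)  # assumed to return a model >= teacher size
        student = train_with_noise(student, labeled_data + pseudo_labeled)
        # 3) The student becomes the teacher for the next iteration.
        teacher = student
    return teacher
```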