During quarantine, I binge-watched the @Stanford #CS330 lectures taught by the brilliant @chelseabfinn. This blog post summarizes my key takeaways on #Bayesian Meta-Learning. #AtHomeWithAI
medium.com/cracking-the-d…
(1/7) 👇
Why Bayesian? Few-shot problems are inherently ambiguous: a handful of examples rarely pins down the task, so we want meta-learners that reason about uncertainty instead of committing to a single point estimate.
(2/7) 👇
Bayesian approaches to one-shot learning go back well before deep meta-learning. Three classics:
- One-Shot Learning of Object Categories by @drfeifei, @rob_fergus, and @PietroPerona: vision.stanford.edu/documents/Fei-…
- One-Shot Learning with a Hierarchical Nonparametric Bayesian Model by @rsalakhu, @wise_tbaum, and Torralba: proceedings.mlr.press/v27/salakhutdi…
- One-Shot Learning by Inverting a Compositional Causal Process by @LakeBrenden, @rsalakhu, and @wise_tbaum: papers.nips.cc/paper/5128-one…
(3/7) 👇
Five ways to equip a deep model with uncertainty estimates:
1> Use latent variable models optimized with variational inference
2> Build ensembles to estimate model uncertainty
3> Represent an explicit distribution over the model's weights (see the sketch after this list)
4> Use normalizing flows
5> Use energy-based models
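To make option 3 concrete, here's a minimal PyTorch sketch (my own illustration, not course code; all names are made up): a linear layer that keeps a factorized Gaussian over its weights, trained with the reparameterization trick — which is also what option 1's variational inference usually looks like in practice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # Learn a mean and log-std per weight instead of a point estimate.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logstd = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_logstd = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        # Reparameterization trick: w = mu + std * eps, eps ~ N(0, I),
        # so gradients flow through the random weight sample.
        w = self.w_mu + self.w_logstd.exp() * torch.randn_like(self.w_mu)
        b = self.b_mu + self.b_logstd.exp() * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)

    def kl_to_standard_normal(self):
        # Closed-form KL(q(w) || N(0, I)) for a factorized Gaussian.
        def kl(mu, logstd):
            return 0.5 * (mu**2 + (2 * logstd).exp() - 2 * logstd - 1).sum()
        return kl(self.w_mu, self.w_logstd) + kl(self.b_mu, self.b_logstd)

# One ELBO step: Monte Carlo likelihood (one weight sample) plus the KL penalty.
layer = BayesianLinear(2, 1)
x, y = torch.randn(8, 2), torch.randn(8, 1)
loss = F.mse_loss(layer(x), y) + layer.kl_to_standard_normal() / 8
loss.backward()
```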
(4/7) 👇
Black-box approaches amortize inference: a network consumes the task's training set and directly outputs a distribution over task-specific parameters or predictions. Key papers (a minimal CNP-style sketch follows this list):
- Towards a Neural Statistician by @InfAtEd: arxiv.org/pdf/1606.02185…
- Conditional Neural Processes by @DeepMind: arxiv.org/pdf/1807.01613…
- ML-PIP (Meta-Learning Probabilistic Inference for Prediction) by @Cambridge_Uni: arxiv.org/pdf/1805.09921…
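For intuition, here's a stripped-down Conditional Neural Process in PyTorch — a sketch in the spirit of the paper, not its actual code (layer sizes and names are my own): encode each context (x, y) pair, mean-pool into a task embedding, then decode each query x into a predictive Gaussian.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNP(nn.Module):
    def __init__(self, dim_x=1, dim_y=1, dim_r=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_x + dim_y, dim_r), nn.ReLU(), nn.Linear(dim_r, dim_r))
        self.decoder = nn.Sequential(
            nn.Linear(dim_x + dim_r, dim_r), nn.ReLU(), nn.Linear(dim_r, 2 * dim_y))

    def forward(self, x_ctx, y_ctx, x_query):
        # Permutation-invariant task embedding: encode pairs, then mean-pool.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        h = self.decoder(torch.cat(
            [x_query, r.expand(len(x_query), -1)], dim=-1))
        mu, raw_sigma = h.chunk(2, dim=-1)
        # A lower-bounded std keeps the predictive Gaussian well behaved.
        return mu, 0.1 + 0.9 * F.softplus(raw_sigma)

# Meta-training: maximize log-likelihood of query targets given the context.
model = CNP()
x_c, y_c = torch.randn(5, 1), torch.randn(5, 1)
x_q, y_q = torch.randn(10, 1), torch.randn(10, 1)
mu, sigma = model(x_c, y_c, x_q)
loss = -torch.distributions.Normal(mu, sigma).log_prob(y_q).mean()
loss.backward()
```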
(5/7) 👇
Optimization-based approaches bake inference into gradient-based adaptation (a toy sketch follows this list):
- Amortized Bayesian Meta-Learning by @sachin_ravi_ and @alexbeatson: openreview.net/pdf?id=rkgpy3C…
- Bayesian MAML (BMAML) by @element_ai and @MILAMontreal: arxiv.org/pdf/1806.03836…
- Probabilistic MAML by @chelseabfinn, Kelvin Xu, and @svlevine: arxiv.org/pdf/1806.02817…
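And a toy flavor of the optimization-based idea — my own sketch under simplifying assumptions (a Gaussian over the initialization of a linear regressor), not the exact algorithm of any paper above: sample an initialization per task, adapt it with a few gradient steps, and backprop the query loss through adaptation into the Gaussian's parameters.

```python
import torch

# Meta-learned Gaussian over the initialization of a 2-weight linear model.
mu = torch.zeros(2, requires_grad=True)
logstd = torch.full((2,), -2.0, requires_grad=True)
opt = torch.optim.Adam([mu, logstd], lr=1e-2)
inner_lr, inner_steps = 0.1, 3

for it in range(100):
    # Synthetic regression task: y = x @ w_true.
    w_true = torch.randn(2)
    x_s, x_q = torch.randn(10, 2), torch.randn(10, 2)
    y_s, y_q = x_s @ w_true, x_q @ w_true

    # Sample a task-specific initialization (reparameterization trick).
    w = mu + logstd.exp() * torch.randn(2)
    for _ in range(inner_steps):
        # Inner-loop adaptation on the support set; create_graph=True keeps
        # the graph so the outer loss can differentiate through adaptation.
        loss_s = ((x_s @ w - y_s) ** 2).mean()
        (grad,) = torch.autograd.grad(loss_s, w, create_graph=True)
        w = w - inner_lr * grad

    # Outer objective: post-adaptation loss on the query set.
    loss_q = ((x_q @ w - y_q) ** 2).mean()
    opt.zero_grad()
    loss_q.backward()
    opt.step()
```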
(6/7) 👇
Full #CS330 lecture playlist: youtube.com/playlist?list=…
(7/7) 📺