Research Scientist at DeepMind. Opinions my own. Inventor of GANs. Lead author of https://t.co/M6vl8pEifa
Nov 24 • 9 tweets • 2 min read
Posting a call for help: does anyone know of a good way to simultaneously treat both POTS and Ménière’s disease? Please contact me if you’re either a clinician with experience doing this or a patient who has found a good solution. Context in thread
In early 2023 I started losing hearing suddenly and got the best help by posting on Twitter, so I’m trying that strategy again
Aug 25, 2018 • 6 tweets • 1 min read
I don’t like notebooks either. I agree with a lot of @joelgrus’s reasons and have a few others of my own
I suspect that peer review *actually causes* rather than mitigates many of the “troubling trends” recently identified by @zacharylipton and Jacob Steinhardt: arxiv.org/abs/1807.03341
I frequently serve as an area chair and I manage a small research group, so overall I see a lot of reviews of both my group’s work and others’ work
May 16, 2018 • 9 tweets • 2 min read
A math trick I like a lot is the approach to taking derivatives using hyperreal numbers. Thread:
For this trick we introduce a new kind of number, called an infinitesimal hyperreal number. Imagine we have some number epsilon, such that epsilon > 0 but epsilon < x for all positive real numbers x. Imagine that we can use algebra to manipulate epsilon like any other variable.
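A minimal Python sketch of this idea (the `Dual` class and `derivative` helper are my own illustration, not from the thread): the computational analogue of an infinitesimal is a dual number x + ε with ε² = 0, so after evaluating f(x + ε) with ordinary algebra, the coefficient on ε is exactly f′(x). This is the same trick that powers forward-mode automatic differentiation.

```python
class Dual:
    """A number a + b*eps, where eps is infinitesimal (eps**2 == 0)."""

    def __init__(self, real, eps=0.0):
        self.real = real
        self.eps = eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f at x + eps; the eps coefficient is f'(x)."""
    return f(Dual(x, 1.0)).eps


# d/dx of x**3 at x = 2 is 3 * 2**2 = 12
print(derivative(lambda x: x * x * x, 2.0))  # 12.0
```

Note how no limits or finite differences appear anywhere: the derivative falls out of pure algebra on ε, exactly as the thread describes.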
May 15, 2018 • 10 tweets • 2 min read
A quick thread on two of my favorite theory hacks for machine learning research
A lot of the time, we want to analyze the optimal behavior of a neural net using algebra / calculus. Neural net models are usually too complicated for you to algebraically solve for the parameters that optimize most functions (unless it's some trivial function like weight decay)
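To make the contrast concrete (this worked example is mine, not from the thread): a purely quadratic objective such as ridge regression, which is linear regression with a weight-decay penalty, is one of the rare cases where you *can* solve for the optimal parameters algebraically. Setting the gradient to zero gives a closed form:

```latex
\nabla_w \left[ \tfrac{1}{2}\|Xw - y\|^2 + \tfrac{\lambda}{2}\|w\|^2 \right]
  = X^\top(Xw - y) + \lambda w = 0
\quad\Longrightarrow\quad
w^* = (X^\top X + \lambda I)^{-1} X^\top y
```

For a neural net, the analogous stationarity condition is a nonlinear system in the weights with no closed-form solution, which is why the theory hacks in this thread are needed.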
Mar 26, 2018 • 12 tweets • 2 min read
2nd thread on evaluating GAN papers (1st thread hit max thread length)
Many DL algorithms, but especially GANs and RL, get very different results each time you run them. Papers should show at least 3 runs with the same hyperparameters to get some idea of the stochasticity.
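A small sketch of the reporting practice being asked for (the `train` function here is a hypothetical stand-in for a full GAN training run): run the same hyperparameters under several seeds and report mean ± standard deviation rather than a single number.

```python
import random
import statistics

def train(seed):
    """Hypothetical stand-in for one full training run.
    Returns a final evaluation score; a real GAN run would go here."""
    rng = random.Random(seed)
    return 0.8 + rng.gauss(0.0, 0.05)  # simulated run-to-run noise

# At least 3 runs with identical hyperparameters, different seeds
scores = [train(seed) for seed in (0, 1, 2)]
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"score: {mean:.3f} +/- {std:.3f} over {len(scores)} runs")
```

Reporting the spread is the point: a single lucky run can look like an improvement that the variance across seeds would have revealed as noise.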
Mar 26, 2018 • 25 tweets • 4 min read
Thread on how to review papers about generic improvements to GANs
There are a lot of papers about theoretical or empirical studies of how GANs work, papers about how to do new strange and interesting things with GANs (e.g. the first papers on unsupervised translation), new metrics, etc. This thread isn't about those.
Mar 26, 2018 • 11 tweets • 2 min read
1/11) Thread on bidding to review conference papers
2/11) Lately I've seen a lot of people saying things like "clearly good papers get all the senior reviewers" or "remember to bid or you'll only get low quality papers"