There's been some back-and-forth about this paper on getting gradients without doing backpropagation, so I took a minute to write up an analysis of what breaks and how it might be fixed.
tl;dr: the estimated gradients are _really_ noisy! like wow
The main result I claim is an extension of Thm 1 in the paper. They prove that the _expected value_ of the gradient estimate is the true gradient, and I worked out the _variance_ of the estimate.
It's big! Each entry has variance equal to the entire true gradient's squared norm (plus that entry's own square) 😬
(Sketch of the proof: nothing is correlated, everything is zero-mean and symmetric around the origin, so the cross terms vanish and the only surviving terms are chi-squared r.v.s with known variances, scaled by squared gradient entries. gaussians are fun!)
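in symbols (my notation: g = (∇f·v)v is the forward gradient with v ~ N(0, Iₙ), as in the paper; the only gaussian facts needed are E[vᵢ⁴] = 3 and E[vᵢ²vⱼ²] = 1 for j ≠ i):

```latex
g_i = v_i \sum_j v_j\,\partial_j f
\qquad
\mathbb{E}[g_i] = \partial_i f

% terms with j \neq k vanish by symmetry:
\mathbb{E}[g_i^2]
  = \sum_{j,k} \partial_j f\,\partial_k f\;\mathbb{E}\!\left[v_i^2 v_j v_k\right]
  = 3\,(\partial_i f)^2 + \sum_{j \neq i} (\partial_j f)^2
  = \lVert \nabla f \rVert^2 + 2\,(\partial_i f)^2

\operatorname{Var}(g_i)
  = \mathbb{E}[g_i^2] - (\partial_i f)^2
  = \lVert \nabla f \rVert^2 + (\partial_i f)^2
```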
Informally, we say that "noisy gradients" are bad and slow down learning.
So I looked at the "signal-to-noise ratio" of the estimate: each squared true gradient entry over the variance of the corresponding estimate entry.
It's bad! If you're scaling your gradients properly, it gets worse as you add parameters.
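concretely, per entry (my arithmetic; the approximation assumes all n parameter gradients have comparable magnitude, so (∂ᵢf)² ≈ ‖∇f‖²/n):

```latex
\mathrm{SNR}_i
  = \frac{(\partial_i f)^2}{\operatorname{Var}(g_i)}
  = \frac{(\partial_i f)^2}{\lVert \nabla f \rVert^2 + (\partial_i f)^2}
  \;\approx\; \frac{1}{n + 1}
```

so a 10× bigger model costs you roughly 10× in per-entry SNR.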
(FYI, I sanity-checked my result by pulling gradients from a PyTorch MNIST example and checking the true gradient's squared norm against the average variance of the estimate's entries, which should be nearly equal. And they were super close!)
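if you want to poke at this yourself, here's a minimal numpy sketch of the same check with a synthetic gradient (sizes and names are mine, not the actual MNIST model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_samples = 1_000, 10_000  # hypothetical sizes

# stand-in for a real model's gradient; any fixed vector works here
true_grad = rng.normal(size=n_params)

# forward gradient samples: g = (∇f · v) v with v ~ N(0, I)
vs = rng.normal(size=(n_samples, n_params))
gs = (vs @ true_grad)[:, None] * vs

# Thm 1: unbiased, so sample means should roughly track the true entries
print(gs.mean(axis=0)[:3], true_grad[:3])

# my claim: average per-entry variance ≈ squared norm of the true gradient
print(gs.var(axis=0).mean(), true_grad @ true_grad)
```

the two numbers on that last line should land close to each other.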
I give some intuitions for the variance, and for the general distribution of the forward gradients (g), based on product distributions and large random vectors.
In that paragraph I mention some simulations (related to the sanity check above). I didn't include the plots, but here they are! The alignment between the forward grad and the true gradient is all over the place -- and way worse than randomness from minibatch effects.
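the alignment number in those plots is just cosine similarity; continuing the numpy sketch above (reusing `gs` and `true_grad`):

```python
# cosine similarity of each forward gradient sample with the true gradient
cos = (gs @ true_grad) / (np.linalg.norm(gs, axis=1) * np.linalg.norm(true_grad))
print(cos.mean(), cos.std())  # hovers near 0 in high dimensions
```

with n = 1,000 the typical cosine is only about √(2/πn) ≈ 0.03, i.e. the estimate is usually nearly orthogonal to the true gradient.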
The paper could've said more about the weaknesses of forward gradients, but I don't think it's a useless idea.
So I wrote some suggestions. For example, if you already have a good prior about the gradient direction, maybe you could sample from it instead of a unit normal?
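to make that concrete, a purely hypothetical sketch (not from the paper; `prior` is an oracle direction standing in for, say, a momentum buffer, and the dot product stands in for one forward-mode jvp pass). caveat: a non-isotropic v breaks the unbiasedness guarantee of Thm 1 unless you correct for it:

```python
import numpy as np

rng = np.random.default_rng(0)
true_grad = rng.normal(size=1_000)

def forward_grad(v):
    # (∇f · v) v; the dot product stands in for one forward-mode pass
    return (true_grad @ v) * v

# the paper's estimator: isotropic tangent
v_iid = rng.normal(size=true_grad.shape)

# hypothetical variant: center the tangent on a prior direction
prior = true_grad / np.linalg.norm(true_grad)  # oracle prior, best case
v_prior = prior + 0.1 * rng.normal(size=true_grad.shape)

print(np.linalg.norm(forward_grad(v_iid) - true_grad))    # typically huge
print(np.linalg.norm(forward_grad(v_prior) - true_grad))  # typically much smaller
```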
@theshawwn i saw you expressing interest in the forward gradient stuff and reasonable skepticism about the value of MNIST experiments
this is a fairly rigorous argument that the gradient noise is too high for fwd grads, as is, to work in large models
Read through these awesome notes by @chipro and noticed something interesting about distribution shifts: they form a lattice, so you can represent them like you do sets, i.e. using a Venn diagram!
I find this view super helpful for understanding shifts, so let's walk through it.
(inb4 pedantry: the above diagram is an Euler diagram, not a Venn diagram, meaning not all possible joins are represented. that is good, actually, for reasons to be revealed!)
From the notes: what shifts is the joint distribution of data X and targets Y. We can decompose the joint into two pieces (a marginal and a conditional) in two separate ways, starting from Y or from X: P(X, Y) = P(Y|X)P(X) = P(X|Y)P(Y).
There are four major classes of distribution shift, defined by which pieces vary and which don't.
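filling in the bookkeeping (the names here are the standard ones; the fourth cell gets discussed much less often in the literature):

```latex
P(X, Y) = P(Y \mid X)\,P(X) = P(X \mid Y)\,P(Y)

% hold one factor fixed, let the other move:
%   covariate shift:  P(X) changes,         P(Y \mid X) fixed
%   label shift:      P(Y) changes,         P(X \mid Y) fixed
%   concept drift:    P(Y \mid X) changes,  P(X) fixed
%   fourth cell:      P(X \mid Y) changes,  P(Y) fixed (rarely treated on its own)
```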
the final video for the @weights_biases Math4ML series, on probability, is now up on YouTube!
@_ScottCondron and I talk entropies, divergence, and loss functions
🔗:
this is the final video in a four-part series of "exercise" videos, where Scott and I work through a collection of Jupyter notebooks with automatically-graded Python coding exercises on math concepts
New video series out this week (and into next!) on the @weights_biases YouTube channel.
They're Socratic livecoding sessions where @_ScottCondron and I work through the exercise notebooks for the Math4ML class.
Details in 🧵⤵️
Socratic: following an ancient academic tradition, I try to trick @_ScottCondron into being wrong, so that students can learn from mistakes and see their learning process reflected in the content.
(i was inspired to try this style out by the @PyTorchLightnin Master Class series, in which @_willfalcon and @alfcnz talk nitty-gritty of DL with PyTorch+Lightning while writing code. strong recommend!)
tl;dr: the basic idea of the SVD works for _any_ function.
it's a three-step decomposition (spelled out in symbols below the list):
- throw away the useless bits ⤵
- rename what remains 🔀
- insert yourself into the right context ⤴
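in symbols, one way to line the steps up (my sketch; π, f̃, ι are the usual quotient map, induced bijection, and inclusion from the image factorization):

```latex
% for a matrix, the SVD, read right to left:
A = U \Sigma V^\top
%   V^\top : rotate so the useless directions (the null space) sit in the trailing coordinates  ⤵
%   \Sigma : scale + rename the surviving input directions as output directions  🔀
%   U      : rotate those directions into their proper place in the output space  ⤴

% for an arbitrary function  f : A \to B,  the image factorization:
f = \iota \circ \tilde{f} \circ \pi
%   \pi       : A \twoheadrightarrow A/\!\sim            throw away distinctions f ignores  ⤵
%   \tilde{f} : A/\!\sim \,\to\, \operatorname{im}(f)    a pure renaming (bijection)  🔀
%   \iota     : \operatorname{im}(f) \hookrightarrow B   place the image in context  ⤴
```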
also, if you're more of a "YouTube talk" than a "tweet wall" kinda person, check out the video version, given as part of the @weights_biases Deep Learning Salon webinar series