Excited to share our paper arxiv.org/abs/2105.12221 on neural net overparameterization, to appear at #ICML2021 💃🏻 We asked why training can't find a global minimum in mildly overparameterized nets. Below, a 4-4-4 net can achieve zero loss, but none of the 5-5-5 nets trained with GD can 🤨
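Here's a minimal sketch of this kind of teacher-student setup (toy data, tanh units, full-batch GD; the `mlp`/`train` helpers, widths, and training details are my assumptions, not the paper's exact experiment, and whether GD actually gets stuck depends on the run):

```python
# Toy sketch: a random 4-4-4 "teacher" generates the data, so a 4-4-4 student can in
# principle reach zero loss; the question is whether plain GD on a 5-5-5 student does.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in = 2

def mlp(widths):
    layers, prev = [], d_in
    for w in widths:
        layers += [nn.Linear(prev, w), nn.Tanh()]
        prev = w
    layers.append(nn.Linear(prev, 1))
    return nn.Sequential(*layers)

teacher = mlp([4, 4, 4])
X = torch.randn(256, d_in)
with torch.no_grad():
    y = teacher(X)

def train(widths, steps=20000, lr=1e-2):
    net = mlp(widths)
    opt = torch.optim.SGD(net.parameters(), lr=lr)  # full-batch gradient descent
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

print("4-4-4 student loss:", train([4, 4, 4]))
print("5-5-5 student loss:", train([5, 5, 5]))
```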
We investigated training failures under mild overparameterization vs. successful training under vast overparameterization from the simple perspective of permutation symmetries!
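The symmetry itself is easy to see in code. A toy one-hidden-layer net (my stand-in, not the paper's architecture): permuting the hidden neurons changes the parameters but not the function.

```python
# Permuting hidden neurons: same function, different point in parameter space.
import numpy as np

rng = np.random.default_rng(0)
d_in, width = 3, 4

W1 = rng.normal(size=(width, d_in))   # input -> hidden weights
b1 = rng.normal(size=width)
W2 = rng.normal(size=(1, width))      # hidden -> output weights

def forward(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

x = rng.normal(size=d_in)
perm = rng.permutation(width)

# Apply the same permutation to rows of W1, entries of b1, and columns of W2.
print(np.allclose(forward(x, W1, b1, W2),
                  forward(x, W1[perm], b1[perm], W2[:, perm])))  # True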
The key observation is that every critical point of a small net turns into a subspace of critical points in a bigger net. Using combinatorics, we counted these critical subspaces precisely 😋
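Here is the expansion idea on the same toy one-hidden-layer net (tanh units; the paper treats this in general): duplicate a hidden neuron and split its outgoing weight as (alpha, 1 - alpha). The function, and hence the loss, is unchanged for every alpha, so one point of the width-4 net maps to a whole line of equal-loss points in the width-5 net.

```python
# Neuron splitting: one width-4 point becomes a line of width-5 points with the same loss.
import numpy as np

rng = np.random.default_rng(1)
d_in, width = 3, 4

W1 = rng.normal(size=(width, d_in))
b1 = rng.normal(size=width)
W2 = rng.normal(size=(1, width))

def forward(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

x = rng.normal(size=d_in)
base = forward(x, W1, b1, W2)

for alpha in [-2.0, 0.0, 0.3, 1.0, 5.0]:
    # duplicate neuron 0: copy its incoming weights and bias, split its outgoing weight
    W1_big = np.vstack([W1, W1[0:1]])
    b1_big = np.append(b1, b1[0])
    W2_big = np.hstack([W2, np.zeros((1, 1))])
    W2_big[0, 0] = alpha * W2[0, 0]
    W2_big[0, -1] = (1.0 - alpha) * W2[0, 0]
    print(alpha, np.allclose(forward(x, W1_big, b1_big, W2_big), base))  # True for every alpha
```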
As a byproduct of this expansion trick, we could give a precise geometric description of the global minima manifold in overparameterized nets: it is a union of affine subspaces, connected as in the picture.
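A toy illustration of why these affine pieces connect (same assumptions as above, my construction): give the extra neuron zero outgoing weight. Then its incoming weights and bias are free, so an entire affine subspace of width-5 parameters computes the same function, and the splitting lines for different neurons all touch such a subspace at alpha = 1.

```python
# A "dead" extra neuron: zero outgoing weight, arbitrary incoming weights, same function.
import numpy as np

rng = np.random.default_rng(2)
d_in, width = 3, 4

W1 = rng.normal(size=(width, d_in))
b1 = rng.normal(size=width)
W2 = rng.normal(size=(1, width))

def forward(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

x = rng.normal(size=d_in)
base = forward(x, W1, b1, W2)

for _ in range(3):
    # arbitrary incoming weights/bias for the extra neuron, zero outgoing weight
    W1_big = np.vstack([W1, rng.normal(size=(1, d_in))])
    b1_big = np.append(b1, rng.normal())
    W2_big = np.hstack([W2, np.zeros((1, 1))])
    print(np.allclose(forward(x, W1_big, b1_big, W2_big), base))  # True
```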
Most surprisingly, in mildly overparameterized nets the critical subspaces dominate the global minima manifold! The landscape likely looks rough in this regime...
But in the vast overparameterization regime, the global minima manifold is much larger than the critical subspaces: the minima manifold is HUGE, so it is easier for GD to find a global minimum, as expected from NTK theory!