Blog: mitchellnw.github.io/blog/2019/dnw/
Preprint: arxiv.org/abs/1906.00586
Code: github.com/allenai/dnw
(1/6)
1) In _some_ ways, NAS and sparse neural network learning are really two sides of the same coin. As NAS becomes more fine-grained, finding a good architecture becomes akin to finding a sparse subnetwork of the complete graph (sketch below).
(2/6)
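To make the analogy concrete, here is a minimal PyTorch sketch of that view (my own illustration, not the actual allenai/dnw code; the class name TopKEdges and all parameters are hypothetical): treat the edge weights of a complete graph as learnable, keep only the top-k edges by magnitude in the forward pass, and pass gradients straight through to every edge so that currently unused edges can be rediscovered during training.

import torch

class TopKEdges(torch.autograd.Function):
    # Hypothetical helper: keep the k largest-magnitude edges of a
    # complete graph's weight matrix; let gradients flow to all edges.

    @staticmethod
    def forward(ctx, weights, k):
        # Zero out all but the k highest-magnitude edge weights.
        mask = torch.zeros_like(weights)
        top = weights.abs().flatten().topk(k).indices
        mask.view(-1)[top] = 1.0
        return weights * mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: every edge receives a gradient, so an
        # edge that is currently pruned can grow back into the top-k.
        return grad_output, None

# Usage: a "complete graph" between 8 input and 8 output nodes,
# of which only 16 of the 64 possible edges are active at a time.
w = torch.randn(8, 8, requires_grad=True)
x = torch.randn(4, 8)
y = x @ TopKEdges.apply(w, 16)
y.sum().backward()  # w.grad is dense: all 64 edges are updated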
(3/6)
(4/6)
(5/6)
As the AI/ML research space grows increasingly competitive, we must emphasize the importance of programs where prior research experience is not a prerequisite!
(6/6)