Come to our talks and posters at #ICLR2021 to discuss our findings on understanding and improving deep learning! The talks and posters are available now; links to the talks, posters, papers, and code are in the thread:
A graduate degree in ML is overrated. So is having publications in top ML venues. One can accomplish a lot in this field without either of these. The truth is that you don’t need to cover much background before you can do interesting things in ML.
2/10
The ML field continues to become more and more accessible every day. Everything you need to learn is available online. There is a strong push to make ML methods and models open-source and reproducible, and many people are producing useful educational content.
✅ We introduce a measure of computational example difficulty: the prediction depth (PD). PD is the earliest layer after which the network’s final prediction is already determined.
✅ We use k-NN classifier probes to determine each layer’s prediction (left panel); a rough sketch of the computation follows this list.
2/
✅ Prediction depth is higher for examples and datasets that seem more difficult (Fig. 1).
✅ PD is consistent across random seeds and similar architectures (Fig. 2).
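A minimal sketch of how PD can be computed from the probe predictions (not our actual code): it assumes per-layer features have already been extracted for a labelled probe set (`probe_feats`, `probe_labels`) and for the query example (`example_feats`); these names and the choice of k are illustrative.

```python
# Sketch: prediction depth (PD) for a single example via k-NN probes.
# probe_feats[l]  : features of the labelled probe set at layer l
# probe_labels    : labels of the probe set
# example_feats[l]: features of the query example at layer l
# final_pred      : the network's final prediction for the query example
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def prediction_depth(probe_feats, probe_labels, example_feats, final_pred, k=30):
    """Return the earliest layer index from which every k-NN probe
    agrees with the network's final prediction for this example."""
    num_layers = len(probe_feats)
    layer_preds = []
    for l in range(num_layers):
        knn = KNeighborsClassifier(n_neighbors=k)
        knn.fit(np.asarray(probe_feats[l]).reshape(len(probe_labels), -1),
                probe_labels)
        layer_preds.append(
            knn.predict(np.asarray(example_feats[l]).reshape(1, -1))[0])

    # Walk backwards: PD is the first layer such that this probe and all
    # deeper probes already output the final prediction.
    depth = num_layers  # fallback if even the last probe disagrees
    for l in reversed(range(num_layers)):
        if layer_preds[l] == final_pred:
            depth = l
        else:
            break
    return depth
```

Easy examples settle on the final prediction in early layers (small PD); hard examples keep changing their probe prediction until late layers (large PD).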
Some people say that one shouldn't care about publications and that only quality matters. However, the job market punishes those who don’t have publications in top ML venues. I empathize with students and newcomers to ML whose good papers are not getting accepted. #ICLR2021
1/
Long thread at the risk of being judged:
I just realized that in the last 6 years, 21 of my 24 papers have been accepted to a top ML conference on their FIRST submission, even though the majority of them were hastily written, borderline papers (not proud of this). How is this possible?
2/
At this point, I'm convinced that this cannot be explained by luck and the quality of the papers alone. My belief is that the current system has many unnecessary and sometimes harmful biases that are #unfair to newcomers and anyone who is outside of the "norm".
3/