In a new paper from @wightmanr et al., a vanilla ResNet-50 is retrained using a modern training procedure. It achieves a very competitive 80.4% top-1 accuracy on ImageNet without using extra data or distillation.
The paper catalogues the exact training settings to provide a robust baseline for future experiments:
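Recipes like these typically pair strong augmentation with a cosine learning-rate schedule and a linear warmup phase. As a rough illustration of that schedule (hyperparameter values here are illustrative, not the paper's exact settings):

```python
import math

def lr_at_epoch(epoch, total_epochs=600, warmup_epochs=5,
                base_lr=8e-3, min_lr=1e-6):
    """Cosine learning-rate decay with linear warmup.

    Values are illustrative placeholders, not the paper's
    exact hyperparameters.
    """
    if epoch < warmup_epochs:
        # Linear warmup: ramp from base_lr/warmup_epochs up to base_lr
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine decay from base_lr down to min_lr over the remaining epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The warmup avoids instability from a large initial learning rate, while the cosine decay smoothly anneals it over the long training run.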
It also reports training costs and inference times on ImageNet classification for other architectures trained with the proposed optimized ResNet-50 training procedure:
It evaluates transfer learning performance under the different proposed training procedures on a number of standard benchmarks. One of these procedures (A1) leads to the best overall performance on the downstream tasks:
🚨 Newsletter Issue #3. Featuring a new state-of-the-art on ImageNet, a trillion-parameter language model, 10 applications of transformers you didn’t know about, and much more! Read on below:
⏪ Papers with Code: Year in Review. We’re ending the year by taking a look back at the top trending papers, libraries and benchmarks for 2020. Read on below!