@l2k and @emilymbender dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and the #BenderRule.
They discuss 4 of Emily's papers ⬇️
1/5
"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" (Bender, Gebru et al. 2021)
Possible risks associated with bigger and bigger language models, and ways to mitigate those risks.
1. You can monitor how your models and hyperparameter choices are performing, including automatically tracking:
- Training and validation losses
- Precision, Recall, mAP@0.5, mAP@0.5:0.95
- Learning rate over time
2. Automatically tracked system metrics like GPU type, GPU utilization, power draw, temperature, and CUDA memory usage, along with host metrics like disk I/O, CPU utilization, and RAM usage. A minimal logging sketch follows after this list.
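The thread doesn't name a specific tool, so here is a minimal sketch assuming the Weights & Biases `wandb` client (`pip install wandb`). The project name, config values, and metric numbers are hypothetical stand-ins; a real training loop would compute the logged values from the model:

```python
"""Minimal experiment-tracking sketch (assumes the `wandb` client)."""
import random

import wandb

# Hypothetical project and config, for illustration only.
run = wandb.init(project="lm-eval-demo", config={"lr": 1e-3, "epochs": 3})

# Point 2 above: once the run starts, system metrics (GPU type and
# utilization, power, temperature, CUDA memory, disk I/O, CPU, RAM)
# are sampled in the background -- no extra code needed.

for epoch in range(run.config.epochs):
    # Stand-in values; replace with real losses/metrics from training.
    train_loss = 1.0 / (epoch + 1)
    val_loss = train_loss + random.uniform(0.0, 0.1)

    # Point 1 above: log losses, detection metrics, and the learning
    # rate each epoch so they chart over time in the run dashboard.
    wandb.log({
        "train/loss": train_loss,
        "val/loss": val_loss,
        "metrics/precision": 0.90 - val_loss / 10,
        "metrics/recall": 0.85 - val_loss / 10,
        "metrics/mAP_0.5": 0.80 - val_loss / 10,
        "metrics/mAP_0.5:0.95": 0.60 - val_loss / 10,
        "lr": run.config.lr,
    })

run.finish()
```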