Weights & Biases
The developer-first MLOps platform, built for practitioners by practitioners.
Feb 10, 2022 • 5 tweets • 4 min read
💬 The world of GPT-3

In this episode of Gradient Dissent, @npew, @borisdayma, and @l2k talk about:

✏️ Applying GPT-3 to translation and copywriting
πŸ’ͺ The performance benefits of fine-tuning GPT-3
πŸ’» Developing the @OpenAI API

(Bonus topic: The story behind the @OpenAI and @weights_biases collaboration 🪄🐝)
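
Following up on the fine-tuning topic above: a minimal sketch, not from the episode itself, of what trying GPT-3 on a translation prompt and kicking off a fine-tune looked like with the older (pre-1.0) openai Python client. The training file, prompt, and model choices are illustrative placeholders.

```python
# Sketch only: older (pre-1.0) openai client conventions; names are placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# (a) Zero-shot translation/copywriting with a base GPT-3 completion model.
completion = openai.Completion.create(
    engine="davinci",  # assumed base model choice
    prompt="Translate to French: Where is the nearest train station?",
    max_tokens=60,
)
print(completion["choices"][0]["text"].strip())

# (b) Fine-tuning: upload prompt/completion pairs, then start a fine-tune job.
upload = openai.File.create(
    file=open("copywriting_examples.jsonl", "rb"),  # hypothetical training file
    purpose="fine-tune",
)
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print(job["id"], job["status"])
```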

Dec 4, 2021 • 5 tweets • 3 min read
🏆 🥳🕺🏽 We're thrilled to announce the first winners of #27DaysOfJAX

🏅 @rareblog, for running JAX on a Jetson Nano & writing about Spiking NNs in JAX

🏅 @imPGulati, for writing his first blog ever & contributing

🏅 @dionhaefner, for 🤯 🤯 🤯 the great read!

Blogs in the 🧵👇
Sep 10, 2021 • 11 tweets • 5 min read
We're grateful that deeply understanding and improving datasets is acknowledged as a non-negotiable part of training pipelines 🙏

However, exploring datasets can be cumbersome, and documenting and sharing findings is often messy

W&B Tables fixes this 👇

W&B Tables enables quick and powerful exploration of image, video, audio, tabular, molecule and NLP datasets.

@metaphdor used Tables to explore 100k rows from @Reddit's Go Emotions dataset:

📺:

🛑 First, filtering for multiple column values
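
For readers who want to try this themselves, here is a minimal sketch (with assumed project and column names, not @metaphdor's actual notebook) of logging a small text dataset as a W&B Table:

```python
# Minimal sketch: log a few text examples as a W&B Table.
# The project name, columns, and rows below are illustrative assumptions.
import wandb

run = wandb.init(project="go-emotions-exploration")

# One row per example; real usage would loop over the full dataset.
table = wandb.Table(columns=["text", "label", "label_confidence"])
table.add_data("I love this!", "joy", 0.93)
table.add_data("That was disappointing.", "sadness", 0.71)

# The logged Table shows up in the run's workspace, where rows can be
# filtered, grouped, and sorted by any column.
run.log({"dataset_sample": table})
run.finish()
```

Once logged, the Table can be opened in the W&B UI and filtered on multiple column values, which is what the thread walks through next.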
Sep 9, 2021 • 5 tweets • 3 min read
New podcast episode! 📢

@l2k and @emilymbender dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and the #BenderRule.

🎥:

They discuss 4 of Emily's papers ⬇️

1/5 "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" (Bender, Gebru et al. 2021)

Possible risks associated with bigger and bigger language models, and ways to mitigate those risks.

dl.acm.org/doi/pdf/10.114…

2/5
Nov 5, 2020 • 7 tweets • 4 min read
We're thrilled to announce that YOLOv5 now comes with @weights_biases baked in!

With no additional lines of code, you now get automatic bounding box debugging, GPU usage and performance metrics, reproducible models, & more!

👩‍🚀 Try it → colab.research.google.com/github/ultraly…

#deeplearning

What does YOLOv5 + W&B give you?

1. You can monitor how your models and hyperparameters are performing, including automatically tracking:

- Training and validation losses
- Precision, Recall, mAP@0.5, mAP@0.5:0.95
- Learning Rate over time
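
The integration logs all of this automatically once wandb is installed, so the sketch below is only an illustration of the kind of per-epoch wandb.log() calls that produce those charts; the metric names and values here are made up, not YOLOv5's exact keys.

```python
# Illustration only: hand-rolled logging of the metrics listed above.
# Metric names and values are invented for the sketch.
import wandb

run = wandb.init(project="yolov5-demo")  # hypothetical project name

for epoch in range(3):
    run.log({
        "epoch": epoch,
        "train/box_loss": 0.05 - 0.01 * epoch,      # training loss
        "val/box_loss": 0.06 - 0.01 * epoch,        # validation loss
        "metrics/precision": 0.60 + 0.05 * epoch,
        "metrics/recall": 0.55 + 0.05 * epoch,
        "metrics/mAP_0.5": 0.50 + 0.05 * epoch,
        "metrics/mAP_0.5:0.95": 0.30 + 0.05 * epoch,
        "lr": 0.01 * (0.9 ** epoch),                # learning rate over time
    })

run.finish()
```

In the actual integration these calls happen inside YOLOv5's training script, which is why no extra code is needed on the user's side.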