What actually happens when you call .backward() in @PyTorch?
Autograd goodness 🪄!
PyTorch keeps track of every computation you’ve done on each of your tensors, and .backward() triggers it to compute the gradients and store them in .grad.
1/3
You can see the gradient functions by looking at .grad_fn of your tensors after each computation.
You can see the entire graph by looking at .next_functions recursively.
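The mechanics above can be sketched in a few lines (the tensors and the little graph-walking helper here are illustrative, not from the thread):

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2; autograd records each op
y.backward()         # compute dy/dx and store it in x.grad
print(x.grad)        # tensor([4., 6.]) since dy/dx = 2x
print(y.grad_fn)     # the node that produced y, e.g. SumBackward0

# Walk the whole graph by following .next_functions recursively
def walk(fn, depth=0):
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in getattr(fn, "next_functions", []):
        walk(next_fn, depth + 1)

walk(y.grad_fn)  # SumBackward0 -> PowBackward0 -> AccumulateGrad
```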
Hyperparameter Search for @huggingface transformers models 🐝🤗
For the @weights_biases blogathon, @matteopilotto24 created this blog post showcasing how to run hyperparameter sweeps on HF transformers models using W&B Sweeps.
The plot above shows how each of your experiments performed on the task.
It shows the relationship of the different hyperparameters to the metric you care about.
W&B Sweeps automatically generates this plot as well as this parameter importance plot below: 2/7
First, he enables the W&B integration by logging into his account and setting a few environment variables that tell HF to use W&B to track his experiments.
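A minimal sketch of that setup (the project name is hypothetical, not from the post):

```python
import os

# Tell the HF Trainer which W&B project to log runs to
os.environ["WANDB_PROJECT"] = "hf-sweep-demo"  # hypothetical project name
# Optionally save the trained model to W&B as well
os.environ["WANDB_LOG_MODEL"] = "true"

# Then point the Trainer at W&B when building TrainingArguments:
# args = TrainingArguments(output_dir="out", report_to="wandb", ...)
```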
Before each of these livecoding sessions, I was tasked with watching the associated Math4ML lesson. I then joined @charles_irl and worked through the autograded exercise notebooks with him.
This was a humbling experience and a lot of fun. Charles has clearly crafted these lessons with a lot of love over many years so it was a real joy to help in a small way. He’s also a great teacher 👨🏼‍🏫 (sorry @charles_irl, there’s no green-haired teacher emoji).
What is data lineage and why is it important when building ML systems?
From @chipro’s new book, Designing Machine Learning Systems: 1/5
Data lineage is the practice of tracking the origin of your data and its versions over time.
This is important if your data changes and you want to know which model was trained on which data, and how model performance was affected.
2/5
You could track data versions yourself, but it'll likely be as error-prone as "model_latest_latest_actual_latest_2021.pth" is for tracking models.
@weights_biases Artifacts is one way you can track the data you used to train your models with a few lines of code. 3/5
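A minimal sketch of what those few lines look like (the project and artifact names are illustrative; assumes wandb is installed and you're logged in):

```python
def log_training_data(path: str):
    """Version a dataset file as a W&B Artifact."""
    import wandb  # assumes `pip install wandb` and `wandb login`

    run = wandb.init(project="lineage-demo")  # hypothetical project name
    artifact = wandb.Artifact("training-data", type="dataset")
    artifact.add_file(path)       # W&B checksums and versions the file
    run.log_artifact(artifact)    # creates training-data:v0, v1, ...
    run.finish()

# Later, a training run can record exactly which version it consumed:
# run.use_artifact("training-data:latest")
```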
Given a video and a rough segmentation mask for each object, the model in this paper learns to associate one or more objects with the effects they have on their environment (shadows, reflections, etc.). This enables video effects like "background replacement" 2/5
and "color pop" and a "stroboscopic" effect (in the next tweet): 3/5