New video series out this week (and into next!) on the @weights_biases YouTube channel.
They're Socratic livecoding sessions where @_ScottCondron and I work through the exercise notebooks for the Math4ML class.
Details in 🧵⤵️
Socratic: following an ancient academic tradition, I try to trick @_ScottCondron into being wrong, so that students can learn from mistakes and see their learning process reflected in the content.
(I was inspired to try this style by the @PyTorchLightnin Master Class series, in which @_willfalcon and @alfcnz talk through the nitty-gritty of DL with PyTorch+Lightning while writing code. Strong recommend!)
Math4ML: in the class, we cover core ideas from linear algebra, calculus, and probability that are useful in ML.
I try to emphasize the key intuitions and connect them to programming ideas: shapes are like types, limits are like big-O notation, etc.
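To make the "shapes are like types" intuition concrete, here's a minimal sketch (mine, not from the course materials) of a shape check playing the role of a type check:

```python
import numpy as np

def matvec(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply an (m, n) matrix by an (n,) vector, checking shapes first.

    The shape check plays the same role a type checker would:
    it rejects ill-formed inputs before any computation runs.
    """
    m, n = A.shape
    if x.shape != (n,):
        raise TypeError(f"shape mismatch: A is {A.shape}, x is {x.shape}")
    return A @ x

A = np.ones((3, 2))
print(matvec(A, np.ones(2)))   # shapes "type-check", so this runs
# matvec(A, np.ones(3))        # would raise, like a compile-time type error
```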
Exercise notebooks: the M4ML class has always included GitHub-backed Colab/Binder notebooks with text and exercises that firm up the ideas from the lectures, but there wasn't any public video content explaining how to use them. Until now!
Livecoding: the exercises are code, and we write the solutions together live (with light editing to remove typos, etc.).
Because the exercises are code, they can be graded programmatically.
Essentially, each comes with unit tests that you have to pass. Failures generate hints!
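To give a flavor of the mechanism, here's a toy sketch I made up for this thread (not the actual grader from the notebooks):

```python
def autograde(student_dot):
    """Toy autograder: run unit tests against a student's dot-product function
    and print a hint for the first failure. Tests and hints are invented here."""
    tests = [
        (([1, 2, 3], [4, 5, 6]), 32, "Remember to sum the elementwise products."),
        (([0, 0], [1, 1]), 0, "Dotting with the zero vector should give 0."),
    ]
    for (u, v), expected, hint in tests:
        got = student_dot(u, v)
        if got != expected:
            print(f"FAIL: dot({u}, {v}) = {got}, expected {expected}. Hint: {hint}")
            return False
    print("All tests passed!")
    return True

# A correct submission passes; a buggy one triggers the hint instead.
autograde(lambda u, v: sum(ui * vi for ui, vi in zip(u, v)))
```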
This course material is designed for remote, asynchronous online education.
The combination of video lectures, recorded homework sessions, and self-grading exercises is meant to make it possible to get the full benefit of the course asynchronously via the internet.
And if you have questions that the videos and autograder can't answer, you can post about them on the YouTube channel or in the W&B forum: wandb.me/and-you
The first video will be out tomorrow! I hope to see you there.
I think programming GPUs is too hard. Part of the problem is sprawling, scattered documentation & best practices.
Over the past few months, we’ve been working to solve that problem, putting together a “Rosetta Stone” GPU Glossary.
And now it’s live!
My take-aways in thread.
The heart of the CUDA stack, IMO, is not anything named CUDA: it’s the humble Parallel Thread eXecution (PTX) instruction set architecture, the compilation target of the CUDA compiler and the only stable interface to GPU hardware.
This is obvious in hindsight. The ISA is where machines make contact with programs, and it fundamentally divides the responsibilities of hardware engineers from those of software engineers. That holds, in a way, even for a virtual ISA like PTX.
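If you want to look at PTX yourself, one low-friction route (my example, not part of the glossary; assumes Numba and the CUDA toolkit are installed) is to lower a toy kernel with Numba:

```python
from numba import cuda, float32

def axpy(out, a, x, y):
    i = cuda.grid(1)          # global thread index
    if i < out.size:
        out[i] = a * x[i] + y[i]

# compile_ptx lowers the Python kernel to PTX, the same virtual ISA that
# nvcc and NVRTC target; you can inspect it without launching anything.
ptx, _ = cuda.compile_ptx(axpy, (float32[:], float32, float32[:], float32[:]))
print(ptx)   # entry point, virtual registers, loads/stores, etc.
```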
Last week @brad19brown, @jordanjuravsky, & co-authors released a paper on inference-time scaling laws that enable small LMs to beat the big boys.
So this weekend, @HowardHalim & I dropped everything to run their analysis on a new model + new data.
Success 😎
Why this matters:
Details of our work and repro code on the Modal blog.
All you need are @modal_labs and @huggingface credentials! And it's free: it fits within Modal's $30/month free tier.
modal.com/blog/llama-hum…
First: we are bad at using language models.
They are statistical models of Unicode sequences. We know that sequential sampling is hard, but (driven by the economics of inference service providers) we ignore that and typically sample just a single sequence from the LM, greedily.
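Here's a minimal sketch of the difference, using Hugging Face transformers (the model name and sampling parameters are placeholders, not the setup from the paper or our repro):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # stand-in model, chosen only so the sketch runs anywhere
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tok("Q: What is 17 * 24?\nA:", return_tensors="pt")

# One greedy sequence: what most inference APIs hand you by default.
greedy = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# Many independent samples: the raw material for coverage / best-of-k selection.
samples = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.8,
    num_return_sequences=16,
)

print(tok.decode(greedy[0], skip_special_tokens=True))
for seq in samples:
    print(tok.decode(seq, skip_special_tokens=True))
```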
I had a delightful session talking through the paper "In-Context Learning and Induction Heads" with author @NeelNanda5.
It's part of a long research thread, one of my favorites over the last five years, on "reverse engineering" DNNs.
The core claim of the paper is that a large fraction of the in-context learning behavior that makes contemporary transformer LLMs so effective comes from a surprisingly simple type of circuit they call an _induction head_.
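As a toy illustration of the pattern an induction head is claimed to implement (find an earlier occurrence of the current token, then predict whatever followed it), here's a plain-Python caricature; the real circuit is two attention heads composing, not a lookup:

```python
def induction_prediction(tokens):
    """If the context contains ...[A][B]... and the current token is [A],
    predict [B]. A caricature of the induction-head rule, not of how the
    transformer actually computes it."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # scan backwards for a prior [A]
        if tokens[i] == current:
            return tokens[i + 1]               # copy the token that followed it
    return None                                # no earlier occurrence, no guess

# "the cat sat on the ..." -> an induction head pushes probability toward "cat"
print(induction_prediction(["the", "cat", "sat", "on", "the"]))  # -> "cat"
```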
In the video, Neel and I talk through the context of this claim and some of the phenomenological evidence for it.
In the process, I was delighted to discover that we share a deep love for the natural sciences and a perspective informed by them.
I recently presented a series of four reports, spanning 40 years of work on system failure, ranging from a 1985 typewritten white paper on mainframe database crashes to a 2021 Zoom talk on outages in one of Google's ML-based ranking systems.
Here's a summary, with connections to reliable ML.
Each report was a post-hoc meta-analysis of post-mortem analyses: which "root causes" come up most often? Which take the most time to resolve?
Each captures 100 or more outages from a system operating at the largest scale, using the best practices of its era & modality.
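Mechanically, each of these meta-analyses boils down to a tally like the one below (the records here are invented, purely to show the shape of the computation):

```python
from collections import Counter
from statistics import median

# Invented postmortem records, just to illustrate the shape of the analysis.
postmortems = [
    {"root_cause": "operator error", "hours_to_resolve": 4},
    {"root_cause": "software bug",   "hours_to_resolve": 9},
    {"root_cause": "software bug",   "hours_to_resolve": 2},
    {"root_cause": "hardware fault", "hours_to_resolve": 1},
]

# Which "root causes" come up most often?
counts = Counter(p["root_cause"] for p in postmortems)
print(counts.most_common())

# Which take the most time to resolve?
for cause in counts:
    hours = [p["hours_to_resolve"] for p in postmortems if p["root_cause"] == cause]
    print(cause, median(hours))
```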
"Why Do Computers Stop" was the first in the series, by Jim Gray (standing, center), who pioneered transactional databases and the ACID principle in 80s.
It's clear that these ideas were informed by his close engagement with actual failure data.