On #TutorialTuesdays we revisit resources to power your PyTorch learning journey. This week’s beginner-level, text-based lesson comes in two parts — Datasets & DataLoaders and Transforms. Read on for highlights of what you’ll learn: 🧵👇bit.ly/3D8cmyy
1/4
In the PyTorch Datasets & DataLoaders tutorial: code for processing data samples can get messy and hard to maintain. Keeping dataset code decoupled from model-training code improves readability & modularity. Learn in 5 steps.
2/4
In this PyTorch Datasets & DataLoaders tutorial, learn about datasets in 5 steps: 1) Loading a dataset 2) Iterating & visualizing 3) Creating a custom dataset 4) Preparing your data for training with DataLoaders 5) Iterating through the DataLoader
3/4
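A minimal sketch of steps 3–5 with a toy in-memory dataset (the class name and random data below are made up for illustration; the tutorial itself works with FashionMNIST and an annotations file):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RandomImageDataset(Dataset):
    """Toy custom dataset: random 28x28 'images' with integer labels."""
    def __init__(self, num_samples=64, num_classes=10):
        self.images = torch.randn(num_samples, 1, 28, 28)
        self.labels = torch.randint(0, num_classes, (num_samples,))

    def __len__(self):
        # Number of samples in the dataset
        return len(self.labels)

    def __getitem__(self, idx):
        # Return one (sample, label) pair
        return self.images[idx], self.labels[idx]

# Wrap the dataset in a DataLoader for batching and shuffling
loader = DataLoader(RandomImageDataset(), batch_size=8, shuffle=True)

# Iterate through the DataLoader as you would in a training loop
for batch_images, batch_labels in loader:
    print(batch_images.shape, batch_labels.shape)  # [8, 1, 28, 28] and [8]
    break
```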
In the PyTorch Transforms tutorial: data doesn’t always come in the final processed form required to train machine learning algorithms. Use transforms to manipulate the data and make it suitable for training. Learn how: bit.ly/3gz9YJz
4/4
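A small sketch of the two transforms the tutorial leans on, `ToTensor` and `Lambda`, applied here to a fake NumPy image and an integer label (torchvision is assumed to be installed; the fake data is only for illustration):

```python
import numpy as np
import torch
from torchvision import transforms

# ToTensor converts a PIL Image or NumPy array (H x W x C, uint8)
# into a float tensor (C x H x W) scaled to [0.0, 1.0].
to_tensor = transforms.ToTensor()

# Lambda wraps any user-defined callable; here it turns an integer
# label into a one-hot vector of length 10.
one_hot = transforms.Lambda(
    lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
)

fake_image = np.random.randint(0, 256, size=(28, 28, 1), dtype=np.uint8)
print(to_tensor(fake_image).shape)  # torch.Size([1, 28, 28])
print(one_hot(3))                   # 1.0 at index 3, zeros elsewhere
```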
ICYMI: PyTorch 1.10 was released last Thursday. Here are some highlights of the release.
Stay tuned for tweet threads in the next couple weeks delving deeper into these cool new features!
1/9
CUDA Graphs are now in beta. They let you capture static CUDA workloads once and replay them as a single unit, skipping per-kernel launch overhead for massive overhead reductions! Our integration allows for seamless interop between CUDA graphs and the rest of your model.
2/9
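A rough sketch of the capture-and-replay pattern from the `torch.cuda.graph` docs (requires a CUDA-capable GPU; the tiny `Linear` model and shapes are placeholders):

```python
import torch

# CUDA graph capture needs a GPU; this sketch skips error handling.
assert torch.cuda.is_available()

model = torch.nn.Linear(128, 64).cuda()
static_input = torch.randn(32, 128, device="cuda")

# Warm up on a side stream before capture, as the docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass into a CUDA graph.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

# Replay: copy new data into the static input buffer, then relaunch
# the whole captured workload with a single call.
static_input.copy_(torch.randn(32, 128, device="cuda"))
g.replay()
print(static_output[0, :4])  # results of the replayed forward pass
```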
FX, an easy-to-use Python platform for writing Python-to-Python transforms of PyTorch programs, is now stable. FX makes it easy to programmatically do things like fusing convolution w/ batch norm. Stay tuned for some FX examples of cool things that users have built!
3/9
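A toy FX transform using the standard `torch.fx` API (`symbolic_trace` plus graph editing); the relu→sigmoid swap below is just an illustrative stand-in for a real pass like conv/batch-norm fusion:

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# symbolic_trace records the module's operations into an editable Graph
traced = fx.symbolic_trace(M())

# A Python-to-Python transform: swap every call to torch.relu for torch.sigmoid
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target == torch.relu:
        node.target = torch.sigmoid
traced.recompile()  # regenerate the module's Python code from the edited graph

print(traced.code)                        # the generated forward()
print(traced(torch.tensor([-1.0, 2.0])))  # now runs sigmoid instead of relu
```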
✨ Low Numerical Precision in PyTorch ✨
Most DL models use single-precision (FP32) floats by default.
Lower numerical precision - while reasonably maintaining accuracy - reduces:
a) model size
b) memory required
c) power consumed
Thread about lower precision DL in PyTorch ->
1/11
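A quick sketch of (a) and (b): the same `Linear` layer in FP32 vs FP16 takes roughly half the parameter memory (the layer size here is arbitrary):

```python
import torch

# Same model in single precision vs half precision: parameter memory
# roughly halves, one of the reductions listed above.
model_fp32 = torch.nn.Linear(1024, 1024)
model_fp16 = torch.nn.Linear(1024, 1024).half()

def param_bytes(m):
    # Total bytes held by the model's parameters
    return sum(p.nelement() * p.element_size() for p in m.parameters())

print(param_bytes(model_fp32))  # ~4.2 MB (4 bytes per float32 element)
print(param_bytes(model_fp16))  # ~2.1 MB (2 bytes per float16 element)
```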
Lower precision speeds up:
* compute-bound operations, because lower-precision arithmetic is cheaper for the hardware
* memory bandwidth-bound operations, by moving less data
In many deep models, memory access dominates power consumption; reducing memory I/O makes models more energy efficient.
2/11
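One common way to get these speedups is automatic mixed precision via `torch.autocast` (added as a device-generic API in 1.10); a minimal sketch, with a small placeholder model:

```python
import torch

# Under autocast, matmuls/convolutions run in float16 (or bfloat16),
# while numerically sensitive ops stay in float32.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(64, 512, device=device)

# On CUDA, autocast defaults to float16; on CPU, to bfloat16.
with torch.autocast(device_type=device):
    y = model(x)

print(y.dtype)  # torch.float16 on CUDA, torch.bfloat16 on CPU
```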
3 lower precision datatypes are typically used in PyTorch:
* FP16 or half-precision (`torch.float16`)
* BF16 (`torch.bfloat16`)
* INT8 (`torch.quint8` and `torch.qint8`), which stores floats in a quantized format
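A small sketch constructing each of the three (the scale/zero_point for the quantized tensor are arbitrary illustration values):

```python
import torch

# FP16 and BF16: just construct (or cast) with the dtype.
half = torch.randn(4, dtype=torch.float16)
bf16 = torch.randn(4, dtype=torch.bfloat16)

# INT8 quantized: floats are stored as 8-bit integers plus a
# scale/zero_point used to map them back to real values.
x = torch.randn(4)
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)

print(half.dtype, bf16.dtype, q.dtype)  # torch.float16 torch.bfloat16 torch.qint8
print(q.int_repr())                     # the underlying int8 storage
print(q.dequantize())                   # approximate float values
```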
Want to make your inference code in PyTorch run faster? Here’s a quick thread on doing exactly that.
1. Replace torch.no_grad() with the ✨torch.inference_mode()✨ context manager.
2. ⏩ inference_mode() is torch.no_grad() on steroids
While NoGrad excludes operations from being tracked by Autograd, InferenceMode takes that two steps further, potentially speeding up your code (YMMV depending on model complexity and hardware).
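A minimal before/after sketch (the `Linear` model and shapes are arbitrary):

```python
import torch

model = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)

# Before: disable gradient tracking with no_grad
with torch.no_grad():
    out_nograd = model(x)

# After: inference_mode additionally skips version counting and
# metadata tracking on the tensors it creates
with torch.inference_mode():
    out_inf = model(x)

print(out_nograd.requires_grad, out_inf.requires_grad)  # False False
```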
3. ⏩ InferenceMode reduces overheads by disabling two Autograd mechanisms - version counting and metadata tracking - on all tensors created inside the context ("inference tensors").
Disabled mechanisms mean inference tensors have some restrictions in how they can be used 👇
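A sketch of one such restriction: using an inference tensor in a computation that autograd needs to record raises a RuntimeError (the usual workaround is to clone() it into a normal tensor first):

```python
import torch

with torch.inference_mode():
    t = torch.ones(3)  # an "inference tensor"

# Autograd cannot save inference tensors for backward, so mixing one
# into a gradient-tracked computation raises instead of silently
# producing wrong gradients.
w = torch.ones(3, requires_grad=True)
try:
    (t * w).sum().backward()
except RuntimeError as e:
    print("RuntimeError:", e)
```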