PyTorch
Dec 2 · 5 tweets · 2 min read
We just announced PyTorch 2.0 at the #PyTorchConference, introducing torch.compile!

Available in the nightlies today; stable release in early March 2023.

Read the full post: bit.ly/3VNysOA

🧵below!

1/5
PyTorch 2.0 introduces torch.compile, a compiled mode that accelerates your model without requiring any changes to your model code. On 163 open-source models spanning vision, NLP, and other domains, we found that 2.0 speeds up training by 38-76%.

2/5
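A minimal sketch of the new API, assuming a PyTorch 2.0 nightly build (the toy model below is purely illustrative):

```python
import torch

# Any ordinary nn.Module works; this toy model is just for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# One line, no changes to the model code: torch.compile returns an optimized callable.
compiled_model = torch.compile(model)

x = torch.randn(32, 128)
out = compiled_model(x)  # the first call triggers compilation; later calls reuse it
print(out.shape)         # torch.Size([32, 10])
```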
PyTorch 2.0 is *100%* backward-compatible.
The codebase is the same, the APIs are the same, and the way you write models is the same.
We are calling it 2.0 because it marks the addition of a significant new set of features.

3/5
New components in PyTorch 2.0:

- TorchDynamo generates FX graphs from Python bytecode analysis
- AOTAutograd generates backward graphs ahead of time
- PrimTorch defines a small, canonical operator set to make writing backends easier
- TorchInductor is a deep learning compiler powered by OpenAI Triton

4/5
Keep an eye out for more news about the PyTorch 2.0 release in the coming months. In the meantime, you can read more about the technology, the benchmarks, and how to get started here: bit.ly/3VNysOA

#PyTorchConference

5/5


More from @PyTorch

Oct 25
On #TutorialTuesdays we revisit resources to power your PyTorch learning journey. This week's beginner-basics, text-based lesson comes in two parts: Datasets & DataLoaders, and Transforms. Read on for highlights of what you'll learn: 🧵👇bit.ly/3D8cmyy

1/4
In the PyTorch Datasets & DataLoaders tutorial: code for processing data samples needs to be organized, and dataset code should be decoupled from model-training code for better readability and modularity. Learn it in 5 steps.

2/4
In the PyTorch Datasets & DataLoaders tutorial, you'll cover:
1) Loading a dataset
2) Iterating over and visualizing a dataset
3) Creating a custom dataset
4) Preparing your data for training with DataLoaders
5) Iterating through the DataLoader

Give it a try: bit.ly/3D8cmyy

3/4
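As a companion to the tutorial, here is a minimal sketch of a custom Dataset paired with a DataLoader (the synthetic tensors are purely illustrative):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# A tiny custom Dataset: __len__ reports the size, __getitem__ returns one sample.
class ToyDataset(Dataset):
    def __init__(self, n=100):
        self.features = torch.randn(n, 8)
        self.labels = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# The DataLoader handles batching, shuffling, and (optionally) parallel loading.
loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)

for features, labels in loader:
    print(features.shape, labels.shape)  # torch.Size([16, 8]) torch.Size([16])
    break
```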
Nov 16, 2021
Get ready for PyTorch Developer Day on December 1-2, 2021! We’ve got an amazing lineup of speakers for you on Day 1.

And don’t forget to register for #PTD2 Day 2 taking place on Gather.Town: pytorchdeveloperday.fbreg.com/apply

Check the thread below to see the speakers ⬇️
1/10
🎙Keynote Speakers🎙
1. Lin Qiao - Engineering Director @MetaAI
2. @DougalMaclaurin - Sr. Research Scientist @Google
3. Philippe Tillet - Member of Technical Staff @OpenAI
4. @dwarak - Engineering Director @MetaAI
5. @dzhulgakov - Software Engineer @MetaAI

2/10
🎙Research🎙
1. Vitaly Fedyunin - Software Engineer @MetaAI
2. @mikeruberry - Software Engineer @MetaAI
3. Richard Zou + @cHHillee - Software Engineers @MetaAI
4. Yanli Zhao - Software Engineer @MetaAI

cont'd below

3/10
Oct 25, 2021
ICYMI: PyTorch 1.10 was released last Thursday. Here are some highlights of the release.

Stay tuned for tweet threads in the next couple weeks delving deeper into these cool new features!

1/8
CUDA Graphs are now in beta. They let you capture a static CUDA workload once and replay it without re-launching each kernel from the CPU, leading to massive overhead reductions! Our integration allows seamless interop between CUDA graphs and the rest of your model.

2/9
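A minimal sketch of the capture/replay workflow, assuming a CUDA-capable GPU (the toy model and buffer shapes are illustrative):

```python
import torch

# Toy model and a fixed-shape "static" input buffer (CUDA graphs need static shapes/addresses).
model = torch.nn.Linear(64, 64).cuda().eval()
static_input = torch.randn(8, 64, device="cuda")

# Warm up on a side stream before capture, as the docs recommend.
side = torch.cuda.Stream()
side.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(side), torch.no_grad():
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(side)

# Capture one inference forward pass into a CUDA graph.
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph), torch.no_grad():
    static_output = model(static_input)

# Replay: copy fresh data into the captured input buffer, then replay with no kernel relaunches.
static_input.copy_(torch.randn(8, 64, device="cuda"))
graph.replay()
print(static_output.shape)  # torch.Size([8, 64])
```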
FX, an easy-to-use Python platform for writing Python-to-Python transforms of PyTorch programs, is now stable. FX makes it easy to do things like programmatically fusing a convolution with batch norm. Stay tuned for FX examples of cool things that users have built!

3/9
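For a feel of the API, here is a minimal sketch of symbolically tracing a module with FX (the module itself is just illustrative):

```python
import torch
import torch.fx as fx

class MyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# symbolic_trace records the forward pass into a GraphModule whose graph
# can be inspected, transformed, and regenerated as Python code.
traced = fx.symbolic_trace(MyModule())
print(traced.graph)  # the captured intermediate representation
print(traced.code)   # Python code regenerated from the graph
```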
Oct 19, 2021
✨ Low Numerical Precision in PyTorch ✨
Most DL models use single-precision (FP32) floats by default.
Lower numerical precision - while reasonably maintaining accuracy - reduces:

a) model size
b) memory required
c) power consumed

Thread about lower precision DL in PyTorch ->
1/11
Lower precision speeds up:

* compute-bound operations, by reducing load on the hardware

* memory bandwidth-bound operations, by accessing smaller data

In many deep models, memory access dominates power consumption; reducing memory I/O makes models more energy efficient.

2/11
Three lower-precision datatypes are typically used in PyTorch:

* FP16 or half-precision (`torch.float16`)

* BF16 (`torch.bfloat16`)

* INT8 (`torch.quint8` and `torch.qint8`), which stores floats in a quantized format

3/11
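A minimal sketch of mixed-precision training with torch.cuda.amp as it stood around this release, assuming a CUDA GPU (the model and data are purely illustrative):

```python
import torch

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(32, 128, device="cuda")
target = torch.randint(0, 10, (32,), device="cuda")

for _ in range(3):
    optimizer.zero_grad()
    # Ops inside autocast run in lower precision (FP16 here) where it is numerically safe.
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.cross_entropy(model(data), target)
    scaler.scale(loss).backward()  # scale the loss so FP16 gradients don't underflow
    scaler.step(optimizer)
    scaler.update()
```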
Sep 14, 2021
Want to make your inference code in PyTorch run faster? Here’s a quick thread on doing exactly that.

1. Replace torch.no_grad() with the ✨torch.inference_mode()✨ context manager.
2. ⏩ inference_mode() is torch.no_grad() on steroids

While NoGrad excludes operations from being tracked by Autograd, InferenceMode takes that two steps further, potentially speeding up your code (YMMV depending on model complexity and hardware).
3. ⏩ InferenceMode reduces overheads by disabling two Autograd mechanisms - version counting and metadata tracking - on all tensors created inside the context ("inference tensors").

Disabled mechanisms mean inference tensors have some restrictions in how they can be used 👇
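A minimal sketch of the swap (the tiny model is illustrative):

```python
import torch

model = torch.nn.Linear(16, 4)
x = torch.randn(2, 16)

# Instead of torch.no_grad(), wrap pure-inference code in inference_mode().
with torch.inference_mode():
    out = model(x)

print(out.requires_grad)  # False
# Tensors created inside the context are "inference tensors": they skip version
# counting and view tracking, so they cannot later be used in autograd-recorded ops.
```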