This architecture has superseded RNNs for NLP and is likely to do the same to CNNs for vision.
PyTorch has provided Transformer modules since version 1.2, but the docs are lacking:
- No explanation of inference
- Tutorial is encoder-only
3/N Our notebook shows both. Let's get started with simple data.
Our output will be number sequences like [2, 5, 3].
Our input will be the same as the output, but with each element repeated twice, e.g. [2, 2, 5, 5, 3, 3].
We start each sequence with 0 and end each sequence with 1.
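The toy data described above can be generated in a few lines of plain Python. (This is a sketch; the function name and vocabulary range here are illustrative, not necessarily the notebook's exact code.)

```python
import random

BOS, EOS = 0, 1  # every sequence starts with 0 and ends with 1

def make_example(length=3, vocab=range(2, 10)):
    """Build one (input, output) pair: the output is a random token
    sequence; the input repeats each of its tokens twice. Both are
    wrapped with the start token 0 and end token 1."""
    tokens = [random.choice(list(vocab)) for _ in range(length)]
    output = [BOS] + tokens + [EOS]
    inputs = [BOS] + [t for t in tokens for _ in (0, 1)] + [EOS]
    return inputs, output
```

For example, `make_example()` might return `([0, 2, 2, 5, 5, 3, 3, 1], [0, 2, 5, 3, 1])`.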
4/N We do the simplest possible thing to wrap this data with a PyTorch DataLoader, which will handle batching, shuffling, and pre-fetching.
5/N We now define our Transformer, making use of built-in PyTorch modules.
6/N The forward() method encodes the input, and then decodes the input and the output together, where the output is partially masked to prevent "peeking" forward.
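The "peeking" mask is a causal (subsequent) mask: position i may attend only to positions at or before i. PyTorch builds this for you via `nn.Transformer.generate_square_subsequent_mask`; here is a pure-Python sketch of the same idea:

```python
def subsequent_mask(size):
    """Additive attention mask: 0.0 where attention is allowed
    (the query position and everything before it), -inf where it
    is blocked (future positions). It is added to the attention
    scores before the softmax, zeroing out future probabilities."""
    neg_inf = float("-inf")
    return [[0.0 if j <= i else neg_inf for j in range(size)]
            for i in range(size)]
```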
7/N With some PyTorch-Lightning boilerplate, we're ready to train on any number of GPUs/TPUs.
Note the "teacher forcing", where the ground truth is fed into the model shifted by one token.
Training on this toy data finishes quickly with 100% validation accuracy.
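The teacher-forcing shift above can be sketched in two lines (a minimal illustration, not the notebook's exact code):

```python
def teacher_forcing_split(target):
    """Shift the ground-truth sequence by one position: the decoder
    receives target[:-1] as input and is trained to predict
    target[1:] at every step, so each position learns to produce
    the *next* token."""
    decoder_input = target[:-1]
    decoder_labels = target[1:]
    return decoder_input, decoder_labels
```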
8/N To calculate accuracy, we need to implement greedy decoding.
This is where the input is used to generate output tokens one at a time. In our case, we use greedy selection, but beam search can be used instead for a potential accuracy boost.
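Greedy decoding can be sketched independently of any particular model: feed the tokens generated so far, take the argmax of the next-token scores, and stop at the end token. Here `next_token_scores` is a hypothetical stand-in for a real model's decoder call:

```python
def greedy_decode(next_token_scores, bos=0, eos=1, max_len=20):
    """Generate one token at a time, always picking the
    highest-scoring next token, until EOS or max_len."""
    seq = [bos]
    for _ in range(max_len):
        scores = next_token_scores(seq)  # one score per vocab id
        nxt = max(range(len(scores)), key=scores.__getitem__)
        seq.append(nxt)
        if nxt == eos:
            break
    return seq
```

With a trained Transformer, `next_token_scores` would run the encoded input plus the generated prefix through the decoder; beam search replaces the single argmax with the top-k partial sequences.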
9/N And that's all there is to it!
Hope the notebook is useful.
If you want more, check out the official docs, a helpful post from ScaleAI, and a great explanation of the Transformer architecture:
10/N Lastly, our Berkeley course is beginning next Tuesday! Remember to sign up to receive updates as we release lectures (we will do so with a delay): forms.gle/235LpvXmeCN21j…
Let's talk about setting up our Python/CUDA environment!
Our goals:
- Easily specify exact Python and CUDA versions
- Humans should not be responsible for finding mutually-compatible package versions
- Production and dev requirements should be separate
1/N
Here's a good way to achieve these goals:
- Use `conda` to install Python/CUDA as specified in `environment.yml`
- Use `pip-tools` to lock in mutually compatible versions from `requirements/prod.in` and `requirements/dev.in`
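A minimal `environment.yml` for this setup might look like the following (the environment name and version pins here are illustrative):

```yaml
name: my-project        # illustrative environment name
channels:
  - defaults
dependencies:
  - python=3.8          # exact Python version pinned here
  - cudatoolkit=10.2    # exact CUDA version pinned here
  - pip                 # pip installs the locked requirements
```

Then pip-tools' `pip-compile requirements/prod.in` and `pip-compile requirements/dev.in` resolve and lock mutually compatible package versions into separate `.txt` lockfiles, so humans never pick versions by hand and prod stays separate from dev.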
Dagster describes itself as a "data orchestrator for machine learning, analytics, and ETL".
Let's break that down 👇
2/ When you work with real-world data, your pipelines can get complex.
E.g., to train a language model on Twitter data, you might:
- Download data
- Strip out offensive tweets
- Preprocess the data
- Fit models
- Summarize training performance
- Deploy the best model to production
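The steps above form a dependency chain, which is exactly what an orchestrator tracks for you. A plain-Python sketch of that pipeline shape, with hypothetical step names and dummy bodies standing in for real implementations:

```python
def download_data():
    # stand-in for pulling raw tweets from an API or bucket
    return ["great model", "offensive tweet", "nice results"]

def strip_offensive(tweets):
    # stand-in for a real content filter
    return [t for t in tweets if "offensive" not in t]

def preprocess(tweets):
    return [t.lower().split() for t in tweets]

def fit_model(examples):
    # stand-in for actual training; returns a dummy "model"
    return {"n_examples": len(examples)}

def summarize(model):
    return f"trained on {model['n_examples']} examples"

def run_pipeline():
    """Each step consumes the previous step's output -- the
    dependency structure an orchestrator like Dagster materializes,
    retries, schedules, and monitors for you instead of you running
    each step by hand."""
    tweets = strip_offensive(download_data())
    model = fit_model(preprocess(tweets))
    return summarize(model)
```

In Dagster itself, each function would become an op wired into a job, and the framework handles execution, retries, and scheduling.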
3/ In production settings, pipelines can be even more complicated.
All well and good, but doing those steps manually every time you update your model is painful, resource intensive, and hard to scale.
And what happens if you have hundreds of these pipelines you need to manage?
@DeepnoteHQ is an epic Jupyter notebook alternative:
- Improved UX
- Real-time collaboration (editing and discussion)
- Direct connections to your data stores, including Postgres, S3, and BigQuery
- Effortless sharing of your running notebook
👇
One major con: Deepnote does not yet support GPU compute.
For data scientists who don't need to train deep learning models, Deepnote is a great tool to check out. It improves your developer experience and allows effortless sharing of your work with your teammates and manager.
While the Deepnote team is working on adding GPU support, there's another Jupyter-like cloud notebook you can use for deep learning: @GoogleColab.
If you use it, we recommend signing up for their $10/month Pro plan for priority access to TPUs, longer runtimes, and more RAM.