🛠️ tooling tuesday 🛠️

In honor of our first lecture at Berkeley this evening, here's our remote teaching stack:
1/ @zoom_us. Duh.

One thing that makes it better is to have a good audio/video setup. Here's a good guide:

Fujifilm cameras work too and avoid the need for the Camlink.
2/ @SlackHQ for question management.

Zoom chat is unthreaded and hard to react to. Asking questions live is chaos. Instead, students post questions in a slack channel.

Instructors can answer them directly in slack, or summarize and answer aloud at a break in the lecture.
3/ @loom for lecture recording. Super simple, no uploads needed, and puts a nice face bubble in the corner.
4/ @GoogleColab for environment management. It's amazing how much work it saves to have developer environments out of the box, and the GPU support is essential for a deep learning class.
5/ @gradescope for assignment grading. Students can view and submit their work remotely, and it's way easier for us to grade via gradescope than, e.g., by pdf. We can even automate big chunks of it.
6/ What does your stack look like? Anything else we should take a look at?

More from @full_stack_dl

13 Jan
🛠️Tooling Tuesday🛠️

Today, we share a @GoogleColab notebook implementing a Transformer with @PyTorch, trained using @PyTorchLightnin.

We show both encoder and decoder, train with teacher forcing, and implement greedy decoding for inference.

colab.research.google.com/drive/1swXWW5s…

👇1/N
2/N Transformers are a game changer.

This architecture has superseded RNNs for NLP tasks, and is likely to do the same to CNNs for vision tasks.

PyTorch provides Transformer modules since 1.2, but the docs are lacking:

- No explanation of inference
- Tutorial is encoder-only
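Greedy decoding is conceptually simple even though the docs skip it: feed the generated prefix back in, take the highest-scoring next token, and stop at the end token. A framework-free sketch (the toy `copy_model` is a hypothetical stand-in for a trained Transformer decoder):

```python
def greedy_decode(model, src, max_len=10, start_token=0, end_token=1):
    """Generate output token-by-token, always taking the highest-scoring next token."""
    out = [start_token]
    while len(out) < max_len:
        scores = model(src, out)  # one score per vocabulary token for the next position
        next_token = max(range(len(scores)), key=scores.__getitem__)
        out.append(next_token)
        if next_token == end_token:
            break
    return out

# Toy stand-in "model" for a copy task: always score the next source token highest.
def copy_model(src, prefix):
    i = len(prefix)
    target = src[i] if i < len(src) else 1
    return [1.0 if t == target else 0.0 for t in range(8)]
```

With `src = [0, 4, 7, 1]`, `greedy_decode(copy_model, src)` reproduces the source sequence and stops at the end token. In the notebook, `model` is the trained Transformer instead of this toy.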
3/N Our notebook shows both. Let's get started with simple data.

Our output will be number sequences like [2, 5, 3].

Our input will be the same as output, but with each element repeated twice, e.g. [2, 2, 5, 5, 3, 3]

We start each sequence with 0 and end each sequence with 1.
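The toy data above can be generated in a few lines; `make_example` is a hypothetical helper (the notebook's actual function may differ), reserving tokens 0 and 1 for the start/end markers:

```python
import random

def make_example(length=3, vocab_size=8, start=0, end=1):
    """Build one (input, output) pair for the repeat task.

    output: random tokens wrapped in start/end markers, e.g. [0, 2, 5, 3, 1]
    input:  same tokens, each repeated twice,        e.g. [0, 2, 2, 5, 5, 3, 3, 1]
    """
    tokens = [random.randint(2, vocab_size - 1) for _ in range(length)]
    output = [start] + tokens + [end]
    inputs = [start] + [t for t in tokens for _ in range(2)] + [end]
    return inputs, output
```

The model's job is then to learn to "un-repeat" the input.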
5 Jan
🛠️Tooling Tuesday🛠️

Let's talk about setting up our Python/CUDA environment!

Our goals:

- Easily specify exact Python and CUDA versions
- Humans should not be responsible for finding mutually-compatible package versions
- Production and dev requirements should be separate

1/N
Here's a good way to achieve these goals:

- Use `conda` to install Python/CUDA as specified in `environment.yml`

- Use `pip-tools` to lock in mutually compatible versions from `requirements/prod.in` and `requirements/dev.in`

- Simply run `make` to update everything!

2/N
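A minimal Makefile sketch of the locking step (target layout is an assumption; `pip-compile` and `pip-sync` are the standard pip-tools commands):

```make
# Sketch only -- the real Makefile may differ.

# Lock mutually-compatible versions from the loose .in specs
requirements/prod.txt: requirements/prod.in
	pip-compile requirements/prod.in -o requirements/prod.txt

requirements/dev.txt: requirements/dev.in requirements/prod.txt
	pip-compile requirements/dev.in -o requirements/dev.txt

# Install exactly the locked versions into the active environment
install: requirements/prod.txt requirements/dev.txt
	pip-sync requirements/prod.txt requirements/dev.txt
```

Running `make install` then regenerates the lock files only when the `.in` specs change, and syncs the environment to match.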
Here's our `environment.yml` file.

It specifies Python 3.8, CUDA 10.2, CUDNN 7.6.

To create an environment from this, install Miniconda (docs.conda.io/en/latest/mini…) and run `conda env create`.

Activate the environment with `conda activate conda-piptools-sample-project`

3/N
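The file itself appears as an image in the original thread; a sketch consistent with the versions named above (channel choice and exact pins are assumptions):

```yaml
# environment.yml -- sketch; exact channels/pins are assumptions
name: conda-piptools-sample-project
channels:
  - defaults
dependencies:
  - python=3.8
  - cudatoolkit=10.2
  - cudnn=7.6
  - pip
```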
29 Dec 20
🛠️Tooling Tuesdays: Thread of Threads🛠️

Every week, we share a useful tool for full stack machine learning. Follow along, and please share your suggestions!

1/N
23 Dec 20
🛠️ Tooling Tuesday 🛠️

This week: @dagsterio (dagster.io)

Dagster describes itself as a "data orchestrator for machine learning, analytics, and ETL"

Let's break that down 👇
2/ When you work with real-world data, your pipelines can get complex.

E.g., to train a language model on twitter, you might:
- Download data
- Strip out offensive tweets
- Preprocess the data
- Fit models
- Summarize training performance
- Deploy the best model to production
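The steps above form a simple DAG. A plain-Python sketch of that DAG (this is deliberately *not* Dagster's API — Dagster would wrap each step and wire them together, adding scheduling, retries, and observability — and every function body here is a toy stand-in):

```python
def download_data():
    # stand-in for pulling raw tweets
    return ["good tweet", "BAD tweet", "another tweet"]

def strip_offensive(tweets):
    return [t for t in tweets if "BAD" not in t]

def preprocess(tweets):
    return [t.lower().split() for t in tweets]

def fit_model(examples):
    # stand-in "model": the vocabulary seen in training
    return {tok for ex in examples for tok in ex}

def run_pipeline():
    data = download_data()
    clean = strip_offensive(data)
    processed = preprocess(clean)
    return fit_model(processed)
```

Chaining functions by hand works at this scale; an orchestrator earns its keep once you need to rerun, schedule, parallelize, or monitor many such pipelines.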
3/ In production settings, pipelines can be even more complicated.

All well and good, but doing those steps manually every time you update your model is painful, resource intensive, and hard to scale.

And what happens if you have hundreds of these pipelines you need to manage?
11 Dec 20
1/ @lishali88 and @spring_stream joined us to talk about building Rosebud.ai.

Rosebud.ai's @tokkingheads turns portraits into animated avatars that read text you provide. It's fun to play around with!

Here are some challenges they faced building it:
2/ A scalable model training platform was key to experimenting quickly enough to build talkingheads.rosebud.ai.

They built theirs on Kubernetes and take advantage of spot instances to keep costs down.

More on their training infra here: blog.rosebud.ai/cost-efficient…
3/ Model quality is key to their product, so Rosebud prioritizes that over performance.

They're looking into model compression techniques to make big models faster (and more cost effective).
9 Dec 20
🛠️FSDL Tooling Tuesday🛠️

@DVCorg is one of the fastest growing ML experiment management tools.

The main idea of DVC is to *track ML experiments in git*

Everything is versioned -- the code, the data, the model, and the metrics created by your experiment. Pretty powerful!
The magic of DVC is that it supports datasets and models too large to store in GitHub.

And since every part of your experiment is versioned, you can easily roll back to an earlier run and reproduce it.

No more fiddling around to recreate that experiment from two weeks ago!
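The day-to-day workflow looks roughly like this (an illustrative sketch; the paths and remote URL are hypothetical):

```shell
git init && dvc init

dvc add data/train.csv              # track a large file; writes data/train.csv.dvc
git add data/train.csv.dvc .gitignore
git commit -m "Track training data with DVC"

dvc remote add -d storage s3://my-bucket/dvc-store
dvc push                            # upload the data itself to remote storage

# Later, on any machine: restore code *and* matching data
git checkout <commit>
dvc checkout
```

Git versions the small `.dvc` pointer files; DVC moves the heavy artifacts to and from remote storage.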
What are the tradeoffs? (1/2)

*DVC does a lot*

Versioning data, experiment tracking, and running pipelines. You might prefer lighter weight tools (e.g., replicate.ai) for any one of these