🛠️FSDL Tooling Tuesday🛠️

@DVCorg is one of the fastest-growing ML experiment management tools.

The main idea of DVC is to *track ML experiments in Git*.

Everything is versioned -- the code, the data, the model, and the metrics created by your experiment. Pretty powerful!
The magic of DVC is that it supports datasets and models too large to store in GitHub: Git tracks lightweight pointer files, while the data itself lives in remote storage like S3.

And since every part of your experiment is versioned, you can easily roll back to an earlier run and reproduce it.

No more fiddling around to recreate that experiment from two weeks ago!
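
Here's roughly what that workflow looks like on the command line (a minimal sketch; the dataset path, bucket name, and commit reference are placeholders):

```bash
# Track a large dataset with DVC; Git stores only a small .dvc pointer file
dvc init
dvc add data/images
git add data/images.dvc data/.gitignore
git commit -m "v1 of the training data"

# Push the actual files to remote storage (S3 here), not GitHub
dvc remote add -d storage s3://my-bucket/dvc-store
dvc push

# Later: roll back to an earlier experiment and restore its exact data
git checkout <earlier-commit>
dvc checkout
```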
What are the tradeoffs? (1/3)

*DVC does a lot*

It versions data, tracks experiments, and runs pipelines. You might prefer a lighter-weight tool (e.g., replicate.ai) for any one of these.
What are the tradeoffs? (2/3)

*DVC imposes a workflow*

Each experiment is like a commit that you make by running your script through `dvc run`. Other tools like @weights_biases integrate into your existing workflow instead.
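
A sketch of that workflow, assuming DVC 1.x syntax and hypothetical stage/file names:

```bash
# Register the experiment as a pipeline stage; DVC records the command,
# its dependencies, outputs, and metrics in dvc.yaml / dvc.lock
dvc run -n train \
    -d train.py -d data/images \
    -o model.pkl \
    -M metrics.json \
    python train.py

# Re-run it later -- DVC only re-executes stages whose inputs changed
dvc repro

# Compare metrics between your workspace and another branch or commit
dvc metrics diff main
```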
What are the tradeoffs? (3/3)

*DVC versions your data, but the diffs are limited*

@pachyderminc focuses on versioning your entire data pipeline, and @DoltHub versions your dataset more granularly at the row level.
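
For contrast, DVC's built-in diff works at the file level (a quick sketch; the revisions are placeholders):

```bash
# Shows *which* tracked files were added, deleted, or modified
# between two Git revisions...
dvc diff HEAD~1 HEAD

# ...but not which rows changed inside a large table -- that
# row-level granularity is what a tool like Dolt provides
```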
For many, these are easy tradeoffs to make in exchange for reproducible ML experiments out of the box!

What other tools do you like for experiment tracking, reproducibility, and data versioning?

More from @full_stack_dl

11 Dec
1/ @lishali88 and @spring_stream joined us to talk about building Rosebud.ai.

Rosebud.ai's @tokkingheads turns portraits into animated avatars that read text you provide. It's fun to play around with!

Here are some challenges they faced building it:
2/ A scalable model training platform was key to experimenting quickly enough to build talkingheads.rosebud.ai.

They built theirs on Kubernetes and take advantage of spot instances to keep costs down.

More on their training infra here: blog.rosebud.ai/cost-efficient…
3/ Model quality is key to their product, so Rosebud prioritizes it over inference speed.

They're looking into model compression techniques to make big models faster (and more cost effective).
1 Dec
🛠FSDL Tooling Tuesday🛠

@DeepnoteHQ is an epic Jupyter notebook alternative:

- Improved UX
- Real-time collaboration (editing and discussion)
- Direct connections to your data stores, including Postgres, S3, and BigQuery
- Effortless sharing of your running notebook

One major con: Deepnote does not yet support GPU compute.

For data scientists who don't need to train deep learning models, Deepnote is a great tool to check out. It improves your developer experience and allows effortless sharing of your work with your teammates and manager.
While the Deepnote team is working on adding GPU support, there's another Jupyter-like cloud notebook you can use for deep learning: @GoogleColab.

If you use it, we recommend signing up for their $10/month Pro plan for priority access to TPUs, longer runtimes, and more RAM.
19 Nov
1/ FSDL helps you turn ML experiments into shipped products with real-world impact.

This Spring, @josh_tobin_ @sergeykarayev & @pabbeel are teaching an improved version as an official Berkeley course: bit.ly/berkeleyfsdl

Want to follow along as we post lectures publicly?👇
2/ Sign up to receive updates on our lectures as they're released (and to optionally participate in a synchronous learning community): forms.gle/zqE2rjkfqex2AQ…
3/ We cover the full stack, from project management to MLOps:

- Formulating the problem and estimating cost
- Managing, labeling, and processing data
- Making the right HW and SW choices
- Troubleshooting and reproducing training
- Deploying the model at scale
17 Nov
1/ Last week's Production ML meetup featured Peter Gao and Princeton Kwong, former Engineering Managers at Cruise and Aquabyte. Below, their insights on data quality and its downstream effects for computer vision use cases:
2/ What is your experience with data quality?

- Cruise: (1) poorly-labeled data confuses the model; (2) models may perform poorly on edge case objects
3/ Data quality cont'd

- Aquabyte: No public datasets to work with. Our engineers went onsite to collect ground-truth data, built a huge labeling pipeline to get the data to human labelers, and designed our own labeling interface so labelers could label fish properly.
