Weights & Biases
Sep 10, 2021 • 11 tweets • 5 min read • Read on X
We're grateful that deeply understanding and improving datasets is acknowledged as a non-negotiable part of training pipelines 🙏

However, exploring datasets can be cumbersome, and documenting and sharing findings is often messy.

W&B Tables fixes this 👇

W&B Tables enables quick and powerful exploration of image, video, audio, tabular, molecule, and NLP datasets.
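As a rough sketch of what logging such a mixed-media Table can look like (the project name and random image data below are stand-ins, and the snippet assumes `wandb` is installed and you have run `wandb login`):

```python
import numpy as np

def make_dummy_image(seed, size=32):
    """Small random RGB image as a uint8 array (a stand-in for real data)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(size, size, 3), dtype=np.uint8)

def log_media_table():
    """Log a Table mixing scalar and image columns (needs `wandb login`)."""
    import wandb

    run = wandb.init(project="tables-demo")  # hypothetical project name
    table = wandb.Table(columns=["id", "image", "label"])
    for i, label in enumerate(["cat", "dog", "bird"]):
        table.add_data(i, wandb.Image(make_dummy_image(i)), label)
    run.log({"media_table": table})
    run.finish()
```

The same `wandb.Table` pattern extends to `wandb.Audio`, `wandb.Video`, and `wandb.Molecule` columns.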

@metaphdor used Tables to explore 100k rows from @Reddit's GoEmotions dataset:


🛑 First, filtering for multiple column values.
🔎 Exploring the distribution of Reddit comments by subreddit name:
🪄 Creating additional calculated columns; here we get the count of comments per subreddit. Looks like the "farcry" sub has the fewest comments:
🤗 We can find which sub had the highest fraction of "caring" comments:
And which sub had the highest ratio of gratitude 🙏 to excitement 🥳 (i.e. thankful but maybe kinda boring) - sorry r/legaladvice 😐:
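The grouping and calculated-column steps above can be sketched in plain pandas; the toy frame and column names here are our own stand-ins, not the real GoEmotions schema:

```python
import pandas as pd

# Toy stand-in for the GoEmotions table (one row per comment,
# 0/1 flags for a few emotion labels).
df = pd.DataFrame({
    "subreddit":  ["farcry", "farcry", "legaladvice", "legaladvice", "legaladvice"],
    "caring":     [0, 1, 0, 0, 1],
    "gratitude":  [0, 0, 1, 1, 0],
    "excitement": [1, 0, 1, 0, 0],
})

# Calculated-column analog: count of comments per subreddit
counts = df.groupby("subreddit").size()

# Fraction of comments tagged "caring" per subreddit
caring_frac = df.groupby("subreddit")["caring"].mean()

# Ratio of gratitude to excitement per subreddit
sums = df.groupby("subreddit")[["gratitude", "excitement"]].sum()
ratio = sums["gratitude"] / sums["excitement"]
```

In Tables the same aggregations are built interactively in the UI rather than in code.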
✍️ Documenting and sharing these findings with collaborators is a breeze: just add them to a W&B Report.

Your collaborators can also start their own exploration in Tables you've added to a Report (in the Report UI itself) and persist those changes between visits.
💻 Logging to W&B Tables is super easy. Here we downloaded the GoEmotions dataset from the @huggingface Datasets library and logged it as a pandas DataFrame.
To log to W&B Tables and start your own exploration, you can run this colab:

πŸƒβ€β™€οΈ wandb.me/go-emotions-co…
πŸ–ΌοΈ This is only 1 example for NLP; Tables supports exploration of a wide variety of data types, here @sbxrobotics used Tables to demonstrate how to evaluate image segmentation modes:

wandb.ai/artem_sbx/sbx-…
Finally, you can get started with our W&B Tables docs here:

📙 docs.wandb.ai/guides/data-vis

We're incredibly excited about Tables and will be continuously improving functionality and performance over the coming months. We'd love to know what you think: support@wandb.com

• • •


More from @weights_biases

Feb 10, 2022
💬 The world of GPT-3

In this episode of Gradient Dissent, @npew, @borisdayma, and @l2k talk about:

✏️ Applying GPT-3 to translation and copywriting
💪 The performance benefits of fine-tuning GPT-3
💻 Developing the @OpenAI API

(Bonus topic: the story behind the @OpenAI and @weights_biases collaboration 🪄)

⏳ Timestamps (1/2)

0:00 Intro
1:01 Solving real-world problems with GPT-3
6:57 Applying GPT-3 to translation tasks
14:58 Copywriting and other commercial GPT-3 applications
20:22 The OpenAI API and fine-tuning GPT-3

player.captivate.fm/episode/e56f1a…
Dec 4, 2021
πŸ† πŸ₯³πŸ•ΊπŸ½ We're thrilled to announce the first winners of #27DaysOfJAX

πŸ… @rareblog, for running JAX on a Jetson Nano & writing about Spiking NNs in JAX

πŸ… @imPGulati, for writing his first blog ever & contributing

πŸ… @dionhaefner, for 🀯 🀯 🀯 the great read!

Blogs in theπŸ§΅πŸ‘‡
Sep 9, 2021
New podcast episode! 📢

@l2k and @emilymbender dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and the #BenderRule.

They discuss 4 of Emily's papers ⬇️

1/5
"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" (Bender, Gebru et al. 2021)

Possible risks associated with bigger and bigger language models, and ways to mitigate those risks.

dl.acm.org/doi/pdf/10.114…

2/5
"Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data" (Bender & Koller, 2020)

Why systems trained only on form (like language models) have no a priori way to learn meaning.

aclanthology.org/2020.acl-main.…

3/5
Nov 5, 2020
We're thrilled to announce that YOLOv5 now comes with @weights_biases baked in!

With no additional lines of code, you now get automatic bounding box debugging, GPU usage and performance metrics, reproducible models, & more!

πŸ‘©β€πŸš€ Try it β†’ colab.research.google.com/github/ultraly…

#deeplearning
What does YOLOv5 + W&B give you?

1. You can monitor how your models and hyperparameters are performing, including automatically tracking:

- Training and validation losses
- Precision, Recall, mAP@0.5, mAP@0.5:0.95
- Learning Rate over time
2. Automatically tracked system metrics: GPU type, GPU utilization, power, temperature, and CUDA memory usage, plus Disk I/O, CPU utilization, and RAM usage.
