Niels Rogge
Jun 29 · 3 tweets · 3 min read
🔥 GroupViT by @nvidia is now available in @huggingface Transformers.

The model is capable of zero-shot semantic segmentation, requiring no pixel-level labels.🤯

For training, only 30M noisy (image, text) pairs were used.

Notebook: tinyurl.com/mrxn9vbx (1/3)
The model can be seen as an extension of @OpenAI's CLIP to semantic segmentation, with a clever grouping mechanism in the image encoder.😎

It clearly shows how language supervision can improve computer vision models!

Docs: huggingface.co/docs/transform…

Models: huggingface.co/models?other=g…
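
For reference, here's a minimal usage sketch with the Transformers API (the nvidia/groupvit-gcc-yfcc checkpoint name and the example image are my assumptions; the docs and notebook above are the authoritative version). It scores an image against free-form text prompts, CLIP-style:

```python
import requests
from PIL import Image
from transformers import CLIPProcessor, GroupViTModel

# Assumption: using the nvidia/groupvit-gcc-yfcc checkpoint from the Hub
processor = CLIPProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Free-form text prompts act as the "classes" -- no pixel-level labels needed
texts = ["a photo of a cat", "a photo of a remote control"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarity scores
print(probs)
```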
🙏 Shout-out to @Jerry_XU_Jiarui, first author of the paper, who contributed the model to the library.

He also created an awesome Space for it (part of #CVPR2022's demo track): huggingface.co/spaces/CVPR/Gr…

(3/3)

More from @NielsRogge

Jan 15
Rewatched @Tesla's AI day recently, and when @karpathy introduced the Transformer used in Autopilot, it immediately reminded me of @DeepMind's #PerceiverIO, which I recently contributed to @huggingface Transformers. I wonder whether Tesla's approach was inspired by it...
... or whether they were already using this (long) before the paper came out. Especially the sentence "you initialize a raster the size of the output space that you'd like and tile it with position encodings" => this is exactly what Perceiver IO does as well! @drew_jaegle
This idea is actually brilliant: the features of the 8 cameras serve as keys (K) and values (V), while the individual pixels of the output space (the bird's-eye view) provide the queries (Q) for multi-head attention (tiled with sin/cos position embeddings).
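
A rough sketch of that idea (purely illustrative PyTorch, not Tesla's or DeepMind's actual code; all shapes and names here are made up): position-encoded bird's-eye-view pixels act as queries, flattened camera features act as keys and values of a standard cross-attention layer.

```python
import torch
import torch.nn as nn

# Sketch of Perceiver-IO-style decoding: output-raster queries cross-attend to camera features.
d_model = 256
cam_tokens = torch.randn(1, 8 * 1000, d_model)    # K, V: flattened features from 8 cameras
bev_queries = torch.randn(1, 200 * 200, d_model)  # Q: one (position-encoded) query per BEV pixel

cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)
bev_features, _ = cross_attn(query=bev_queries, key=cam_tokens, value=cam_tokens)
print(bev_features.shape)  # (1, 40000, 256): one feature vector per bird's-eye-view pixel
```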
Aug 26, 2021
Happy to share my first @Gradio demo hosted as a @huggingface Space! It showcases @facebookai's new DINO self-supervised method, which allows Vision Transformers to segment objects within an image without ever being trained to do so! Try it yourself!

huggingface.co/spaces/nielsr/…
I've also converted all ViTs trained with DINO from the official repository and uploaded them to the hub: huggingface.co/models?other=d…. Just load them into a ViTModel or ViTForImageClassification ;)
Also, I'm amazed at how ridiculously easy @Gradio and @huggingface Spaces are; I got everything set up in 10 minutes.
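
A minimal sketch of loading one of those converted checkpoints (I'm assuming the facebook/dino-vitb16 repo name here) and pulling out the last-layer [CLS] attention map, which is what DINO's object segmentations come from:

```python
import requests
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

# Assumption: facebook/dino-vitb16 is one of the converted DINO checkpoints on the Hub
processor = ViTImageProcessor.from_pretrained("facebook/dino-vitb16")
model = ViTModel.from_pretrained("facebook/dino-vitb16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# Attention of the [CLS] token over the image patches in the last layer
cls_attention = outputs.attentions[-1][0, :, 0, 1:]
print(cls_attention.shape)  # (num_heads, num_patches)
```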
