We've been getting lots of questions on how to contribute models to 🤗 Transformers.

Recently we started to publish model-specific recipes on how to do so!

If you want to get better at open-source contributions and want to contribute to 🤗 Transformers, here is how it works: 👇
1. Watch out for open proposals to add a model here: github.com/huggingface/tr…
2. Having read the proposal, if the project interests you and you think you can finish it within ~6 weeks (with the help of the Hugging Face team, of course), please send us a message at team@huggingface.co
3. By the proposal's start date, we will select a motivated contributor; if selected, you will work together with a member of the Hugging Face team on adding your first model to 🤗 Transformers
4. Upon completion, not only will you have made a major open-source contribution 🎉 and become an expert on a specific model, but you will also receive a lifelong pro Hugging Face account, a certificate, and some unique swag
5. We are very excited to announce that @7vasudevgupta has been chosen to closely work together with @PatrickPlaten to integrate @GoogleAI's BigBird!

If you are keen to hear from us about new model proposals, feel free to shoot us an email at team@huggingface.co 🤗


More from @huggingface

10 Feb
Blog alert: check out the new guest post by Amog Kamsetty and the @raydistributed team on training a Retrieval Augmented Generation Model with Hugging Face and Ray!

huggingface.co/blog/ray-rag
The RAG model by @olapiktus, @PSH_Lewis, and @facebookai colleagues leverages external knowledge sources like Wikipedia, giving it direct, dynamic access to information at inference time.

Part of this process relies on training a retriever to learn how to find that information.
@raydistributed is a flexible, framework-agnostic framework for ad-hoc concurrent programming, which makes it ideal for scaling up this training: retrieval becomes 2x faster, and distributed RAG fine-tuning scales dramatically better.
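To make the retriever idea concrete, here is a toy sketch of what retrieval looks like: score a query against an indexed corpus and return the top-k passages. This is NOT the actual RAG/Ray implementation — real RAG uses dense DPR embeddings with a FAISS index, and Ray distributes the workers — so the term-overlap scorer and `concurrent.futures` pool below are simplified stand-ins, with all names illustrative.

```python
# Toy sketch of a RAG-style retriever: score a query against every document
# and return the best matches. Real RAG uses dense embeddings + FAISS; here a
# simple term-overlap score stands in, and a ThreadPoolExecutor stands in for
# Ray's parallel remote workers.
from concurrent.futures import ThreadPoolExecutor

DOCS = [
    "Paris is the capital of France.",
    "The Transformer architecture relies on self-attention.",
    "Wikipedia is a free online encyclopedia.",
]

def score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (toy metric)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Score all docs in parallel and return the k highest-scoring ones."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda d: score(query, d), docs))
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

print(retrieve("what is the capital of France", DOCS))
```

Scoring each document is independent work, which is exactly why this step parallelizes so well across workers.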

Go try it out for yourself!
6 Mar 20
1/4. Four NLP tutorials are now available on @kaggle! It's now easier than ever to leverage tokenizers and transformer models like BERT, GPT2, RoBERTa, XLNet, DistilBERT, ... for your next competition! 💪💪💪 #NLProc #NLP #DataScience #kaggle
2/4. Tokenizers - Training your own tokenizer kaggle.com/funtowiczmo/hu…
3/4. Transformers - Getting started with transformers kaggle.com/funtowiczmo/hu…
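As a rough intuition for what "training your own tokenizer" means: build a vocabulary from a corpus, then map text to integer ids. The Hugging Face `tokenizers` library trains subword vocabularies (BPE, WordPiece); the word-level sketch below is a deliberately simplified stand-in using only the standard library, with all names invented for illustration.

```python
# Toy word-level tokenizer "training": count words in a corpus, keep the most
# frequent ones as the vocabulary, and encode text as ids. Real tokenizers
# (BPE/WordPiece) learn subword units instead of whole words.
from collections import Counter

def train_tokenizer(corpus: list[str], vocab_size: int = 10) -> dict[str, int]:
    """Build a vocab of the most frequent words; id 0 is reserved for [UNK]."""
    counts = Counter(word for line in corpus for word in line.lower().split())
    vocab = {"[UNK]": 0}
    for word, _ in counts.most_common(vocab_size - 1):
        vocab[word] = len(vocab)
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map each word to its id, falling back to the [UNK] id."""
    return [vocab.get(word, 0) for word in text.lower().split()]

corpus = ["the cat sat on the mat", "the dog sat on the log"]
vocab = train_tokenizer(corpus)
print(encode("the cat barked", vocab))  # "barked" maps to the [UNK] id 0
```

Subword tokenizers exist precisely to shrink that `[UNK]` fallback: unseen words get split into known pieces instead of a single unknown id.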
