2. Having read the explanation, if this is a project that interests you and that you think you can finish within ~6 weeks - with the help of the Hugging Face team, of course - please send us a message at team@huggingface.co
3. By the start date of the model proposal, we will select a motivated contributor. If selected, you will work together with a member of the Hugging Face team on adding your first model to 🤗 Transformers
4. Upon completion, not only will you have made a major open-source contribution 🎉 & become an expert on a specific model, but you will also receive a lifelong pro Hugging Face account, a certificate, and some unique SWAG
5. We are very excited to announce that @7vasudevgupta has been chosen to work closely with @PatrickPlaten to integrate @GoogleAI's BigBird!
If you are keen to hear from us about new model proposals, feel free to shoot us a mail at team@huggingface.co 🤗
• • •
Blog alert: check out the new guest post by Amog Kamsetty and the @raydistributed team on training a Retrieval-Augmented Generation (RAG) model with Hugging Face and Ray!
The RAG model by @olapiktus, @PSH_Lewis, and @facebookai colleagues leverages external knowledge sources like Wikipedia to gain direct and dynamic access to information at inference time
Part of this process relies on training a retriever to learn how to find that information
@raydistributed is a flexible, framework-agnostic library for ad-hoc distributed computing, which makes it ideal for scaling up this training: it makes retrieval 2x faster and drastically improves the scalability of distributed RAG fine-tuning
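For context, here is a minimal sketch of querying a pretrained RAG model with 🤗 Transformers; the `facebook/rag-sequence-nq` checkpoint, the example question, and the dummy retrieval index are illustrative choices, not taken from the blog post (whose Ray-based retriever would replace the default retriever step):

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

# Load the pretrained RAG checkpoint released by Facebook AI
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

# The retriever fetches supporting passages; use_dummy_dataset=True swaps in a
# tiny dummy index so the sketch runs without downloading the full Wikipedia index
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# At inference time the model retrieves passages and conditions generation on them
inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```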
1/4. Four NLP tutorials are now available on @kaggle! It's now easier than ever to leverage tokenizers and transformer models like BERT, GPT2, RoBERTa, XLNet, DistilBERT, ... for your next competition! 💪💪💪 #NLProc #NLP #DataScience #kaggle
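As a taste of the kind of workflow those tutorials walk through, here is a minimal sketch of loading a pretrained tokenizer and model from the Hub; the DistilBERT sentiment checkpoint below is just an illustrative pick, not necessarily one used in the tutorials:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Any Hub checkpoint works here; this fine-tuned sentiment model is an example
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Tokenize a sample text and run a forward pass
inputs = tokenizer("This competition is great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to a human-readable label
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # -> "POSITIVE"
```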