Participants of the SUPERB Challenge can submit their results to the AAAI 2022 Workshop: The 2nd Self-supervised Learning for Audio and Speech Processing 🤖
The winners will be invited to present their methods 🏅🏅🏅!
Hosting an ambitious benchmark like this involves the collaboration of many organizations, and we are super excited to see SUPERB help democratize the development of speech processing technologies!
Starting today all 🤗 Spaces are publicly viewable 🚀 You can find all the amazing demos created as part of the sprint here 👉 huggingface.co/spaces
This has been the largest Hugging Face event yet, and we're extremely excited by the results. Almost 800 members joined and created almost 100 projects, 170 models & 36 Spaces! 🤯 That is super impressive given the timeframe of the event!
2. Having read the explanation, if this is a project that interests you and that you think you can finish within ~6 weeks - with the help of the Hugging Face team, of course - please send us a message at team@huggingface.co
Blog alert: check out the new guest post by Amog Kamsetty and the @raydistributed team on training a Retrieval Augmented Generation Model with Hugging Face and Ray!
The RAG model by @olapiktus, @PSH_Lewis and @facebookai colleagues leverages external knowledge sources like Wikipedia to have direct and dynamic access to information at inference time
Part of this process relies on training a retriever to learn how to find that information
@raydistributed is a flexible, framework-agnostic library for ad-hoc concurrent programming, which makes it ideal for scaling up this training: it makes retrieval 2x faster and drastically improves the scalability of RAG distributed fine-tuning
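The retriever's core mechanic can be sketched with a toy example. This is not the actual RAG/DPR code: in practice the query and passage vectors come from trained BERT-style encoders and the search runs over millions of Wikipedia passages (with a fast index like FAISS); here we use tiny hand-made vectors just to show how scoring by inner product picks the top passages.

```python
# Toy dense retrieval sketch (hypothetical vectors, not the real RAG code).
# A retriever scores each passage by the dot product between the query
# vector and the passage vector, then returns the k highest-scoring texts.

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def retrieve(query_vec, passages, k=2):
    """Return the texts of the k passages scoring highest against the query."""
    scored = sorted(passages, key=lambda p: dot(query_vec, p["vec"]), reverse=True)
    return [p["text"] for p in scored[:k]]

# Hand-made "embeddings" standing in for encoder outputs.
passages = [
    {"text": "Paris is the capital of France.", "vec": [0.9, 0.1, 0.0]},
    {"text": "The Eiffel Tower is in Paris.",   "vec": [0.95, 0.3, 0.1]},
    {"text": "Penguins live in Antarctica.",    "vec": [0.0, 0.1, 0.9]},
]

# Hypothetical encoding of a question about the Eiffel Tower.
query_vec = [1.0, 0.2, 0.0]
print(retrieve(query_vec, passages))
```

Training the retriever means adjusting the encoders so that relevant passages end up with high dot products against their queries; Ray's role is to parallelize exactly this kind of embarrassingly parallel scoring and data loading across workers.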
1/4. Four NLP tutorials are now available on @kaggle! It's now easier than ever to leverage tokenizers and transformer models like BERT, GPT2, RoBERTa, XLNet, DistilBERT, ... for your next competition! 💪💪💪 #NLProc #NLP #DataScience #kaggle