Artidoro Pagnoni
May 24 · 8 tweets · 4 min read
4-bit QLoRA is here to level the playing field for LLM exploration. You can now fine-tune a state-of-the-art 65B chatbot on a single GPU in 24 hours.

Paper: arxiv.org/abs/2305.14314
Code and Demo: github.com/artidoro/qlora
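To make the recipe concrete, here is a configuration sketch using the Hugging Face transformers + peft integration of QLoRA. The model id and the LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not values taken from the thread; see the repo above for the authors' actual training setup.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 base model with double quantization (the QLoRA recipe).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NF4 datatype
    bnb_4bit_use_double_quant=True,      # double quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Model id is a placeholder; pick any causal LM checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Frozen 4-bit base + trainable low-rank adapters.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only adapter weights are trainable
```

The quantized base weights stay frozen; a standard `Trainer` loop then updates only the adapter parameters.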
QLoRA uses a frozen 4-bit base model with low-rank adapters: gradients are backpropagated through the frozen 4-bit weights into the adapters. QLoRA incorporates the NF4 datatype, double quantization, and paged optimizers. We show it is on par with 16-bit finetuning at a fraction of the memory footprint.
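As a rough illustration of the quantization side, here is a minimal NumPy sketch of blockwise 4-bit absmax quantization with double-quantized scale constants. It is a simplified stand-in: the codebook below is a uniform grid, whereas QLoRA's NF4 uses quantiles of a normal distribution, and all function names are hypothetical.

```python
import numpy as np

def quantize_blockwise(w, block_size=64, levels=None):
    """Quantize a flat weight vector to 4-bit codes, one absmax scale per block.
    Uniform codebook here; real NF4 uses normal-distribution quantiles."""
    if levels is None:
        levels = np.linspace(-1.0, 1.0, 16)          # 16 codes = 4 bits
    blocks = w.reshape(-1, block_size)
    absmax = np.abs(blocks).max(axis=1, keepdims=True)  # one fp constant per block
    normed = blocks / absmax                          # now in [-1, 1]
    codes = np.abs(normed[:, :, None] - levels).argmin(axis=-1).astype(np.uint8)
    return codes, absmax, levels

def dequantize_blockwise(codes, absmax, levels):
    """Map codes back to floats and rescale by each block's absmax."""
    return levels[codes] * absmax

w = np.random.default_rng(0).normal(size=256).astype(np.float32)
codes, absmax, levels = quantize_blockwise(w)
w_hat = dequantize_blockwise(codes, absmax, levels).reshape(-1)
max_err = np.abs(w - w_hat).max()                     # bounded by absmax / 15

# Double quantization: the per-block absmax constants are themselves
# quantized (to 8 bits here, as in the paper), shrinking metadata overhead.
scale = absmax.max() / 255.0
q8 = np.round(absmax / scale).astype(np.uint8)        # 8-bit scale codes
absmax_hat = q8 * scale                               # recovered scales
```

Storage drops from 32 bits per weight to 4 bits per weight plus a small per-block overhead, which double quantization shrinks further by compressing the scale constants themselves.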
QLoRA tuning on the OpenAssistant dataset produces a state-of-the-art chatbot.

According to GPT-4 and human annotators, our models outperform all other open-source systems and are even competitive with ChatGPT.
The paper (arxiv.org/abs/2305.14314) offers many insights into instruction tuning and chatbot evaluation, and points to areas for future work.
In particular:
- instruction tuning datasets are not necessarily helpful for chatbot performance
- quality of data rather than quantity is important for chatbots
- multitask QA benchmarks like MMLU are not always correlated with chatbot performance
- both human and automated evaluations are challenging when comparing strong systems
- using large eval datasets with many prompts, like the OA benchmark, is important for evaluation
- many possible improvements to our setup remain, including RLHF explorations
Guanaco (our QLoRA-tuned model) is *far from perfect*, but it is remarkably easy to train with QLoRA while achieving strong chatbot performance, which makes it an ideal starting point for future research.
Thank you to @Tim_Dettmers @universeinanegg @LukeZettlemoyer and the many people who made this project possible! In particular, @younesbelkada and the @huggingface team. It's been amazing working with you!
