Benjamin Bossan

Apr 14, 6 tweets

Today, we released PEFT v0.19.0 and it's a big one: not only did we add 9 new PEFT methods, but the release also contains a bunch of improvements that make PEFT more useful. Check the thread for details:

The release contains new functions to convert non-LoRA fine-tuned weights into LoRA weights, which allows them to be used in packages like Diffusers and vLLM that only support LoRA. Find more details here: huggingface.co/docs/peft/main…
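For intuition only (a hedged sketch, not PEFT's actual conversion API, whose function names are not reproduced here): a full fine-tuning update ΔW = W_tuned − W_base can be factored into LoRA-style low-rank matrices with a truncated SVD.

```python
import numpy as np

def delta_to_lora(base_w, tuned_w, rank):
    """Factor a full weight delta into LoRA-style factors B @ A via truncated SVD.

    Illustrative sketch only -- PEFT's real conversion utilities differ.
    """
    delta = tuned_w - base_w
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions; split sqrt(s) between both factors.
    root_s = np.sqrt(s[:rank])
    b = u[:, :rank] * root_s           # shape (out_features, rank)
    a = root_s[:, None] * vt[:rank]    # shape (rank, in_features)
    return a, b

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 32))
# Simulate a fine-tuning update that happens to be exactly rank 4.
true_update = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 32))
tuned = base + 0.1 * true_update

a, b = delta_to_lora(base, tuned, rank=4)
err = np.linalg.norm((base + b @ a) - tuned) / np.linalg.norm(tuned - base)
print(f"relative reconstruction error: {err:.2e}")  # ~0, since the delta is rank 4
```

If the true delta has higher rank than the chosen LoRA rank, the SVD truncation gives the best low-rank approximation rather than an exact reconstruction.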

LoRA fine-tuning can introduce so-called "intruder dimensions" that contribute to forgetting (huggingface.co/papers/2410.21…). We now have a utility function to remove intruder dimensions, `reduce_intruder_dimension`. Call it on a fine-tuned LoRA model to reduce forgetting.
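Roughly, per the cited paper, an intruder dimension is a top singular vector of a fine-tuned weight that overlaps with no singular vector of the base weight. A minimal detection sketch (illustrative only; this is not how `reduce_intruder_dimension` is implemented):

```python
import numpy as np

def intruder_dims(base_w, tuned_w, rank=None, threshold=0.3):
    """Flag singular directions of `tuned_w` with low overlap with `base_w`.

    Hedged sketch of the intruder-dimension idea: for each of the top `rank`
    left singular vectors of the fine-tuned weight, compute its maximum
    absolute cosine similarity with any left singular vector of the base
    weight; directions below `threshold` are flagged as intruders.
    """
    u_base, _, _ = np.linalg.svd(base_w, full_matrices=False)
    u_tuned, _, _ = np.linalg.svd(tuned_w, full_matrices=False)
    k = rank or u_tuned.shape[1]
    overlap = np.abs(u_tuned[:, :k].T @ u_base).max(axis=1)
    return np.where(overlap < threshold)[0]

rng = np.random.default_rng(1)
base = rng.normal(size=(64, 32))
# Inject a strong update along a direction outside the base's column space,
# so the fine-tuned weight gains a dominant "new" singular direction.
u_full, _, _ = np.linalg.svd(base)      # full 64x64 left singular basis
new_dir = u_full[:, 40]                 # orthogonal to base's column space
tuned = base + 20.0 * np.outer(new_dir, rng.normal(size=32))

flagged = intruder_dims(base, tuned, rank=4)
print(flagged)  # the injected direction shows up as an intruder
```

With no update at all, no direction is flagged; the injected orthogonal update dominates the fine-tuned weight's spectrum and is detected as an intruder.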

A selection of improvements to LoRA:

- support for Tensor Parallelism
- Tensor Engine quantization
- better handling of tied weights
- support for fp8 dtypes
- LoRA-GA initialization by @sambhavdixitpro

Moreover, for prefix tuning, we provide better initialization options.
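For context, prefix tuning prepends trainable key/value vectors ("virtual tokens") to each attention layer, and how those vectors are initialized affects training stability, which is presumably what the new options target. A toy, framework-free sketch of the mechanism (all names and the init scale here are illustrative, not PEFT's API):

```python
import numpy as np

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    """Toy single-head attention with trainable prefix key/value vectors.

    Sketch of what prefix tuning trains: the base k/v stay frozen, and only
    prefix_k/prefix_v would receive gradients during fine-tuning.
    """
    k_all = np.concatenate([prefix_k, k], axis=0)
    v_all = np.concatenate([prefix_v, v], axis=0)
    scores = q @ k_all.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the (prefix + sequence) axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v_all

rng = np.random.default_rng(2)
d, seq, n_prefix = 16, 6, 4
q, k, v = (rng.normal(size=(seq, d)) for _ in range(3))
# Small-scale random init for the trainable prefix (illustrative choice).
prefix_k = 0.02 * rng.normal(size=(n_prefix, d))
prefix_v = 0.02 * rng.normal(size=(n_prefix, d))
out = attention_with_prefix(q, k, v, prefix_k, prefix_v)
```

Initializing the prefix near zero keeps the model close to its pretrained behavior at the start of training, which is one common motivation for smarter init schemes.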

We also added 9 new PEFT methods. There is not enough room here to describe them in detail, but here is a list:

- GraLoRA by github.com/yeonjoon-jung01
- BD-LoRA by github.com/Conzel
- Cartridges and TinyLoRA by @krasul
- PVeRA by @leofillioux
- PSOFT by github.com/fei407
- Lily and PEANut by @tmux_1
- AdaMSS by github.com/LonglongaaaGo

Find a short description of these PEFT methods and the full release notes here: github.com/huggingface/pe…
