@AmazonScience @alexa99 LLMs are notoriously difficult to control. This work is an effort to fix that.
We create CHRT: a novel framework for attribute control in LLMs using learned transformation blocks.
It can be used to minimize toxicity, maximize positive sentiment and more.
The approach has minimal loss in linguistic quality while achieving high attribute control.
It also has the smallest latency overhead of all the baselines we compare against, which makes it a good fit for production environments.
How do we learn the transformation blocks?
Through a weighted combination of a contrastive loss and a preservation loss. In a way, our work can be imagined as a distilled version of the DExperts approach, with the added ability to combine (or skip) multiple transformation blocks.
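Not the paper's exact equations, but roughly the flavor. A hypothetical sketch, where `pos_h`/`neg_h` are hidden states from attribute- and anti-attribute-guided models and `base_h` comes from the frozen base model:

```python
import torch
import torch.nn.functional as F

def joint_loss(transformed, base_h, pos_h, neg_h, lam=0.5, tau=0.1):
    """Illustrative weighted contrastive + preservation objective.

    All inputs are hidden-state tensors of shape (batch, seq, dim).
    `lam` trades off attribute control against preservation.
    """
    t = transformed.flatten(0, 1)
    # Contrastive term: make the transformed states more similar to the
    # attribute-carrying states (pos_h) than to the anti-attribute ones (neg_h).
    sim_pos = F.cosine_similarity(t, pos_h.flatten(0, 1), dim=-1) / tau
    sim_neg = F.cosine_similarity(t, neg_h.flatten(0, 1), dim=-1) / tau
    contrastive = -F.logsigmoid(sim_pos - sim_neg).mean()
    # Preservation term: stay close to the frozen base model's states so
    # linguistic quality is not lost.
    preservation = F.mse_loss(transformed, base_h)
    return lam * contrastive + (1.0 - lam) * preservation
```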
I like to imagine these blocks as "lenses". Each lens nudges the hidden representation toward a latent subspace that realizes a particular attribute.
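A toy version of that mental model (architecture and names are illustrative, not the paper's exact blocks):

```python
import torch
import torch.nn as nn

class TransformationBlock(nn.Module):
    """Illustrative "lens": a small residual MLP applied to hidden states."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Nudge the representation toward the attribute's latent subspace.
        return h + self.mlp(h)

def apply_lenses(hidden, blocks, active):
    """Compose any subset of attribute blocks; skip the rest."""
    for block, use in zip(blocks, active):
        if use:
            hidden = block(hidden)
    return hidden

# e.g. detoxify + positive sentiment, skipping a hypothetical "formality" block:
# hidden = apply_lenses(hidden, [detox_block, sentiment_block, formality_block],
#                       active=[True, True, False])
```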
How do we evaluate our approach?
We do it both automatically (using attribute classifiers) and through a large-scale human study on Amazon Mechanical Turk. We compare our work with 5 existing baselines and outperform them on many of the metrics.
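The automatic side looks roughly like this (the classifier here is just an example, not necessarily the one used in the paper):

```python
from transformers import pipeline

# Score generations with an off-the-shelf attribute (sentiment) classifier.
clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")

generations = ["The movie was an absolute delight.",
               "I guess it was fine, nothing special."]
results = clf(generations)
pos_scores = [r["score"] if r["label"] == "POSITIVE" else 1 - r["score"]
              for r in results]
print(sum(pos_scores) / len(pos_scores))  # mean positive-sentiment score
```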
This framework requires access to model weights, so it cannot be applied to instruction-following LLM APIs like @OpenAI and @GoogleAI Bard. For those, prompt engineering with alignment is the way to go. NLP research moves so fast!!
Finally, I am thankful to my mentors and Amazon for giving me the opportunity to work on such an interesting project. I learned a lot.