https://twitter.com/rasbt/status/1642161887889567745
Prefix finetuning falls into the "soft prompt" category. In regular hard prompt tuning, we optimize the choice of input tokens to get the desired response.
https://twitter.com/rasbt/status/1641801360462041089
The intuition is that a proper context can steer the LLM toward performing a desired task without updating the LLM's parameters. We learn a set of tokens, called a "prefix," that the model conditions on and that guides its output toward the desired behavior.
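To make the idea concrete, here is a minimal soft-prompt sketch in PyTorch. It assumes a frozen decoder-style model that exposes get_input_embeddings() and accepts inputs_embeds (as Hugging Face models do); the SoftPromptModel wrapper and the prefix_length value are illustrative choices, and prefix tuning proper additionally injects learned prefixes into every attention layer rather than only at the input.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Wraps a frozen LLM and prepends a learnable prefix to its input embeddings."""

    def __init__(self, base_model, prefix_length=20):
        super().__init__()
        self.base_model = base_model
        emb = base_model.get_input_embeddings()
        # The prefix is the only parameter that receives gradients.
        self.prefix = nn.Parameter(torch.randn(prefix_length, emb.embedding_dim) * 0.02)
        for p in self.base_model.parameters():
            p.requires_grad = False  # the LLM's own weights stay frozen

    def forward(self, input_ids, **kwargs):
        tok_emb = self.base_model.get_input_embeddings()(input_ids)
        prefix = self.prefix.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
        # Prepend the learned prefix tokens to the regular token embeddings.
        return self.base_model(inputs_embeds=torch.cat([prefix, tok_emb], dim=1), **kwargs)

# Usage sketch (assuming a Hugging Face causal LM):
#   from transformers import AutoModelForCausalLM
#   model = SoftPromptModel(AutoModelForCausalLM.from_pretrained("gpt2"))
#   Only model.prefix is passed to the optimizer during finetuning.
```

Because only the prefix is trained, the storage and memory costs are a tiny fraction of full finetuning.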
https://twitter.com/rasbt/status/1637803700944093184
The next trend will likely be extending LLM capabilities with vision, other modalities, and multitask training.
https://twitter.com/rasbt/status/1639625228622917632
BERTScore can be used for translations and summaries, and it captures semantic similarity better than traditional metrics like BLEU and ROUGE. In particular, it's more robust to paraphrasing.
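The following sketch shows the core of the BERTScore idea, assuming a Hugging Face BERT encoder: contextual token embeddings of the candidate and reference are greedily matched by cosine similarity. The official bert-score package adds refinements such as IDF weighting, baseline rescaling, and special-token handling that are omitted here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence):
    # Contextual token embeddings, L2-normalized so dot products are cosine similarities.
    batch = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state[0]  # (seq_len, hidden_dim)
    return torch.nn.functional.normalize(hidden, dim=-1)

def bertscore_f1(candidate, reference):
    c, r = embed(candidate), embed(reference)
    sim = c @ r.T                             # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean()  # best reference match per candidate token
    recall = sim.max(dim=0).values.mean()     # best candidate match per reference token
    return (2 * precision * recall / (precision + recall)).item()

# Paraphrases score high even though they share few exact n-grams.
print(bertscore_f1("the weather is cold today", "it is freezing outside today"))
```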
https://twitter.com/rasbt/status/1639271663735828483
Whereas BLEU is commonly used for translation tasks, ROUGE is a popular metric for scoring text summaries. Similar to BLEU, it's usually applied to n-grams, but for simplicity, we will focus on 1-grams (single words), as in the sketch below. There are quite a few similarities between BLEU and ROUGE.
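A minimal 1-gram sketch of both metrics, with made-up example sentences: BLEU-1 is precision-oriented (how much of the candidate appears in the reference, with clipped counts), while ROUGE-1 recall asks how much of the reference is covered by the candidate. Real BLEU additionally combines several n-gram orders and applies a brevity penalty.

```python
from collections import Counter

def unigram_scores(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clipped overlap: a candidate word counts at most as often as it appears in the reference.
    overlap = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    bleu1_precision = overlap / len(cand)  # fraction of candidate words found in the reference
    rouge1_recall = overlap / len(ref)     # fraction of reference words covered by the candidate
    return bleu1_precision, rouge1_recall

p, r = unigram_scores("the cat sat on the mat", "the cat is on the mat")
print(f"BLEU-1 precision: {p:.2f}, ROUGE-1 recall: {r:.2f}")
```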
https://twitter.com/rasbt/status/1638895926399107073
BLEU was originally developed to capture or automate the essence of human evaluation of translated text.