Finetuning the embedding model can allow for more meaningful embedding representations, leading to better retrieval performance.
@llama_index has an abstraction for finetuning sentence-transformers embedding models that makes the process quite seamless.
Let's see how it works 👇
Finetuning means updating the model weights themselves on a corpus of data so the model works better for a specific use case.
E.g. when embedding arXiv papers, we want the embeddings to align semantically with the concepts and not filler words like “This paper is…”.
.@llama_index has guides on how to finetune embeddings in different ways:
- finetune the embedding model itself (sentence-transformers models only)
- finetune an adapter over any black-box embedding model (stay tuned for this one 🔥)
3 steps for finetuning embeddings:
1. Prepare the data via generate_qa_embedding_pairs()
2. Finetune the model via SentenceTransformersFinetuneEngine
3. Evaluate the finetuned model
Here we use two PDFs: lyft.pdf and uber.pdf.
We'll use lyft.pdf to create the training set and evaluate the finetuned model on an evaluation set created from uber.pdf.
We create nodes from both the training and evaluation PDFs (see the sketch below).
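A minimal sketch of that loading step, assuming the llama_index API from around the time of the guide (import paths may differ in newer releases) and illustrative file paths:

```python
from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser

def load_corpus(files):
    # Load the PDFs and split them into nodes (chunks) for QA-pair generation.
    docs = SimpleDirectoryReader(input_files=files).load_data()
    parser = SimpleNodeParser.from_defaults()
    return parser.get_nodes_from_documents(docs)

# lyft.pdf -> training corpus, uber.pdf -> evaluation corpus
train_nodes = load_corpus(["lyft.pdf"])
val_nodes = load_corpus(["uber.pdf"])
```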
Next we use generate_qa_embedding_pairs() to create the training and evaluation datasets from the nodes.
For each chunk, an LLM generates synthetic queries about that chunk.
Each pair of (generated question, text chunk used as context) becomes a datapoint in the datasets.
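Roughly, assuming the default LLM settings (the datasets can also be saved for reuse):

```python
from llama_index.finetuning import generate_qa_embedding_pairs

# An LLM writes questions answerable from each node's text,
# yielding (generated question, source chunk) pairs.
train_dataset = generate_qa_embedding_pairs(train_nodes)
val_dataset = generate_qa_embedding_pairs(val_nodes)

# Optionally persist the datasets to disk.
train_dataset.save_json("train_dataset.json")
val_dataset.save_json("val_dataset.json")
```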
Next we use the SentenceTransformersFinetuneEngine to create the finetuning engine.
We pass it the necessary parameters: the train and evaluation datasets, the model to finetune, the path to output the finetuned model, etc.
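For example (the base model ID and output path here are illustrative):

```python
from llama_index.finetuning import SentenceTransformersFinetuneEngine

finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en",        # base sentence-transformers model to finetune
    model_output_path="finetuned_model", # where the finetuned weights get written
    val_dataset=val_dataset,
)
```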
Finally, we run the finetuning with the engine created above and evaluate the new model, comparing it against the base model and OpenAI embeddings.
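Finetuning is a single call; the hit-rate evaluation below is a rough, hypothetical sketch of the guide's approach (retrieve top-k nodes for each generated query and check whether its source chunk is among them):

```python
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.schema import TextNode

finetune_engine.finetune()
embed_model = finetune_engine.get_finetuned_model()

def hit_rate(dataset, embed_model, top_k=5):
    # Hypothetical helper: fraction of generated queries whose source chunk
    # appears in the top-k retrieved nodes.
    nodes = [TextNode(id_=node_id, text=text) for node_id, text in dataset.corpus.items()]
    ctx = ServiceContext.from_defaults(embed_model=embed_model)
    retriever = VectorStoreIndex(nodes, service_context=ctx).as_retriever(similarity_top_k=top_k)

    hits = 0
    for query_id, query in dataset.queries.items():
        expected_id = dataset.relevant_docs[query_id][0]
        retrieved_ids = [r.node.node_id for r in retriever.retrieve(query)]
        hits += expected_id in retrieved_ids
    return hits / len(dataset.queries)

# Compare the finetuned model against the base model / OpenAI embeddings on val_dataset.
print(hit_rate(val_dataset, embed_model))
```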
On the hit rate metric, the finetuned model performs significantly better than the base model it was finetuned from, and comes close to the OpenAI embedding model.
Previously we've seen @LangChainAI ParentDocumentRetriever, which creates smaller chunks from a document and links them back to the original documents during retrieval.
MultiVectorRetriever is a more customizable version of that. Let's see how to use it 🧵👇
@LangChainAI ParentDocumentRetriever automatically creates the small chunks and links each one to its parent document ID.
If we want to create additional vectors for each document beyond the smaller chunks, MultiVectorRetriever lets us store and retrieve those (sketched below).
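A minimal sketch of the smaller-chunks variant, assuming Chroma + OpenAI embeddings and an in-memory docstore (all illustrative choices); the same pattern works for other per-document vectors such as summaries:

```python
import uuid

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

docs = [Document(page_content="...a long document loaded elsewhere...")]

# Vector store indexes the small chunks; docstore holds the full parent documents.
vectorstore = Chroma(collection_name="full_documents", embedding_function=OpenAIEmbeddings())
store = InMemoryStore()
id_key = "doc_id"
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=store, id_key=id_key)

doc_ids = [str(uuid.uuid4()) for _ in docs]
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)

sub_docs = []
for doc_id, doc in zip(doc_ids, docs):
    for sub_doc in child_splitter.split_documents([doc]):
        sub_doc.metadata[id_key] = doc_id  # link each small chunk to its parent
        sub_docs.append(sub_doc)

retriever.vectorstore.add_documents(sub_docs)      # embed the small chunks
retriever.docstore.mset(list(zip(doc_ids, docs)))  # store the parent documents

# Similarity search runs over the small chunks, but the parent documents come back.
parents = retriever.get_relevant_documents("some query")
```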
While splitting the raw text for Retrieval Augmented Generation (RAG), what should be the ideal length of each chunk? What’s the sweet spot?
Strike a balance between small vs large chunks using @LangChainAI ParentDocumentRetriever
Let's see how to use it 👇🧵
The issue:
- smaller chunks capture more accurate semantic meaning once embedded
- but they can lose the bigger picture and sound out of context, making it difficult for the LLM to properly answer the user's query with so little context per chunk.
@LangChainAI ParentDocumentRetriever addresses this issue by creating embeddings from the smaller chunks only, since they capture semantic meaning better.
But when building the LLM input, it uses the larger parent chunks, which carry more context (see the sketch below).
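A minimal sketch, again assuming Chroma + OpenAI embeddings and an in-memory docstore (illustrative choices):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

docs = [Document(page_content="...long raw text for RAG...")]

# Small chunks get embedded; larger "parent" chunks get sent to the LLM.
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)

vectorstore = Chroma(collection_name="split_parents", embedding_function=OpenAIEmbeddings())
store = InMemoryStore()

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
retriever.add_documents(docs)

# Similarity search runs over the small chunks, but the larger parent
# chunks are what come back as LLM context.
retrieved = retriever.get_relevant_documents("some query")
```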