Rohan
Aug 26, 2023
Previously we've seen @LangChainAI's ParentDocumentRetriever, which creates smaller chunks from a document and links them back to the original documents during retrieval.

MultiVectorRetriever is a more customizable version of that. Let's see how to use it 🧵👇
ParentDocumentRetriever automatically creates the small chunks and links each one to its parent document's id.

If we want to create additional vectors for each document, beyond just smaller chunks, we can do that and then retrieve through them using MultiVectorRetriever.
We can customize how these additional vectors are created for each parent document. Here are some approaches @LangChainAI mentions in their documentation:

- smaller chunks
- a summary vector for each document
- vectors of hypothetical questions for each document
Now let's try to understand the example code from the LangChain documentation 👇
First we create the retriever itself.

Here we pass:
- vectorstore: stores all the vectors for the documents
- docstore: stores the documents themselves
- id_key: the metadata field that holds the parent document id for each vector
We also create a unique UUID for each document.

We'll use these ids to store the documents in the docstore.

MultiVectorRetriever uses these ids to fetch the parent documents after the vector similarity search.
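Here's a minimal sketch of that setup in plain Python. The dict and list are toy stand-ins I'm using in place of LangChain's real docstore and vectorstore classes, just to show how the ids tie everything together:

```python
import uuid

docs = ["Full text of document one.", "Full text of document two."]

# Toy stand-ins for the real components (LangChain would use e.g. an
# InMemoryStore as the docstore and Chroma as the vectorstore):
vectorstore = []   # will hold (text, metadata) entries, one per vector
docstore = {}      # maps doc_id -> full parent document
id_key = "doc_id"  # metadata field that links a vector to its parent

# One unique id per parent document
doc_ids = [str(uuid.uuid4()) for _ in docs]

# Store the parent documents in the docstore under their ids
for doc_id, doc in zip(doc_ids, docs):
    docstore[doc_id] = doc
```

At retrieval time, any vector whose metadata carries one of these ids can be traced back to its parent through the docstore.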
Now let's reimplement the ParentDocumentRetriever behaviour using MultiVectorRetriever:

- iterate over each document
- split the document into child chunks
- store each small chunk in the vectorstore, with the parent doc_id as metadata
Since MultiVectorRetriever is more flexible and customizable, we have to manually add the additional vectors to the vectorstore and set the doc_id of the associated parent document as a metadata field.

We also have to add the documents, keyed by their ids, to the docstore.
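The steps above can be sketched like this. `split_into_chunks` is a hypothetical stand-in for a real text splitter (LangChain's docs use RecursiveCharacterTextSplitter), and the list/dict again stand in for the real stores:

```python
import uuid

docs = ["A long parent document " * 20, "Another long parent document " * 20]
doc_ids = [str(uuid.uuid4()) for _ in docs]

docstore = dict(zip(doc_ids, docs))  # parent docs, keyed by id
vectorstore = []                     # (chunk_text, metadata) pairs to be embedded
id_key = "doc_id"

def split_into_chunks(text, size=100):
    """Hypothetical splitter: fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

for doc_id, doc in zip(doc_ids, docs):
    for chunk in split_into_chunks(doc):
        # each small chunk carries its parent's id in the metadata
        vectorstore.append((chunk, {id_key: doc_id}))
```

Every entry in the vectorstore now points back, via `doc_id`, to a full parent document in the docstore.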
We can also create a summary for each document.

Oftentimes a summary captures more accurately what a chunk is about, leading to better retrieval.
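The summary approach follows the same pattern. In LangChain's example the summaries come from an LLM chain; the `summarize` function below is just a trivial stand-in so the shape of the indexing step is visible:

```python
import uuid

docs = [
    "MultiVectorRetriever links extra vectors to parent documents. It is flexible.",
    "ParentDocumentRetriever splits documents into chunks. It links them back.",
]
doc_ids = [str(uuid.uuid4()) for _ in docs]
docstore = dict(zip(doc_ids, docs))
vectorstore = []
id_key = "doc_id"

def summarize(text):
    """Hypothetical stand-in for an LLM summarization chain:
    here it just keeps the first sentence."""
    return text.split(". ")[0] + "."

# Index one summary vector per parent document
for doc_id, doc in zip(doc_ids, docs):
    vectorstore.append((summarize(doc), {id_key: doc_id}))
```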
Also, since we'll be matching these vectors against the user's query embedding, we might get better results by generating some hypothetical user questions for each document and storing their vectors in the vectorstore.
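Sketched the same way, with `generate_questions` as a hypothetical stand-in for an LLM that writes questions a user might plausibly ask about each document (note there are now several vectors per parent, all sharing one doc_id):

```python
import uuid

docs = ["MultiVectorRetriever maps extra vectors back to parent documents."]
doc_ids = [str(uuid.uuid4()) for _ in docs]
docstore = dict(zip(doc_ids, docs))
vectorstore = []
id_key = "doc_id"

def generate_questions(text, n=3):
    """Hypothetical stand-in for an LLM prompt that produces n
    questions a user might ask about this document."""
    return [f"Hypothetical question {i + 1} about: {text[:30]}" for i in range(n)]

# Index several question vectors per parent document
for doc_id, doc in zip(doc_ids, docs):
    for question in generate_questions(doc):
        vectorstore.append((question, {id_key: doc_id}))
```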
Based on the specific use case, we can create other kinds of vectors for each document as well.

For all of these vectors, we just need to make sure the doc_id is set as metadata. MultiVectorRetriever handles the rest, retrieving the original documents from these vectors.
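That retrieval step, which MultiVectorRetriever performs for us, boils down to: search the vectors, collect parent ids from the hit metadata, and look the parents up in the docstore. A toy end-to-end sketch (the word-overlap `similarity_search` is a hypothetical stand-in for real embedding similarity):

```python
import uuid

# Setup: two parent documents, each indexed via one summary-style vector entry
docs = ["Parent doc about retrievers.", "Parent doc about embeddings."]
doc_ids = [str(uuid.uuid4()) for _ in docs]
docstore = dict(zip(doc_ids, docs))
id_key = "doc_id"
vectorstore = [
    ("retrievers summary", {id_key: doc_ids[0]}),
    ("embeddings summary", {id_key: doc_ids[1]}),
]

def similarity_search(query, store):
    """Hypothetical stand-in for embedding similarity: naive word overlap."""
    def score(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(store, key=lambda entry: score(entry[0]), reverse=True)

def get_relevant_documents(query):
    # 1. similarity search over the small/summary/question vectors
    hits = similarity_search(query, vectorstore)
    # 2. collect parent ids from the hit metadata, de-duplicated in order
    ids = []
    for _, meta in hits:
        if meta[id_key] not in ids:
            ids.append(meta[id_key])
    # 3. return the full parent documents from the docstore
    return [docstore[i] for i in ids]

results = get_relevant_documents("tell me about retrievers")
```

Because several vectors can share one doc_id, the de-duplication in step 2 is what guarantees each parent document is returned at most once.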
MultiVectorRetriever documentation:

python.langchain.com/docs/modules/d…
Thanks for reading.

I write about AI, ChatGPT, LangChain etc. and try to make complex topics as easy as possible.

Stay tuned for more! 🔥 #ChatGPT #LangChain

