1/10 🧵💡 Ever wondered how to handle the token limitations of LLMs in text summarization? Here's the elegant idea behind the "refine" technique in @LangChainAI 🦜🔗, inspired by the "reduce" concept from functional programming. Let's dive deep! 🚀 @hwchase17 your PR is under review 😎
2/10 "Reduce" in python🐍 or "foldl" as it's known in Haskell, is a critical element in functional programming. this is a high order function that has 3 parameters: an iterable, a reduction function, and a starting value.
3/10
"foldl" / "reduce" applies a specified binary operation to successive elements of an iterable, accumulating the result to produce a single output. "reducing the list"
Let's simplify it with an example:
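Here's what that looks like in plain Python (a minimal, self-contained sketch):

```python
from functools import reduce

# Fold the list left-to-right, starting from 0:
# ((((0 + 1) + 2) + 3) + 4) = 10
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
print(total)  # 10
```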
4/10 Now, how does @LangChainAI 🦜🔗 leverage this concept for handling LLM token limitations?
When faced with a large piece of text, it can first chop it up into manageable chunks. This forms our list for the "reduce" operation.
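For example, with a text splitter (a minimal sketch, assuming the classic langchain package layout; the chunk sizes are just illustrative):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_text = " ".join(["LangChain makes building LLM apps composable."] * 500)

# Chop the large text into overlapping, manageable chunks;
# this list becomes the iterable for our "reduce" operation.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(long_text)
```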
5/10 Then, the reduction function is @LangChainAI calling our LLM. It uses a specific prompt that asks the LLM to either refine the existing summary with the additional context, or, if the context isn't useful, to return the original summary.
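The prompt looks roughly like this (my paraphrase of the idea, not LangChain's exact template):

```python
refine_prompt = """Your job is to produce a final summary.
We have an existing summary up to a certain point:
{existing_answer}
We have the opportunity to refine it with some new context below.
------------
{text}
------------
Given the new context, refine the original summary.
If the context isn't useful, return the original summary."""
```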
6/10 So the LLM's job is to take each chunk and refine the summary so far, based on the new context provided. It's essentially the "reduce" concept applied to distill a meaningful, concise summary from a large text. @LangChainAI 🦜🔗 does all the heavy lifting.
7/10 As for the starting value in this "reduce" operation, it's simply an empty string. This lets @LangChainAI 🦜🔗 and the LLM build up the final summary chunk by chunk, refining along the way.
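Putting it together in a few lines (a minimal sketch, assuming the classic langchain API and an OpenAI key in your env):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document

docs = [Document(page_content=c) for c in chunks]  # chunks from the splitter above
llm = ChatOpenAI(temperature=0)

# chain_type="refine" folds over the docs, refining the summary at each step
chain = load_summarize_chain(llm, chain_type="refine")
summary = chain.run(docs)
```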
9/10 One thing to note: the "refine" technique is sequential and can't run in parallel (unlike @LangChainAI's MapReduce, which we'll cover in future posts). This can be a downside if you're dealing with a really large volume of data, and it also makes several LLM calls.
10/10 The upside is that this approach produces a highly meaningful summary, which makes the trade-off worthwhile. @LangChainAI 🦜🔗 is an excellent solution for meaningful summarization of large texts with just a single chain!
1/17🧵Demystifying LLM memory🧠 mega thread featuring @LangChainAI 🦜🔗
In this thread I'll cover the most popular real-world approaches for integrating memory into our GenAI applications 🤖
2/17 THE GIST:
Memory is basically in-context learning: it's just passing extra context from our conversation (or the relevant parts of it) to the LLM in addition to our query. We augment our prompt with history, giving the LLM ad-hoc memory-like abilities such as coreference resolution.
Coreference resolution:
When someone says "@hwchase17 just tweeted. He wrote about @LangChainAI," we effortlessly understand that "he" refers to @hwchase17, thanks to our coreference resolution skills. It's a cognitive process that enables effective communication & understanding.
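A minimal sketch of this gist with the classic langchain API (assumes an OpenAI key in your env); the buffered history is what lets the LLM resolve "he":

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

chain = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # stores the raw transcript
)
chain.predict(input="@hwchase17 just tweeted. He wrote about @LangChainAI.")
# The transcript is prepended to the next prompt, so "he" can be resolved:
print(chain.predict(input="Who does 'he' refer to?"))
```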
0/12 📢🧵Unpopular Opinion thread - Vectorstores are here to stay! 🔐🚀
I've noticed a lot of tweets lately claiming that #LLMs with larger context windows will make vector databases obsolete. However, I respectfully disagree. Here's why:
1/12 @LangChainAI 🦜🔗 @pinecone 🌲 @weaviate_io @elastic @Redisinc @milvusio let me know what you think😎 I think you will like this.
2/12: Too much context hurts performance. As the context window expands, #LLMs can "forget" information from the beginning of the prompt. With contexts larger than ~50k tokens, this becomes a real challenge.
1/14🧵Real world CHUNKING best practices thread:
🔍 A common question I get is: "How should I chunk my data, and what's the best chunk size?" Here's my opinion, based on my experience with @LangChainAI 🦜🔗 and building production-grade GenAI applications.
2/14 Chunking is the process of splitting long pieces of text into smaller, hopefully semantically meaningful chunks. It's essential when dealing with large text inputs, as LLMs are limited in the number of tokens they can process at once (4k, 8k, 16k, 100k).
3/14 Eventually, we store all the chunks in a vectorstore like @pinecone 🌲, perform similarity search over them, and then use the results as context for the LLM.
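In code, the whole pipeline is a few lines (a minimal sketch; FAISS stands in for @pinecone 🌲 so it runs locally, the file name is hypothetical, and an OpenAI key is assumed in your env):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS  # pip install faiss-cpu

raw_text = open("contract.txt").read()  # hypothetical input document

# 1. chunk, 2. embed + store, 3. similarity search for LLM context
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_text(raw_text)
store = FAISS.from_texts(chunks, OpenAIEmbeddings())
context = store.similarity_search("What does the contract say about renewal?", k=4)
```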
1/13 🧵💡 Ever wondered how to handle the token limitations of LLMs? Here's one strategy: the "map-reduce" technique implemented in @LangChainAI 🦜🔗
Let's dive deep! @hwchase17 your PR is under review again 😎
2/13 MapReduce is not new. Famously introduced by @Google, it's a programming model for processing and generating large data sets with a parallel, distributed algorithm.
3/13 In essence, it divides the work into small parts that can be done simultaneously (the "mapping") and then merges the intermediate results back into one final result (the "reducing").
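In @LangChainAI 🦜🔗 this is nearly a one-liner (a minimal sketch, assuming the classic langchain API and an OpenAI key in your env):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document

docs = [Document(page_content=c) for c in chunks]  # chunks of your large text

# "map": summarize each doc independently (parallelizable);
# "reduce": merge the partial summaries into one final summary.
chain = load_summarize_chain(ChatOpenAI(temperature=0), chain_type="map_reduce")
summary = chain.run(docs)
```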
1/8 🚀 Let's go step by step through a "Chat with your Repo" assistant powered by @LangChainAI 🦜🔗 and @pinecone 🌲, all running smoothly on @googlecloud ☁️ Run. This was demoed at yesterday's HUGE @googlecloud @pinecone event in Tel Aviv 🇮🇱
2/8 Step 1? Vectorize your repository files. Using @googlecloud VertexAI embeddings and a couple of lines of @LangChainAI, you simply ingest these vectors into a @pinecone vectorstore.
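Roughly like this (a hedged sketch with the classic langchain integrations; the file path and index name are hypothetical, and it assumes pinecone.init(...) and GCP auth are already set up):

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import VertexAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone

docs = TextLoader("src/main.py").load()  # one repo file; loop over the repo in practice
docs = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed with VertexAI and ingest into an existing Pinecone index
Pinecone.from_documents(docs, VertexAIEmbeddings(), index_name="repo-index")
```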
3/8 Now, we use @googlecloud VertexAI embeddings along with context retrieved from @pinecone to augment the user's original prompt before sending it to the @googlecloud PaLM 2 LLM. This technique is also called in-context learning. With @LangChainAI, again, it's just a couple of lines of code.
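Something like this (again a hedged sketch; reuses the hypothetical "repo-index" from step 1):

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import VertexAIEmbeddings
from langchain.llms import VertexAI  # PaLM 2 text model
from langchain.vectorstores import Pinecone

vectorstore = Pinecone.from_existing_index("repo-index", VertexAIEmbeddings())

# Retrieved chunks are stuffed into the prompt: in-context learning
qa = RetrievalQA.from_chain_type(llm=VertexAI(), retriever=vectorstore.as_retriever())
print(qa.run("Where is the retry logic implemented in this repo?"))
```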
1/6 🌐💡 Singularity is here? Just read this blog post from @LangChainAI 🦜🔗 featuring @itstimconnors on multi-agent simulation. IMO it's amazing to witness how a few "hacks", such as a memory system plus some prompt engineering, can simulate human-like behavior 🤖
2/6 Inspired by @Stanford's "Generative Agents" paper:
Every agent in a GPTeam simulation has its unique personality, memories, and directives, creating human-like behavior👥
3/6 📚💬 "The appearance of an agentic human-like entity is an illusion. Created by a memory system and a fe of distinct Language Model prompts."- from GPTeam blog. This ad-hoc human behaviour is mind blowing🤯🤯🤯