Eden Marco
Jun 2 • 11 tweets • 6 min read
1/10 🧵💡 Ever wondered how to handle the token limitations of LLMs in text summarization? Here's the elegant idea behind the "refine" technique in @LangChainAI 🦜🔗, inspired by the "reduce" concept from functional programming. Let's dive in! 🚀 @hwchase17, your PR is under review 😎
2/10 "Reduce" in Python 🐍, or "foldl" as it's known in Haskell, is a cornerstone of functional programming. It is a higher-order function that takes three parameters: an iterable, a reduction function, and a starting value.
3/10 "foldl" / "reduce" applies a specified binary operation to successive elements of an iterable, accumulating the result to produce a single output: it "reduces" the list.

Let's simplify it with an example:
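(The original illustration didn't survive the unroll; here is a minimal Python sketch of the same idea, using `functools.reduce` to sum a list.)

```python
from functools import reduce

# reduce(function, iterable, initializer) applies the binary function
# left to right, threading the accumulated value through each step.
numbers = [1, 2, 3, 4, 5]

total = reduce(lambda acc, x: acc + x, numbers, 0)
print(total)  # 15

# The same fold, unrolled:
# ((((0 + 1) + 2) + 3) + 4) + 5 = 15
```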
4/10 Now, how does @LangChainAI 🦜🔗 leverage this concept to handle LLM token limitations?
When faced with a large piece of text, it first chops it up into manageable chunks. This forms our list for the "reduce" operation.
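A minimal sketch of the chunking step in plain Python (in practice @LangChainAI's text splitters handle separators and overlap for you; `chunk_text` below is an illustrative stand-in, not the library's API):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks small enough for the LLM's context window."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # overlap preserves context across boundaries
    return chunks

document = "word " * 1000  # stand-in for a large document
chunks = chunk_text(document, chunk_size=500, overlap=50)
print(len(chunks))  # this list is what we will "reduce" over
```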
5/10 Then, the reduction function is @LangChainAI calling our LLM. It uses a specific prompt that asks the LLM either to refine the existing summary with the additional context or, if the context isn't useful, to return the original summary unchanged.
6/10 So, the LLM's job is to take each chunk and refine the summary so far, based on the new context provided. It's essentially leveraging the "reduce" concept to distill a meaningful, concise summary from the large text. @LangChainAI 🦜🔗 does all the heavy lifting.
7/10 As for the starting value in this "reduce" operation, it's just an empty string. This lets @LangChainAI 🦜🔗 and the LLM build up the final summary chunk by chunk, refining along the way.
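Putting the pieces together, the whole refine technique can be written as a literal `functools.reduce`. This is a hedged sketch, not LangChain's actual implementation: `call_llm` is a placeholder that echoes the chunk back, and `REFINE_PROMPT` only approximates the chain's real prompt.

```python
from functools import reduce

REFINE_PROMPT = (
    "Existing summary:\n{summary}\n\n"
    "New context:\n{chunk}\n\n"
    "Refine the summary with the new context. "
    "If the context isn't useful, return the original summary."
)

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    # For this sketch we just echo back the chunk portion of the prompt.
    return prompt.split("New context:\n")[1].split("\n\n")[0]

def refine_step(summary: str, chunk: str) -> str:
    """The binary reduction function: fold one chunk into the running summary."""
    return call_llm(REFINE_PROMPT.format(summary=summary, chunk=chunk))

chunks = ["chunk one", "chunk two", "chunk three"]
final_summary = reduce(refine_step, chunks, "")  # starting value: empty string
print(final_summary)
```

With a real LLM behind `call_llm`, each step folds one more chunk of the document into the accumulated summary, exactly like `foldl` over a list.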
9/10 One thing to note here: the "refine" technique is sequential and can't run in parallel (unlike @LangChainAI's MapReduce chain, which we'll cover in future posts). This can be a downside if you're dealing with a really large volume of data, and it makes one LLM call per chunk.
10/10 The upside, however, is that this approach produces a highly meaningful summarization output, making it a worthwhile trade-off. This makes @LangChainAI 🦜🔗 an excellent solution for meaningful summarization of large texts with just a single chain!

python.langchain.com/en/latest/modu…
11/10 This elegant combination of functional programming and AI truly makes @LangChainAI a powerful tool for LLM-powered applications.

Check out the source code implementing this by @hwchase17:
github.com/hwchase17/lang…
Until next time, happy coding! 🚀
#ENDOFTHREAD
