Eden Marco
Jun 17 · 15 tweets · 7 min read
1/15 🧵 Real-world CHUNKING best practices thread:
🔍 A common question I get is: "How should I chunk my data, and what's the best chunk size?" Here's my opinion, based on my experience with @LangChainAI 🦜🔗 and building production-grade GenAI applications.
2/15 Chunking is the process of splitting long pieces of text into smaller, hopefully semantically meaningful, chunks. It's essential when dealing with large text inputs, as LLMs are limited in the number of tokens they can process at once (4k, 8k, 16k, 100k context windows).
3/15 Eventually, we store all the chunks in a vectorstore like @pinecone 🌲 and run similarity search over them, using the results as context for the LLM.
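The store-then-retrieve step can be sketched in plain Python. This is a toy, not the @pinecone API: the embeddings below are made-up three-dimensional vectors, and a real vectorstore uses approximate nearest-neighbor search rather than a full sort.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_search(query_vec, store, k=2):
    """Return the k chunks whose vectors are most similar to the query."""
    ranked = sorted(
        store,
        key=lambda item: cosine_similarity(query_vec, item["vector"]),
        reverse=True,
    )
    return [item["chunk"] for item in ranked[:k]]

# Toy "vectorstore": each entry pairs a chunk with its (made-up) embedding.
store = [
    {"chunk": "Chunking splits long text.", "vector": [1.0, 0.1, 0.0]},
    {"chunk": "LLMs have token limits.",    "vector": [0.2, 1.0, 0.1]},
    {"chunk": "Pinecone stores vectors.",   "vector": [0.0, 0.2, 1.0]},
]

query = [0.9, 0.2, 0.1]  # made-up embedding of the user's question
print(similarity_search(query, store, k=1))  # → ['Chunking splits long text.']
```

The retrieved chunks then get pasted into the LLM prompt as context.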
4/15 This approach, known as in-context learning or RAG (Retrieval-Augmented Generation), helps the language model answer with contextual understanding. 🧩🔎 (check my thread on RAG)
5/15 Ideally, we want to keep semantically related pieces of data together when chunking. In @LangChainAI 🦜🔗, we use TextSplitters for chunking.
6/15 We need to tell the @LangChainAI TextSplitters how we want to split the text and create the chunks. We can set the chunk size as well as an optional chunk overlap, though personally I don't often use the overlap feature.
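A minimal sketch of what chunk size and chunk overlap mean. This slices fixed character windows, which is not how LangChain's TextSplitters actually work (they split on separators and merge pieces back up), but it makes the two knobs concrete:

```python
def split_text(text, chunk_size=30, chunk_overlap=0):
    """Greedy fixed-size splitter: take `chunk_size`-character windows,
    stepping the window forward by `chunk_size - chunk_overlap` each time,
    so consecutive chunks share `chunk_overlap` characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "a" * 70
chunks = split_text(doc, chunk_size=30, chunk_overlap=10)
print([len(c) for c in chunks])  # → [30, 30, 30, 10]
```

The overlap keeps context that straddles a chunk boundary from being lost, at the cost of storing some text twice.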
7/15 The most effective strategy I've found is chunking by the document's existing formatting.
If we're chunking Python files and Wikipedia text files, we ought to chunk them differently according to their file type.
8/15 Example: in Python, a good separator for chunking is '\ndef', which marks the start of a function. It's considered best practice to keep functions short, typically no longer than 20 lines of code (unless, of course, you're a Data Scientist with a knack for longer functions 😂).
9/15 So here a chunk size of 300 can be a good heuristic IMO.

Remember, there is no silver bullet ☑️ and you MUST benchmark everything you do to get optimal results.
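A from-scratch sketch of the '\ndef' idea: split the source on the separator, then greedily merge pieces back together while they still fit the chunk budget. This mimics the behavior of a separator-based splitter; it is not LangChain's actual implementation.

```python
def split_python_source(source, separator="\ndef ", chunk_size=300):
    """Split Python source on function boundaries, then merge
    consecutive pieces while they still fit in `chunk_size` chars."""
    pieces = source.split(separator)
    # Re-attach the separator we split on (except to the first piece),
    # so each merged chunk still starts with a complete 'def'.
    pieces = [pieces[0]] + [separator + p for p in pieces[1:]]
    chunks, current = [], ""
    for piece in pieces:
        if current and len(current) + len(piece) > chunk_size:
            chunks.append(current)
            current = piece
        else:
            current += piece
    if current:
        chunks.append(current)
    return chunks

code = "import os\ndef a():\n    return 1\ndef b():\n    return 2\n"
for chunk in split_python_source(code, chunk_size=25):
    print(repr(chunk))
```

With a tiny budget each function lands in its own chunk; with a large one the pieces merge back into the original source, so no text is ever dropped.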
10/15 An advantage of @LangChainAI 🦜🔗 text splitters is the ability to create dynamically optimized splitters based on our needs, so we have full flexibility here.
11/15 However, imagine having a ready-to-go text splitter specifically tailored to your file extension: .md, .html, or .py files.
@hwchase17 and the @LangChainAI 🦜🔗 team, please consider implementing this! It could save us lazy devs tons of time with a "best practice" built in.
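Until something like that ships, an extension-to-separators lookup is easy to roll yourself. The separator lists below are illustrative guesses at sensible per-format boundaries, not a LangChain API:

```python
# Hypothetical mapping from file extension to format-aware separators,
# ordered from most to least semantically meaningful.
SEPARATORS_BY_EXTENSION = {
    ".py":   ["\nclass ", "\ndef ", "\n\n", "\n"],
    ".md":   ["\n## ", "\n### ", "\n\n", "\n"],
    ".html": ["</article>", "</section>", "</p>", "\n"],
}

def separators_for(filename, default=("\n\n", "\n", " ")):
    """Pick a separator list based on the file's extension,
    falling back to generic paragraph/line/word boundaries."""
    for ext, seps in SEPARATORS_BY_EXTENSION.items():
        if filename.endswith(ext):
            return seps
    return list(default)

print(separators_for("utils.py"))   # → ['\nclass ', '\ndef ', '\n\n', '\n']
print(separators_for("notes.txt"))  # → ['\n\n', '\n', ' ']
```

You'd then hand the chosen list to whatever separator-based splitter you use.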
12/15 Rule of thumb 👍: when determining the chunk size, aim for balance. Chunks should be small enough for the LLM to process effectively, while long enough to give a human a clear sense of the semantic meaning within each chunk.
13/15 For text files, I've found that a chunk size of 500 works well.
When chunking is done correctly, it greatly improves information retrieval. Remember to consider the type of file you're working with: each file format calls for a different set of chunking rules.
14/15 I teach @LangChainAI 🦜🔗 in depth in my @udemy course, with almost 5k students and 630+ reviews.
udemy.com/course/langcha…

Twitter-only limited discount:
TWITTER9DCC71C67A9AA
15/15 What are your best @LangChainAI 🦜🔗 chunking strategies?

Would love to hear your thoughts 😎😎😎
@pinecone 🌲 would love to hear your take on this as well.

#ENDOFTHREAD🧵🧵🧵
