Eden Marco
Customer Engineer @google cloud | Best-seller @udemy Instructor
Jun 30, 2023 19 tweets 6 min read
1/17🧵Demystifying LLM memory🧠 mega thread featuring @LangChainAI 🦜🔗
In this thread I will cover the most popular real-world approaches for integrating memory into our GenAI applications 🤖 2/17 THE GIST:
Memory is basically in-context learning: we pass extra context from our conversation (or the relevant parts of it) to the LLM alongside our query. Augmenting the prompt with history gives the LLM ad-hoc memory-like abilities, such as coreference resolution
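The gist above can be sketched in a few lines of plain Python. This is an illustrative toy, not a real LangChain API: the model itself is stateless, so "memory" is just prepending prior turns to every new prompt.

```python
# Toy sketch of "memory as in-context learning": the LLM has no state,
# so we rebuild the prompt with the conversation history on every call.

def build_prompt(history: list[tuple[str, str]], query: str) -> str:
    """Augment the current query with prior (human, AI) turns."""
    lines = []
    for user_msg, ai_msg in history:
        lines.append(f"Human: {user_msg}")
        lines.append(f"AI: {ai_msg}")
    lines.append(f"Human: {query}")
    lines.append("AI:")
    return "\n".join(lines)

history = [("My name is Eden.", "Nice to meet you, Eden!")]
prompt = build_prompt(history, "What is my name?")
# The history is what lets the LLM resolve "my name" back to "Eden".
print(prompt)
```

LangChain's memory classes do essentially this bookkeeping for you, with variations on what gets kept (full buffer, a window, a running summary).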
Jun 23, 2023 13 tweets 4 min read
0/12 📢🧵Unpopular Opinion thread - Vectorstores are here to stay! 🔐🚀

I've noticed a lot of tweets lately discussing how #LLMs with larger context windows will make vector databases obsolete. However, I respectfully disagree. Here's why: 1/12 @LangChainAI 🦜🔗 @pinecone 🌲 @weaviate_io @elastic @Redisinc @milvusio let me know what you think😎 I think you will like this.
Jun 17, 2023 15 tweets 7 min read
1/14🧵Real world CHUNKING best practices thread:
🔍 A common question I get is: "How should I chunk my data and what's the best chunk size?" Here's my opinion based on my experience with @LangChainAI 🦜🔗 and building production-grade GenAI applications. 2/14 Chunking is the process of splitting long pieces of text into smaller, hopefully semantically meaningful chunks. It's essential when dealing with large text inputs, as LLMs often have limits on the number of tokens that can be processed at once (4k, 8k, 16k, 100k).
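A minimal fixed-size chunker with overlap shows the core idea. This is a simplified sketch of what splitters like LangChain's RecursiveCharacterTextSplitter do; the real splitter additionally tries semantic boundaries (paragraphs, sentences) before falling back to hard cuts.

```python
# Fixed-size chunking with overlap: consecutive chunks share `overlap`
# characters so context at the boundaries isn't lost.

def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into ~chunk_size-char chunks, each overlapping the last."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping the overlap
    return chunks

doc = "x" * 250
pieces = chunk_text(doc, chunk_size=100, overlap=20)
print(len(pieces))  # chunks start at 0, 80, 160, 240 -> 4 chunks
```

The "best" chunk size depends on your embedding model and retrieval use case; the overlap is the knob that trades storage for boundary context.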
Jun 10, 2023 13 tweets 6 min read
1/13 🧵💡 Ever wondered how to handle token limitations of LLMs? Here's one strategy of the "map-reduce" technique implemented in @LangChainAI 🦜🔗
Let's deep dive! @hwchase17 your PR is under review again😎 2/13 MapReduce is not new. Famously introduced by @Google, it's a programming model for processing and generating large data sets with a parallel, distributed algorithm.
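Applied to summarization, the pattern can be sketched like this. The `summarize` function is a hypothetical placeholder standing in for an LLM call; LangChain's map-reduce summarization chain orchestrates the same two phases for you.

```python
# Map-reduce summarization sketch: summarize each chunk independently
# (the "map" step, which can run in parallel), then combine the partial
# summaries into one final summary (the "reduce" step).

from concurrent.futures import ThreadPoolExecutor

def summarize(text: str) -> str:
    # Placeholder for an LLM call; here we just keep the first sentence.
    return text.split(".")[0] + "."

def map_reduce_summarize(chunks: list[str]) -> str:
    with ThreadPoolExecutor() as pool:        # map: each chunk separately
        partials = list(pool.map(summarize, chunks))
    return summarize(" ".join(partials))      # reduce: merge the partials

chunks = ["First point. Detail.", "Second point. More detail."]
print(map_reduce_summarize(chunks))
```

Because the map step has no dependencies between chunks, it parallelizes well, which is exactly why this beats sequential approaches on very long documents.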
Jun 8, 2023 9 tweets 7 min read
1/8 🚀 Let's go step by step through a "Chat with your Repo" assistant powered by @LangChainAI🦜🔗 and @pinecone🌲, all running smoothly on @googlecloud ☁️ Cloud Run. This was demoed at yesterday's HUGE @googlecloud @pinecone event in Tel Aviv 🇮🇱

@hwchase17 counting on you for next time😎 2/8 Step 1? Vectorize your repository files. Using @googlecloud VertexAI embeddings and a couple of lines of @LangChainAI, you simply ingest these vectors into a @pinecone vectorstore.
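Step 1 can be sketched without any cloud dependencies. Here `embed()` is a toy stand-in for the VertexAI embeddings call and a plain dict stands in for the Pinecone index; the names are illustrative, not real APIs.

```python
# Ingestion sketch: embed each file's text, then upsert the vector
# plus source metadata into an index keyed by file path.

import hashlib

def embed(text: str) -> list[float]:
    # Placeholder embedding; a real app would call an embeddings model.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

index: dict[str, dict] = {}  # stands in for a vector store

def ingest(files: dict[str, str]) -> None:
    for path, text in files.items():
        index[path] = {"vector": embed(text), "metadata": {"source": path}}

ingest({"README.md": "Project docs", "main.py": "print('hi')"})
print(sorted(index))
```

The metadata (here just the source path) is what later lets the assistant cite which repo file an answer came from.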
Jun 5, 2023 6 tweets 4 min read
1/6🌐💡Singularity is here? Just read this blog from @LangChainAI 🦜🔗 featuring @itstimconnors on multi-agent simulation. IMO it's amazing to witness how a few "hacks" such as a memory system plus some prompt engineering can simulate human-like behavior 🤖 2/6 Inspired by @Stanford's "Generative Agents" paper,
every agent in a GPTeam simulation has its own unique personality, memories, and directives, creating human-like behavior👥
Jun 3, 2023 9 tweets 6 min read
🧵We all spend too much time scouring LinkedIn/ Twitter before meeting someone new🕵🏽
So here comes the Ice Breaker LLM agent app. Just input a name, and it fetches social media to provide a concise summary, interesting facts, and a fun icebreaker!
Built with @LangChainAI🦜 & @pinecone🌲 1/7 In just one weekend, this journey I created, shared on @udemy, has blown up in ways I didn't expect🤖🚀

Teaching how easy it is to create cool & powerful LLM apps with @LangChainAI 🦜🔗 + @pinecone 🌲 has gone viral 🚀
Jun 2, 2023 11 tweets 6 min read
1/10 🧵💡 Ever wondered how to handle token limitations of LLMs in text summarization? Here's the elegant idea of the "refine" technique in @LangChainAI 🦜🔗, inspired by the "reduce" concept in functional programming. Let's deep dive! 🚀 @hwchase17 your PR is under review 😎 2/10 "Reduce" in Python🐍, or "foldl" as it's known in Haskell, is a cornerstone of functional programming. It's a higher-order function that takes 3 parameters: an iterable, a reduction function, and a starting value.
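The fold idea maps directly onto "refine"-style summarization: start from a summary of the first chunk, then repeatedly fold the next chunk into the running summary. In this sketch `combine` is a hypothetical stand-in for the LLM's "refine this summary with new context" prompt.

```python
# "Refine" as a left fold: ((start ⊕ chunk1) ⊕ chunk2) ⊕ chunk3 ...

from functools import reduce

def combine(summary: str, chunk: str) -> str:
    # Placeholder: a real chain would prompt the LLM with the current
    # summary plus the new chunk and ask for an updated summary.
    return f"{summary} + {chunk}"

chunks = ["intro", "methods", "results"]
final = reduce(combine, chunks, "start")
print(final)  # start + intro + methods + results
```

Unlike map-reduce, refine is inherently sequential (each step depends on the previous summary), but each LLM call only ever sees one chunk plus a short summary, which is how it sidesteps the token limit.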
May 27, 2023 13 tweets 8 min read
🧵🚀 Following my last thread on "in-context learning", now it's time to explain how we can digest our custom data so that LLMs 🤖 can use it. Spoiler alert: @LangChainAI 🦜🔗 and a vector store like @pinecone 🌲 will do all the work for us.

1/12 This is a laser-focused thread 🧵 for devs and software engineers. Even if you have zero AI knowledge (like I did just 6 months ago), I will be simplifying key data concepts for any GenAI application💡
May 19, 2023 11 tweets 7 min read
🧵 Ever wanted to talk with your LLM🤖 on some custom data that it wasn't originally trained on?
@LangChainAI 🦜🔗+ @pinecone 🌲vectorstore will do all the heavy lifting for you. Here's a simplified explanation using a series of 8 illustrations I made.

#GenAI 1/8 Assume you've got documentation for an internal library 📚. When you directly ask the LLM about the library, it can't answer, as it wasn't trained on it 🤷‍♂️. No worries! @LangChainAI + @pinecone is here to help 🚀
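The retrieval step behind "chat with your custom data" can be sketched with toy 2-D vectors. Everything here is illustrative: the hand-written vectors stand in for a real embeddings model, and the dict stands in for a vector store like Pinecone.

```python
# Retrieval sketch: embed the question, find the most similar stored
# chunk by cosine similarity, and stuff it into the prompt as context.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

store = {
    "The lib's auth module uses API keys.": [1.0, 0.1],
    "Totally unrelated cooking recipe.":    [0.0, 1.0],
}

def retrieve(query_vec: list[float]) -> str:
    return max(store, key=lambda doc: cosine(store[doc], query_vec))

context = retrieve([0.9, 0.2])  # pretend this embeds "how do I authenticate?"
prompt = f"Answer using this context:\n{context}\nQuestion: how do I authenticate?"
print(context)
```

The LLM never needs to have been trained on the internal docs: the relevant chunk is retrieved at query time and handed over as in-context information.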