Harrison Chase
Apr 17, 2023
🤖Generative Agents🤖

Last week, Park et al. released “Generative Agents”, a paper simulating interactions between dozens of agents

We gave it a close read, and implemented one of the novel components it introduced: a long-term, reflection-based memory system

🧵
If you haven’t read the paper, you absolutely should

Link: arxiv.org/abs/2304.03442

"We demonstrate through ablation that the components of our agent architecture—observation, planning, and reflection—each contribute critically to the believability of agent behavior"
One of the novel components was an "architecture that makes it possible for generative agents to remember, retrieve, reflect, interact with other agents" - this is what we tried to recreate

Notebook here: python.langchain.com/en/latest/use_…
There are a lot of parts to this architecture. Two notable things:

🪞a reflection step
🎊a retrieval step

The reflections contribute to the agent's memory stream, which is then retrieved and used to act
Let's talk about retrieval first. We've introduced a lot of different retrievers over the past few weeks - how does the one used in this paper compare?

It can essentially be viewed as a "Time Weighted VectorStore Retriever" - a retriever that combines recency with relevance
As such, we implemented a standalone TimeWeightedVectorStoreRetriever in LangChain

As shown in the sketch below, you can specify a decay rate to adjust between recency and relevance

Docs: python.langchain.com/en/latest/modu…
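
Roughly, setting one up looks like this (a minimal sketch based on the docs at the time - import paths and the FAISS constructor have shifted in later LangChain releases, so treat this as illustrative rather than canonical):

```python
# Minimal sketch of a TimeWeightedVectorStoreRetriever backed by FAISS.
# Import paths reflect LangChain circa April 2023.
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality of OpenAI embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

# decay_rate trades recency off against relevance: values near 0 behave like a
# plain similarity search, larger values discount older memories more heavily.
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore,
    decay_rate=0.01,
    k=4,
)

retriever.add_documents([Document(page_content="I had coffee with Sam this morning")])
docs = retriever.get_relevant_documents("what did I do this morning?")
```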

So now that we have this retriever, how is it used in memory?
There are two key methods: `add_memory` and `summarize_related_memories`

When an agent makes an observation, it stores the memory:

1. An LLM scores the memory's importance (1 for mundane, 10 for poignant)
2. The observation and its importance score are stored in the retrieval system
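
Here's a rough sketch of that flow, reusing the retriever from above (illustrative only, not the exact notebook code - the importance prompt wording here is made up):

```python
from datetime import datetime

from langchain.llms import OpenAI
from langchain.schema import Document

llm = OpenAI(temperature=0)

IMPORTANCE_PROMPT = (
    "On a scale of 1 to 10, where 1 is purely mundane and 10 is extremely "
    "poignant, rate the likely importance of the following memory. "
    "Respond with a single integer.\nMemory: {observation}\nRating:"
)


def add_memory(observation: str) -> None:
    # 1. An LLM scores the memory's importance (1 = mundane, 10 = poignant).
    rating = llm(IMPORTANCE_PROMPT.format(observation=observation))
    importance = int(rating.strip()) / 10.0

    # 2. The observation and its importance are stored in the retrieval system;
    #    the time-weighted retriever timestamps the document on insertion.
    retriever.add_documents(
        [
            Document(
                page_content=observation,
                metadata={"importance": importance, "created_at": datetime.now()},
            )
        ]
    )


add_memory("Tommie remembers his dog, Bruno, from when he was a kid")
```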
When an agent responds to an observation:

1. It generates one or more queries for the retriever, which fetches documents based on salience, recency, and importance
2. It summarizes the retrieved information
3. It updates the last_accessed_time for the documents that were used
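
And a correspondingly rough sketch of the response side, reusing the llm and retriever defined above (again illustrative; the summarization prompt is made up):

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

SUMMARY_PROMPT = PromptTemplate.from_template(
    "Summarize the following memories as they relate to: {query}\n\n{memories}"
)


def summarize_related_memories(query: str) -> str:
    # 1. The query hits the retriever, which scores documents by a blend of
    #    salience (similarity) and recency (and importance too, if the retriever
    #    is constructed with other_score_keys=["importance"]).
    docs = retriever.get_relevant_documents(query)

    # 2. Summarize whatever came back so it can be stuffed into the agent's
    #    response prompt.
    memories = "\n".join(doc.page_content for doc in docs)
    chain = LLMChain(llm=llm, prompt=SUMMARY_PROMPT)

    # 3. Retrieval already bumps last_accessed_at on the returned documents,
    #    so frequently used memories stay recent for future lookups.
    return chain.run(query=query, memories=memories)
```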
So let’s now see this in action! We can simulate what happens by feeding observations to the agent and seeing how the summary of the agent is updated over time

Here we do a simple update with only a few observations
We can push this even further and update with ~20 observations (a full day's worth)

We can then “interview” the agent before and after the day - notice the change in the agent's responses!
Finally, we can create a simulation of two agents talking to each other.

This is a far cry from the 25 agents the paper simulated, but it's still interesting to see the conversation + interview them before and after

More from @hwchase17

Oct 8, 2024
🚀We're launching "long-term memory" support in LangGraph

At its core, long-term memory is "just" a persistent document store that lets you *put*, *get*, and *search* for memories you've saved

Why so simple?

🧵
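
A minimal sketch of what that looks like (namespace, keys, and values below are made up; see the LangGraph docs for the exact API, which also supports semantic search if you configure an embedding index):

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
namespace = ("user_123", "memories")  # hypothetical namespace

# put: save a memory under a namespace + key
store.put(namespace, "food-preference", {"text": "Alice prefers vegetarian food"})

# get: fetch a specific memory back by key
item = store.get(namespace, "food-preference")
print(item.value)  # {'text': 'Alice prefers vegetarian food'}

# search: look up memories within a namespace
for result in store.search(namespace):
    print(result.key, result.value)
```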
🧠The idea of memory is tantalizing, but also really vague

What does it even mean for an application to have memory?

Much like agents, there's a lot of hype and interest in this area, without a clear definition of what it actually means
🥇We saw that applications that successfully implemented memory were implementing it in an application-specific way

A coding app may care about a user's language, preferred libraries, and proficiency level

A companion app may care about a user's name, age, friends, etc.
Oct 14, 2023
⛓️Chain of Verification

A great new paper from Meta on a prompting technique to reduce hallucinations

🦜🔗Sourajit Roy Chowdhury implemented this in @LangChainAI **along with some improvements**

📃And he wrote a blog on it

🧵Let's dive in (this is why I love the LC community!)


Most important link: the GitHub repo

This is a well-documented, well-implemented repo - that takes a lot of time

Big 👏 and ⭐️ to Sourajit for not only implementing this paper, but implementing it in such a comprehensive and helpful way

github.com/ritun16/chain-…
First, I would start off by checking out the original paper: arxiv.org/abs/2309.11495

I would also look at the great threads on the topic from @arankomatsuzaki, @jaseweston, and @johnjnay.
Sep 21, 2023
🤖Agents from scratch

We've rewritten all 8 of our agent types using LangChain Expression Language (LCEL) and prompts from the Hub

This makes them more modular, understandable, and therefore more customizable

This customizability is crucial for teams looking to go to production

Long 🧵
If you want to jump right into it, we've updated the "Getting Started" page for agents to go over all the individual components

We then show how to create agents from these individual components

It's a great resource to build up a solid base understanding

python.langchain.com/docs/modules/a…
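
To give a flavor of what this looks like, here's a rough sketch of an OpenAI-functions-style agent assembled from those components (adapted from the docs of that era; import paths have moved around in later releases, and the get_word_length tool is just a toy):

```python
from langchain.agents import AgentExecutor, tool
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools.render import format_tool_to_openai_function


@tool
def get_word_length(word: str) -> int:
    """Returns the number of characters in a word."""
    return len(word)


tools = [get_word_length]

# The prompt is just another swappable component (equivalent prompts are
# published on the Hub and can be pulled with hub.pull(...))
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

llm = ChatOpenAI(temperature=0)
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

# The agent itself is an LCEL pipeline: map inputs, format the prompt,
# call the model, parse the output into an action or a final answer.
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_functions(x["intermediate_steps"]),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "how many letters are in the word 'customizable'?"})
```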
Why do this?

One thing we've seen is that while default agents make it easy to prototype, a lot of teams want to customize some component of them in order to improve the accuracy of THEIR application

In order to enable this, we've exposed all the core components
Aug 25, 2023
🌲Multi Vector Retriever

The basic idea: you store multiple embedding vectors per document. How do you generate these embeddings?

👨‍👦Smaller chunks (this is ParentDocumentRetriever)
🌞Summary of document
❓Hypothetical questions
🖐️Manually specified text snippets

Quick 🧵
Language models are getting larger and larger context windows

This is great, because you can pass bigger chunks in!

But if you have larger chunks, then a single embedding per chunk can start to fall flat, as there can be multiple distinct topics in that longer passage
One solution is to start creating not one but MULTIPLE embeddings per document

This was the basic realization behind our ParentDocumentRetriever ~2 weeks ago, but it's really much more general than that

There are many ways to create multiple embeddings
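
As a sketch of what this looks like in practice (imports reflect the LangChain of the time, and the document and small representations here are hard-coded stand-ins for what you'd generate with an LLM or a splitter):

```python
import uuid

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.vectorstores import FAISS

# One full document, plus several small representations that actually get embedded
full_doc = Document(page_content="<a long document covering several distinct topics>")
doc_id = str(uuid.uuid4())
small_docs = [
    Document(page_content="A summary of the document", metadata={"doc_id": doc_id}),
    Document(page_content="A hypothetical question the document answers", metadata={"doc_id": doc_id}),
]

vectorstore = FAISS.from_documents(small_docs, OpenAIEmbeddings())
docstore = InMemoryStore()
docstore.mset([(doc_id, full_doc)])

# Search happens over the small representations, but the full document is returned
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=docstore, id_key="doc_id")
docs = retriever.get_relevant_documents("what topics does the document cover?")
```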

Aug 15, 2023
🚢Benchmarking Question/Answering Over CSV Data

Deep dive on improving an application that does question answering over CSV data:

📜3000 word blog post
🎥30min video
🛌Open sourced eval data
🎬Open sourced code for gathering feedback
🤖Open sourced final agent code

🧵
Blog: blog.langchain.dev/benchmarking-q…

YouTube: https://t.co/JxUrrvzBdi

Code & data used: github.com/langchain-ai/l…

Now for a quick thread:
This started ~2 weeks ago, when I tweeted that we wanted to improve our chains/agents for doing question/answering over CSV data

Why?

Most QA applications focus on text data, but lots of real world data is in CSVs
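
For context, question answering over a CSV in LangChain at the time typically went through the stock pandas DataFrame agent. A minimal sketch (titanic.csv is a placeholder for whatever CSV you have; note this agent executes model-generated Python and later moved to langchain_experimental):

```python
import pandas as pd

from langchain.agents import create_pandas_dataframe_agent
from langchain.chat_models import ChatOpenAI

df = pd.read_csv("titanic.csv")  # placeholder CSV

# The agent writes and runs pandas code against the DataFrame to answer questions
agent = create_pandas_dataframe_agent(ChatOpenAI(temperature=0), df, verbose=True)
agent.run("What is the average age of the passengers who survived?")
```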

Aug 3, 2023
💬Conversational Retrieval Agents

The most popular chain in @LangChainAI is the ConversationalRetrievalChain, which allows you to chat with your data

Using an agent instead can allow for greater flexibility, and it's a narrow and well-defined enough agent that it's fairly reliable

🧵
I'll dive into details in this thread, but quick links:

Blog: blog.langchain.dev/conversational…

Python docs: python.langchain.com/docs/use_cases…

JS docs: js.langchain.com/docs/use_cases…
The basic idea:

Give an agent a tool that is itself a retriever. The agent can then call this tool and get back a list of documents

This allows the agent to decide when it wants to do retrieval - could do it once, twice, or not at all
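
A rough sketch of the setup (helper names as in the docs accompanying the blog; the document, tool name, and description below are made up):

```python
from langchain.agents.agent_toolkits import (
    create_conversational_retrieval_agent,
    create_retriever_tool,
)
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import FAISS

# Any existing retriever works; here a tiny FAISS index stands in
docs = [Document(page_content="The state of the union is strong.")]
retriever = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever()

# Wrap the retriever as a tool the agent can decide to call (or not)
tool = create_retriever_tool(
    retriever,
    "search_state_of_union",
    "Searches and returns documents regarding the state of the union.",
)

llm = ChatOpenAI(temperature=0)
agent_executor = create_conversational_retrieval_agent(llm, [tool], verbose=True)

agent_executor({"input": "hi, I'm Bob"})                       # no retrieval needed
agent_executor({"input": "what was said about the economy?"})  # calls the retriever tool
```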