Harrison Chase
Apr 17 · 11 tweets · 5 min read
🤖Generative Agents🤖

Last week, Park et al. released “Generative Agents”, a paper simulating interactions between tens of agents

We gave it a close read, and implemented one of the novel components it introduced: a long-term, reflection-based memory system

🧵
If you haven’t read the paper, you absolutely should

Link: arxiv.org/abs/2304.03442

"We demonstrate through ablation that the components of our agent architecture—observation, planning, and reflection—each contribute critically to the believability of agent behavior"
One of the novel components was an "architecture that makes it possible for generative agents to remember, retrieve, reflect, interact with other agents" - this is what we tried to recreate

Notebook here: python.langchain.com/en/latest/use_…
As shown below, there are a lot of parts. Two notable things:

🪞a reflection step
🎊a retrieval step

The reflections contribute to the agent's memory stream, which is then retrieved and used to act
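As a rough sketch of what that reflection step can look like (an illustration only, not the paper's or LangChain's exact code; `llm` stands for any prompt-in, text-out model call, and the threshold and fixed importance score are arbitrary placeholders):

```python
def maybe_reflect(memory_stream, llm, threshold=30):
    """If the aggregate importance of recent memories crosses a threshold,
    ask the LLM for high-level insights and feed them back into the stream."""
    recent = memory_stream[-20:]
    if sum(m["importance"] for m in recent) < threshold:
        return []
    observations = "\n".join(m["content"] for m in recent)
    prompt = ("Given only the observations below, what are the most salient "
              "high-level insights you can infer?\n" + observations)
    insights = [line.strip("- ").strip()
                for line in llm(prompt).splitlines() if line.strip()]
    for insight in insights:
        # Reflections re-enter the memory stream like any other memory;
        # the fixed importance of 5 is a placeholder, not the paper's scoring.
        memory_stream.append({"content": insight, "importance": 5})
    return insights
```

Because reflections are stored alongside raw observations, later retrieval can surface a single inferred insight instead of dozens of low-level events.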
Let's talk about retrieval first. We've introduced a lot of different retrievers over the past few weeks; how does the one used in this paper compare?

It can essentially be viewed as a "Time Weighted VectorStore Retriever" - a retriever that combines recency with relevance
As such, we implemented a standalone TimeWeightedVectorStoreRetriever in LangChain

You can see below that you can specify a decay rate to adjust the balance between recency and relevance

Docs: python.langchain.com/en/latest/modu…
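The core idea can be sketched in a few lines (a standalone illustration, not LangChain's exact implementation; the exponential-decay form and the 0.01 default rate are assumptions for the sketch):

```python
from datetime import datetime, timedelta

def time_weighted_score(similarity: float, last_accessed: datetime,
                        now: datetime, decay_rate: float = 0.01) -> float:
    """Combine semantic relevance with recency: recently accessed memories
    decay less. A decay_rate near 0 reduces to plain similarity search;
    a decay_rate near 1 means only very fresh memories survive."""
    hours_passed = (now - last_accessed).total_seconds() / 3600
    recency = (1.0 - decay_rate) ** hours_passed
    return similarity + recency

now = datetime(2023, 4, 17, 12, 0)
fresh = time_weighted_score(0.5, now - timedelta(hours=1), now)
stale = time_weighted_score(0.5, now - timedelta(hours=240), now)
# Equal relevance, but the hour-old memory outranks the ten-day-old one.
```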

So now that we have this retriever, how is it used in memory?
There are two key methods: `add_memory` and `summarize_related_memories`

When an agent makes an observation, it stores the memory:

1. An LLM scores the memory’s importance (1 for mundane, 10 for poignant)
2. Observation and importance are stored within the retrieval system
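The storage path above might look like this (a sketch: `llm` is any callable taking a prompt and returning a string, and the scoring prompt is paraphrased from the paper, not quoted from the notebook):

```python
import re
from datetime import datetime

IMPORTANCE_PROMPT = (
    "On a scale of 1 to 10, where 1 is purely mundane (e.g., brushing teeth) "
    "and 10 is extremely poignant (e.g., a breakup), rate the poignancy of "
    "the following memory. Respond with a single integer.\n"
    "Memory: {memory}\nRating: "
)

def score_importance(llm, memory: str) -> int:
    """Ask the LLM for a 1-10 score; clamp it, defaulting to 1 on parse failure."""
    reply = llm(IMPORTANCE_PROMPT.format(memory=memory))
    match = re.search(r"\d+", reply)
    return min(10, max(1, int(match.group()))) if match else 1

def add_memory(memory_stream: list, llm, observation: str) -> None:
    """Store the observation with its importance score and timestamps,
    so the retriever can weight it later."""
    now = datetime.now()
    memory_stream.append({
        "content": observation,
        "importance": score_importance(llm, observation),
        "created_at": now,
        "last_accessed": now,
    })
```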
When an agent responds to an observation:

1. Generates queries for the retriever, which fetches documents based on salience, recency, and importance.
2. Summarizes the retrieved information
3. Updates the last_accessed_time for the used documents.
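Steps 1-3 can be sketched as follows (again an illustration: the three score terms are weighted equally here, whereas the paper allows per-term weights, and `similarity_fn` stands for whatever embedding-based relevance function you plug in):

```python
from datetime import datetime, timedelta

def fetch_memories(memory_stream, similarity_fn, query, now,
                   k=3, decay_rate=0.01):
    """Rank memories by recency + importance + relevance, then mark the
    winners as freshly accessed so they decay more slowly next time."""
    def score(mem):
        hours = (now - mem["last_accessed"]).total_seconds() / 3600
        recency = (1.0 - decay_rate) ** hours
        importance = mem["importance"] / 10.0       # normalize to [0, 1]
        relevance = similarity_fn(query, mem["content"])
        return recency + importance + relevance
    top = sorted(memory_stream, key=score, reverse=True)[:k]
    for mem in top:
        mem["last_accessed"] = now                  # step 3: bump access time
    return top
```

A summarization call over the returned memories then produces the context the agent actually conditions on when responding.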
So let’s now see this in action! We can simulate what happens by feeding observations to the agent and seeing how the summary of the agent is updated over time

Here we do a simple update with only a few observations
We can push this even further and update with ~20 observations (a full day’s worth)

We can then “interview” the agent before and after the day - notice the change in the agent’s responses!
Finally, we can create a simulation of two agents talking to each other.

This is a far cry from the ~20 agents the paper simulated, but it's still interesting to see the conversation and to interview them before and after


