Harrison Chase
Aug 3 · 7 tweets · 3 min read
💬Conversational Retrieval Agents

The most popular chain in @LangChainAI is the ConversationalRetrievalChain, which allows you to chat with your data

Using an agent instead allows for greater flexibility, and it's a narrow and well-defined enough agent that it's fairly reliable

🧵
I'll dive into details in this thread, but quick links:

Blog: blog.langchain.dev/conversational…

Python Docs: python.langchain.com/docs/use_cases…

JS Docs: js.langchain.com/docs/use_cases…
The basic idea:

Give an agent a tool that is itself a retriever. The agent can then call this tool and get back a list of documents

This allows the agent to decide when it wants to do retrieval - could do it once, twice, or not at all
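The control flow can be sketched in plain Python. Everything here (the fake retriever, the keyword routing) is an illustrative stand-in, not the LangChain API - the point is just that the agent, not a fixed chain, decides whether to call the tool:

```python
def retriever_tool(query):
    """Stand-in retriever tool: returns a list of matching 'documents'."""
    corpus = {
        "langchain": "LangChain is a framework for LLM apps.",
        "agents": "Agents use an LLM to choose a sequence of actions.",
    }
    return [doc for key, doc in corpus.items() if key in query.lower()]

def agent_turn(question):
    """The agent may call the tool zero, one, or many times before answering."""
    if question.lower().startswith(("hi", "hello", "thanks")):
        # Chit-chat: the agent skips retrieval entirely.
        return "No retrieval needed - just chatting!", 0
    docs = retriever_tool(question)
    return f"Answer grounded in {len(docs)} retrieved docs.", 1
```

In the real setup, an LLM makes the "retrieve or not" decision instead of keyword matching; see the linked docs for the actual API.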
We've also added a new type of memory

This type of memory allows for using the results of previous tool invocations in future agent interactions

This means if you ask a follow-up question, you don't have to do retrieval all over again
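A sketch of the mechanism (stand-in code with a naive exact-match lookup, not the LangChain memory class): results of earlier tool calls are saved, so a follow-up on the same topic can be answered without re-invoking the retriever.

```python
class ToolResultMemory:
    """Remembers (query, docs) pairs from earlier tool invocations."""

    def __init__(self):
        self.saved = []

    def lookup(self, query):
        for past_query, docs in self.saved:
            if past_query == query:  # naive match, for illustration only
                return docs
        return None

    def save(self, query, docs):
        self.saved.append((query, docs))

def answer(memory, query, retrieve):
    """Only call the retriever if memory doesn't already hold the docs."""
    docs = memory.lookup(query)
    calls = 0
    if docs is None:
        docs = retrieve(query)
        calls = 1
        memory.save(query, docs)
    return docs, calls
```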
The benefits over normal retrieval include:

- You don't do retrieval if the question isn't about the topic
- You don't do retrieval if the question is about previously mentioned topics
- You can do multiple retrieval steps
- You can retrieve from multiple sources
The downsides include:

- This type of memory takes up more space
- Sometimes the agent doesn't realize it needs to do retrieval

As models get better, and as context windows get longer, we expect these downsides to matter less!
We've included a really simple example of this setup in an example repo: github.com/hwchase17/conv…

It uses @streamlit for the UI, and LangSmith for feedback collection and monitoring

More from @hwchase17

Aug 1
A 🧵 on examples of using our new LangChain Expression Language to rewrite some of our most popular chains

Benefits: it's very clear what's going on under the hood, and (most importantly) how to modify them

👇
Before jumping in:

(1) We'll be doing a webinar on this tomorrow, so come join then for a more in-depth walkthrough + Q&A: crowdcast.io/c/ckw1tydg29er

(2) There are lots more chains to rewrite, so if you have good examples (or asks) just comment and I'll add!
Prompt Template + LLM

Starting with the most basic, a simple combination of a prompt template + a call to an LLM

`prompt | llm`
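A minimal sketch of what the `|` is doing under the hood (stand-in classes, not the real Expression Language implementation): `|` composes two runnables so the output of the first feeds the second.

```python
class Runnable:
    """Tiny stand-in for a runnable: wraps a function, supports `|`."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: run self, then feed the result into `other`.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

prompt = Runnable(lambda inputs: f"Tell me a joke about {inputs['topic']}")
llm = Runnable(lambda text: f"LLM response to: {text}")  # fake model

chain = prompt | llm  # same shape as the real `prompt | llm`
```

The real runnables add batching, streaming, and async on top of this same composition idea.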
Jul 6
💬ConversationalRetrievalChain Upgrades

One of our more popular chains is the ConversationalRetrievalChain, which allows you to create a retrieval augmented generation chatbot

We've introduced some small but impactful quality of life changes:

🧵
📃Improved Reference Docs

We beefed up our reference documentation to include better docstrings and a more end-to-end example

There are a lot of toggles to play with; hopefully this helps make it clearer what all the parameters do

Docs: api.python.langchain.com/en/latest/chai…
❓Rephrase Question Flag

The conversational retrieval chain first condenses the chat history and the new message into a standalone question to use for retrieval

This flag controls whether that new question is also used for generation

Docs: api.python.langchain.com/en/latest/chai…
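The flag's effect can be sketched like this (stand-in code with a fake condense step, not the chain itself): the condensed standalone question is always used for retrieval; the flag decides which question the final generation step sees.

```python
def condense(chat_history, new_message):
    # Stand-in for the LLM call that rewrites a follow-up into a
    # standalone question.
    return f"standalone({new_message} | history={len(chat_history)})"

def run(chat_history, new_message, rephrase_question=True):
    standalone = condense(chat_history, new_message)
    retrieval_query = standalone  # retrieval always uses the rewrite
    generation_question = standalone if rephrase_question else new_message
    return retrieval_query, generation_question
```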
Jul 5
📄Documents x LLMs📄

Combining documents with LLMs is a key part of retrieval and chaining

We've improved our @LangChainAI reference documentation across the 5 major CombineDocumentsChains and helper functions to clarify how these work

🧵
📄 `format_document`

Want to control which metadata keys show up in the prompt?

This helper function is rarely exposed, but is key to combining documents with LLMs

It takes a Document and formats it into a string using a PromptTemplate

Docs: api.python.langchain.com/en/latest/sche…
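A simplified stand-in for what `format_document` does (not the real implementation): fill a prompt template from a document's page_content plus selected metadata keys, so you control exactly which metadata shows up in the prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in for a LangChain Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def format_document(doc, template):
    # The template's variables are page_content plus metadata keys;
    # only the metadata keys the template names end up in the prompt.
    return template.format(page_content=doc.page_content, **doc.metadata)

doc = Document("LLMs are neural networks.", {"source": "notes.txt", "page": 3})
formatted = format_document(doc, "[{source}] {page_content}")
```

Here only `source` is surfaced, even though the document also carries a `page` key.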
🧸Stuff Documents Chain

The most basic CombineDocumentsChain: this takes N documents, formats each into a string using a PromptTemplate and `format_document`, combines them into a single prompt, and passes it to an LLM

Docs: api.python.langchain.com/en/latest/chai…
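The "stuff" strategy in miniature (stand-in code, not the chain class): format each document, join them, and drop the result into one prompt for a single LLM call.

```python
def stuff_documents(docs, doc_template, prompt_template, separator="\n\n"):
    """Format each doc, join with a separator, and fill the final prompt."""
    formatted = [doc_template.format(page_content=d) for d in docs]
    return prompt_template.format(context=separator.join(formatted))

stuffed = stuff_documents(
    ["Doc one.", "Doc two."],
    doc_template="- {page_content}",
    prompt_template="Context:\n{context}",
)
```

The trade-off: one call, full context, but N documents must all fit in the model's context window.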
Jun 19
⭐️Using `functions` to structure output⭐️

We're starting to add more chains that rely on functions to structure output

Here's a quick overview of how we're doing that, which chains we've added so far, how to contribute, and additional resources

🧵
Although we first incorporated `functions` into agents, an arguably more important ability of `functions` is structuring output from ChatGPT

This is extremely useful when you want to use the output of ChatGPT in a particular way
You can do this by passing in not only the `functions` parameter, but also the `function_call` parameter

The `function_call` parameter forces the model to respond using a particular function - allowing you to guarantee output in a specific format
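A sketch of the request payload. The shapes follow the OpenAI chat completions `functions` API; the function name and fields here are hypothetical examples:

```python
payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "Alice is 30 years old."}],
    "functions": [
        {
            "name": "record_person",  # hypothetical function name
            "description": "Record facts about a person",
            "parameters": {  # JSON Schema for the structured output
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name"],
            },
        }
    ],
    # Without this, the model may choose not to call any function;
    # naming one forces a reply with arguments matching its schema.
    "function_call": {"name": "record_person"},
}
```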
Jun 16
The new @OpenAI functions are good for other things besides agents

Another killer use case is extracting structured information from unstructured docs

We've added support for extraction AND tagging in @LangChainAI - thanks to @fpingham for code and @jxnlco for review

🧵
✂️Extraction

Specify a schema - either a dictionary or a Pydantic model - and then extract entities from a piece of text with the same schema

This will return a list of objects with that schema

Docs: python.langchain.com/en/latest/modu…
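The shape of extraction, with stand-in parsing code (not the LangChain chain): the model is asked, via a function call, to return a JSON list of objects matching the schema; here a canned model reply is parsed and checked.

```python
import json

schema = {
    "properties": {
        "name": {"type": "string"},
        "height": {"type": "integer"},
    },
    "required": ["name"],
}

# Pretend this JSON came back from the model's function call.
model_reply = '[{"name": "Alex", "height": 5}, {"name": "Claudia", "height": 6}]'

def parse_extraction(reply, schema):
    """Parse the model's JSON and check required keys are present."""
    items = json.loads(reply)
    required = schema.get("required", [])
    assert all(all(k in item for k in required) for item in items)
    return items

people = parse_extraction(model_reply, schema)
```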
⚡️Tagging

Specify a schema and tag a document with those attributes

As opposed to Extraction, this extracts only one instance of the schema, so it's more useful for classifying attributes that pertain to the text as a whole

Docs: python.langchain.com/en/latest/modu…
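The contrast with extraction, sketched with stand-in code: same schema idea, but the model returns exactly one object describing the whole text rather than a list of entities.

```python
import json

schema = {
    "properties": {
        "sentiment": {"type": "string"},
        "language": {"type": "string"},
    }
}

# Pretend this came back from the model for one input document.
model_reply = '{"sentiment": "positive", "language": "en"}'

tags = json.loads(model_reply)  # one dict for the document, not a list
```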
Jun 5
⭐️Composable Prompts⭐️

Wouldn't it be nice if there was a way to compose prompts together, reusing pieces across prompts?

In the newest Python and JS release there now is, with `PipelinePrompt`!

Links 👇
The way this works is you define a `PipelinePrompt` with two components:

- FinalPrompt: the final prompt template to be formatted
- PipelinePrompts: a sequence of tuples of (name, PromptTemplate)

The `name` argument is how the formatted prompt will be passed to future prompts
When `.format` is called, the PipelinePrompts are first formatted in order, and are then used in future formatting steps with their respective `name` arguments

Finally, the FinalPrompt.format is called using any previously formatted values as necessary
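The formatting order can be sketched like this (stand-in code using plain `str.format`, not the real PipelinePromptTemplate): each pipeline prompt is formatted in turn, its result is stored under its name, and later prompts plus the final prompt can use it.

```python
def format_pipeline(final_prompt, pipeline_prompts, **inputs):
    """Format (name, template) pairs in order, feeding results forward."""
    values = dict(inputs)
    for name, template in pipeline_prompts:
        # Earlier results in `values` are available to later templates.
        values[name] = template.format(**values)
    return final_prompt.format(**values)

result = format_pipeline(
    "{introduction}\n{example}",
    [
        ("introduction", "You are impersonating {person}."),
        ("example", "Example: {introduction} Q: {question}"),
    ],
    person="Elon Musk",
    question="What's your favorite car?",
)
```

Note how `introduction` is formatted once and then reused both inside `example` and in the final prompt.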
