We now have a (slightly more sophisticated) in-house `OpenAIAgent` implementation🔥:
- More seamless integrations with LlamaIndex chat engine/query engine
- Supports multiple/sequential function calls
- Async endpoints
- Callbacks/tracing
We used @LangChainAI for the latest LLM abstraction (big s/o for the speed), and some initial memory modules.
The big takeaway here is that it’s easier than ever to build your own agent loop.
Can unlock a LOT of value on the query tools that LlamaIndex provides 🦙
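Rough sketch of the wiring (a minimal example assuming local files under `./data/uber_10q`, which is a hypothetical path, and 0.6/0.7-era import paths - newer releases move these modules under `llama_index.core`):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.agent import OpenAIAgent
from llama_index.tools import QueryEngineTool, ToolMetadata

# Build a query engine over your private data (path is hypothetical).
docs = SimpleDirectoryReader("./data/uber_10q").load_data()
query_engine = VectorStoreIndex.from_documents(docs).as_query_engine()

# Expose the query engine to the agent as a tool.
tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="uber_10q",
        description="Answers questions about Uber's 2022 10-Q filings.",
    ),
)

# The agent loops over the OpenAI function API, calling tools as needed.
agent = OpenAIAgent.from_tools([tool], verbose=True)
print(agent.chat("How did Uber's revenue trend in 2022?"))
# Async endpoint is also available: await agent.achat(...)
```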
We look at 3 Uber SEC 10-Q filings in the year 2022: March, June, September.
ChatGPT with the ReAct loop gives unpredictable answers - given “Analyze Uber revenue growth over the last few quarters”, it only looks at the September filing.
In contrast, the OpenAI Function agent is able to sequentially call the September, June, and March documents to retrieve information, and then synthesize the results.
The user *only* has to call the function API in a loop!
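Hand-rolling that loop takes about a dozen lines. A minimal sketch against the pre-1.0 `openai` SDK's `functions`/`function_call` interface (newer SDK versions use `tools`/`tool_calls` instead), with a hypothetical `search_filing` function standing in for a real query tool:

```python
import json
import openai  # pre-1.0 SDK

def search_filing(quarter: str) -> str:
    # Hypothetical stand-in for a LlamaIndex query tool over one 10-Q.
    return f"Revenue details from the {quarter} 10-Q..."

functions = [{
    "name": "search_filing",
    "description": "Look up Uber 10-Q details for a given quarter (March, June, September).",
    "parameters": {
        "type": "object",
        "properties": {"quarter": {"type": "string"}},
        "required": ["quarter"],
    },
}]

messages = [{"role": "user", "content": "Analyze Uber revenue growth over the last few quarters"}]
while True:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages, functions=functions
    )
    message = response["choices"][0]["message"]
    messages.append(message)
    if not message.get("function_call"):
        break  # the model produced a final answer
    # Execute the requested function call and feed the result back in.
    args = json.loads(message["function_call"]["arguments"])
    result = search_filing(**args)
    messages.append({"role": "function", "name": "search_filing", "content": result})

print(messages[-1]["content"])
```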
By default, if you give a grammatically incorrect prompt like “Draw me a picture a mountain.”, the agent will directly call the text-to-image tool with that prompt.
The generated image is suboptimal.
If the HF agent calls our prompt assistant tool instead, we will look up relevant DiffusionDB prompts from our vector index, and use that to rewrite the original prompt!
“A majestic mountain peak, surrounded by lush greenery, etc.”
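Under the hood, the prompt assistant is just retrieval + rewrite. A rough sketch, assuming a `diffusiondb_index` already built as a LlamaIndex `VectorStoreIndex` over DiffusionDB prompt strings (the HF agent/tool wiring and the text-to-image call are omitted):

```python
import openai  # pre-1.0 SDK

def rewrite_prompt(user_prompt: str, diffusiondb_index) -> str:
    """Rewrite a rough text-to-image prompt using similar DiffusionDB prompts."""
    # Pull the most similar community prompts from the vector index.
    retriever = diffusiondb_index.as_retriever(similarity_top_k=3)
    examples = "\n".join(n.node.get_text() for n in retriever.retrieve(user_prompt))
    # Ask the LLM to rewrite the prompt in the style of the retrieved examples.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Rewrite this text-to-image prompt in the style of the examples.\n"
                f"Examples:\n{examples}\n\nPrompt: {user_prompt}\nRewritten prompt:"
            ),
        }],
    )
    return response["choices"][0]["message"]["content"]
```

The HF agent then hands the rewritten prompt to the text-to-image tool.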
I’m super excited to make it official: @disiok and I have started a company around @llama_index, and we’ve raised an $8.5M seed round led by @GreylockVC! 🔥🚀
We are building the open-source data framework to unlock LLM capabilities on your private data.
Being able to search/retrieve is an essential component of an agent toolkit.
If you already know the knowledge source beforehand, you can 1) index the data, 2) dump to vector db, 3) make that an agent tool.
But sometimes you may just want the agent to query data “on the fly” ✈️
Our brand-new `OnDemandLoaderTool` does the following steps:
1.💾Load data using any data loader (e.g. from LlamaHub or our core repo)
2.🗂️Index that data “on the fly”
3.🔎Query the index using natural language
4.💬Return the response
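Usage looks roughly like this (sketched from the Wikipedia example; import paths vary by version, and the Wikipedia reader needs the `wikipedia` package installed):

```python
from llama_index.readers import WikipediaReader
from llama_index.tools.ondemand_loader_tool import OnDemandLoaderTool

# Wrap any data loader as an agent tool; the index is built per call, on the fly.
tool = OnDemandLoaderTool.from_defaults(
    WikipediaReader(),
    name="wikipedia_tool",
    description="Loads Wikipedia pages and answers questions about them.",
)

# Loader args + a natural-language query, in a single tool call.
print(tool(["Berlin"], query_str="What's the arts and culture scene in Berlin?"))
```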