Jerry Liu
Jun 14 · 6 tweets · 3 min read
The new OpenAI Function API simplifies agent development by A LOT.

Our latest @llama_index release 🔥shows this:
- Build-an-agent tutorial in ~50 lines of code! ⚡️
- In-house agent on our query tools

Replace ReAct with a simple for-loop 💡👇

github.com/jerryjliu/llam…
The OpenAI Function API lets the LLM natively take in message history as input to choose functions 🛠️.

Best of all, it can decide whether to keep picking functions, or output a user message.

It can do this all within the API call, w/o explicit prompting 📝
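At the time of this thread, the function-calling request looked roughly like the sketch below: you declare tools as JSON Schemas in a `functions` list, and the model decides per turn whether to call one or answer the user. The `get_weather` tool here is a made-up example, not from the thread.

```python
# Sketch of a chat-completion request with the (June 2023) `functions`
# parameter. The tool name and schema are illustrative.
request = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [
        {"role": "user", "content": "What's the weather in Tokyo?"},
    ],
    "functions": [
        {
            "name": "get_weather",                 # hypothetical tool
            "description": "Get current weather for a city",
            "parameters": {                        # JSON Schema for arguments
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    # "auto" lets the model decide: call a function or answer the user.
    "function_call": "auto",
}
print(request["functions"][0]["name"])
```

No prompt engineering needed to describe the tools — the schema itself is the interface.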
This is HUGE for a few reasons:
- No more prompt hacking for structured outputs
- No extra API calls/tokens to choose Tools

Also… if the API itself can decide whether to keep going, then… there’s no more need for complex ReAct loops? 🤔❓ (to be determined!)
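The "simple for-loop" replacing ReAct can be sketched in a few lines. This toy version runs offline: the `multiply` tool and the scripted stand-in for the LLM are made up for illustration, but the loop shape — keep executing function calls until the model emits a user-facing message — is the idea from the thread.

```python
import json

# Hypothetical tool registry -- names and implementations are illustrative.
def multiply(a: int, b: int) -> int:
    return a * b

TOOLS = {"multiply": multiply}

def run_agent(messages, llm):
    """Minimal agent loop: keep calling the LLM until it stops
    requesting function calls and emits a user-facing message."""
    while True:
        msg = llm(messages)          # stands in for an OpenAI chat-completion call
        messages.append(msg)
        call = msg.get("function_call")
        if call is None:             # model chose to answer the user directly
            return msg["content"]
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)
        messages.append({"role": "function",
                         "name": call["name"],
                         "content": str(result)})

# Scripted stand-in for the LLM so the sketch runs without an API key:
# first turn requests a tool call, second turn answers the user.
_script = iter([
    {"role": "assistant", "content": None,
     "function_call": {"name": "multiply", "arguments": '{"a": 6, "b": 7}'}},
    {"role": "assistant", "content": "6 times 7 is 42."},
])

answer = run_agent([{"role": "user", "content": "What is 6 * 7?"}],
                   lambda messages: next(_script))
print(answer)
```

Swap the scripted stand-in for a real chat-completion call and the same loop works unchanged.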
We’ve landed some HUGE feature changes and tutorials highlighting the power of this function calling API:

Tutorial showing how you can build an agent in 50 lines of code: github.com/jerryjliu/llam…

Tutorial showing agent on top of our query tools: github.com/jerryjliu/llam…
We now have a (slightly more sophisticated) in-house `OpenAIAgent` implementation🔥:
- More seamless integrations with LlamaIndex chat engine/query engine
- Supports multiple/sequential function calls
- Async endpoints
- Callbacks/tracing
We used @LangChainAI for the latest LLM abstraction (big s/o for the speed), and some initial memory modules.

The big takeaway here is that it’s easier than ever to build your own agent loop.

Can unlock a LOT of value on the query tools that LlamaIndex provides 🦙

More from @jerryjliu0

Jun 15
@OpenAI function agent vs. ReAct agent (w/ prompting) 🥊

We compared the two on a financial analysis task, using ChatGPT.

Surprisingly, the @OpenAI function agent gives on-par (or better?) results, and is easier to implement yourself 💡🧵

Notebook: github.com/jerryjliu/llam…
We look at 3 Uber SEC 10-Q filings in the year 2022: March, June, September.

ChatGPT with the ReAct loop gives unpredictable answers - given “Analyze Uber revenue growth over the last few quarters”, it only looks at the September filing.
In contrast, the OpenAI Function agent is able to sequentially call September, June, and March documents to retrieve information, and then synthesize information.

The user only has to call the function API in a loop!
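The sequential pattern described above can be illustrated with stand-ins: one query tool per 10-Q filing, each called in turn, with results synthesized at the end. The revenue summaries below are placeholders, not real Uber numbers.

```python
# Toy stand-in for per-document query engine tools over three filings.
# The contents are placeholders, NOT real 10-Q figures.
filings = {
    "march_2022": "Q1 revenue grew X% quarter-over-quarter.",
    "june_2022": "Q2 revenue grew Y% quarter-over-quarter.",
    "september_2022": "Q3 revenue grew Z% quarter-over-quarter.",
}

def query_filing(name: str, question: str) -> str:
    """Stand-in for a per-document query engine tool."""
    return filings[name]

# The function agent calls each tool sequentially...
observations = [query_filing(name, "Analyze revenue growth")
                for name in filings]

# ...then synthesizes a final answer from all observations.
answer = "Across the last three quarters: " + " ".join(observations)
print(answer)
```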
Jun 14
Hot take 🔥: the most valuable part of the @OpenAI function calling API is the structured data extraction component.

It's now way easier to enforce a valid JSON output schema, without the need for prompt hacking.

Try it out as an independent module in @llama_index 👇
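The structured-extraction trick can be sketched with the standard library alone: declare one "function" whose parameters are your output schema, force the model to call it, and treat the returned `function_call` arguments as the extracted record. The `extract_album` schema and the canned model reply below are illustrative, not from the thread.

```python
import json

# Declare the output schema as a "function" -- the model fills in the
# arguments, which arrive as a JSON string.
album_schema = {
    "name": "extract_album",               # hypothetical extraction function
    "description": "Extract an album from the text",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "year": {"type": "integer"},
        },
        "required": ["title", "year"],
    },
}

# Canned stand-in for what the API would return with function_call
# forced to "extract_album".
raw_arguments = '{"title": "In Rainbows", "year": 2007}'

record = json.loads(raw_arguments)
required = album_schema["parameters"]["required"]
assert all(key in record for key in required)  # cheap schema check
print(record["title"], record["year"])
```

No prompt hacking, no regexes over free-form output — the schema does the work.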
This module can be used completely independently! Check it out here: github.com/jerryjliu/llam…

It can also be used as part of our advanced query engine modules: github.com/jerryjliu/llam…
This echoes @jxnlco 's tweet that the function API conflates Tool use with structured JSON output - you can (and should) use it for the latter!

Jun 8
We augmented @huggingface Transformers Agents 🤗 with a @llama_index Tool: access to 10k DiffusionDB prompts.🦙

Introducing Text2Img Prompt Assistant - suggest better prompts, generate beautiful images ⚡️🔥

HF Space + Colab + full blog post below 👇

medium.com/llamaindex-blo…
Let’s take a quick look at how it works.

By default, if you specify a grammatically incorrect prompt (“Draw me a picture a mountain.”), the agent will directly call the text-to-image tool with that prompt.

The generated image is suboptimal.
If the HF agent calls our prompt assistant tool instead, we will look up relevant DiffusionDB prompts from our vector index, and use that to rewrite the original prompt!

“A majestic mountain peak, surrounded by lush greenery, etc.”

The image looks much better.
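The lookup-and-rewrite step can be sketched with a crude stand-in: where the real tool retrieves similar prompts from a vector index over DiffusionDB, the toy below uses `difflib` string similarity over a tiny hand-written prompt list. Everything here (the stored prompts, the similarity measure) is illustrative.

```python
import difflib

# Made-up stand-ins for DiffusionDB prompts; a real version would
# retrieve these from a vector index.
stored_prompts = [
    "a majestic mountain peak, surrounded by lush greenery, golden hour",
    "a cyberpunk city street at night, neon lights, rain",
]

def suggest_prompt(user_prompt: str) -> str:
    """Return the stored prompt most similar to the user's prompt,
    as a crude substitute for vector similarity search."""
    return max(
        stored_prompts,
        key=lambda p: difflib.SequenceMatcher(
            None, user_prompt.lower(), p).ratio(),
    )

print(suggest_prompt("Draw me a picture a mountain."))
```

The retrieved prompt then seeds the rewrite of the user's original request.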
Jun 7
We’re stoked to present a @llama_index + @weights_biases integration ahead of the Fully Connected Conference (starting in 1 hour)!

Get a full trace 🔍 into your retrieval-augmented generation workflow with LlamaIndex, with the brand-new W&B callback handler 🔌

Guide: docs.wandb.ai/guides/prompts…
You can get a full trace tree of various events, like query, retrieval, node parsing, and more.

We also have a full guide in LlamaIndex here: gpt-index.readthedocs.io/en/latest/exam…

You can check out a live demo for yourself right here! wandb.ai/ayut/llamainde…
Full credits go to Ayush on the W&B team, as well as Logan from our team 🙌

To best support this integration, we made some major upgrades to our callback handling within LlamaIndex.

Take a look at our detailed docs here: gpt-index.readthedocs.io/en/latest/how_…
Jun 6
I’m super excited to make it official: @disiok and I have started a company around @llama_index, and we’ve raised an $8.5M seed round led by @GreylockVC! 🔥🚀

We are building the open-source data framework to unlock LLM capabilities on your private data.

medium.com/@jerryjliu98/b…
We’re stoked to be working with @jerrychen (Jerry 🤝 Jerry), @saammotamedi, and @rischter_scale at Greylock.

In addition, we’ve got an incredible team of angels 🎉:
@jaltma
@lennysan
@collinmathilde
@RaquelUrtasun
@profjoeyg
@danshipper
@bentossell

and many more!
Around 7 months ago, I started a project called GPT Index, an initial effort to organize/retrieve information with LLMs.

The question was simple: what if we treated the LLM as a processor and gave it the capability to traverse an external store of data?

Jun 4
So far, our data loaders have been intended for human use (ingest data for your LLM app).

What if we could turn all 100+ data loaders from LlamaHub into Tools 🛠️ for an LLM agent 🤖 - to easily load/query data on-demand? 📺

Check out our new release 🔥👇

gpt-index.readthedocs.io/en/latest/exam…
Being able to search/retrieve is an essential component of an agent toolkit.

If you already know the knowledge source beforehand, you can 1) index the data, 2) dump to vector db, 3) make that an agent tool.

But sometimes you may just want the agent to query data “on the fly” ✈️
Our brand-new `OnDemandLoaderTool` does the following steps:
1.💾Load data using any data loader (e.g. from LlamaHub or our core repo)
2.🗂️Index that data “on the fly”
3.🔎Query the index using natural language
4.💬Return the response
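The four steps above can be sketched end-to-end with stand-ins for each stage: "loading" is a list of strings, "indexing" is a trivial keyword lookup, and the query returns the first matching document. A real `OnDemandLoaderTool` wraps a LlamaHub loader and a LlamaIndex index; everything below is a toy illustration.

```python
# Toy stand-ins for the load -> index -> query -> respond pipeline.
def load_data(source: list[str]) -> list[str]:
    return source                                  # 1. load data

def build_index(docs: list[str]) -> dict:
    return {"docs": docs}                          # 2. index "on the fly"

def query(index: dict, question: str) -> str:      # 3-4. query + respond
    hits = [d for d in index["docs"]
            if any(w in d.lower() for w in question.lower().split())]
    return hits[0] if hits else "No match found."

docs = load_data(["LlamaIndex connects LLMs to external data.",
                  "Agents can call tools on demand."])
index = build_index(docs)
print(query(index, "What can agents call?"))
```

The point of the real tool is that all four steps happen inside a single tool call, so the agent never needs a pre-built index.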
