LlamaIndex 🦙
Build LLM agents over your data Github: https://t.co/HC19j7vMwc Docs: https://t.co/QInqg2zksh Discord: https://t.co/3ktq3zzYII
Oct 26, 2023 5 tweets 3 min read
🚨 Completely Revamped Docs 🚨

We’ve completely re-orged our docs to better mirror the user journey from building prototype to production LLM/RAG apps with LlamaIndex

200+ guides to build/optimize your app.

Full credits @seldo, see thread below! 🧵

docs.llamaindex.ai



Section 1: Use Cases

The key use cases for building LLM apps over your data consist of question-answering, conversational chat, workflow automation with agents, and structured data extraction.

Learn about these use cases at a high level before diving into the materials.
Sep 25, 2023 4 tweets 3 min read
We’re excited to release full native support for THREE @huggingface embedding models (s/o @LoganMarkewich):
🧱 Base @huggingface embeddings wrapper
🧑‍🏫 Instructor embeddings
⚡️ Optimum embeddings (ONNX format)

Full thread below 🧵.

Check out the guide: gpt-index.readthedocs.io/en/latest/exam…


[1] Base @huggingface embeddings 🧱

This is a generic wrapper around any HF model for embeddings. You can set either pooling="cls" or pooling="mean".

Check out the embeddings leaderboard for recs on embedding models to use! huggingface.co/spaces/mteb/le…
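For intuition, here's a library-free sketch of what the two pooling modes do with per-token embeddings (toy 3-dim vectors; this is an illustration of the concept, not the actual wrapper code):

```python
# A real model emits one embedding per input token; pooling collapses
# them into a single sentence-level vector.

def cls_pool(token_embeddings):
    """pooling="cls": take the embedding of the first ([CLS]) token."""
    return token_embeddings[0]

def mean_pool(token_embeddings):
    """pooling="mean": average the embeddings of all tokens."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(tok[i] for tok in token_embeddings) / n for i in range(dim)]

tokens = [
    [1.0, 0.0, 2.0],   # [CLS] token
    [0.0, 2.0, 4.0],
    [2.0, 4.0, 0.0],
]
print(cls_pool(tokens))   # [1.0, 0.0, 2.0]
print(mean_pool(tokens))  # [1.0, 2.0, 2.0]
```

Mean pooling is the common default; some models are trained so the [CLS] token carries the sentence representation, which is when "cls" is the right choice.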
Aug 28, 2023 4 tweets 2 min read
We recently added 3 finetuning projects 🔥
✅ Finetuning embeddings
✅ @OpenAI finetuning gpt-3.5-turbo to distill GPT-4
✅ Finetuning Llama 2 for text-to-SQL

We now have a brand-new guide ✨showing how to include all these components when building RAG:

gpt-index.readthedocs.io/en/latest/end_…
Finetuning embeddings: github.com/run-llama/fine…
Aug 26, 2023 5 tweets 2 min read
We now have the most comprehensive cookbook on building LLMs with Knowledge Graphs (credits @wey_gu).
✅ Key query techniques: text2cypher, graph RAG
✅ Automated KG construction
✅ vector db RAG vs. KG RAG

Check out the full 1.5 hour tutorial:
The full Colab notebook is here:

There was so much content beyond the live webinar that we recorded a part 2 🔥

We stitched it together in the video: colab.research.google.com/drive/1tLjOg2Z…
Aug 10, 2023 5 tweets 3 min read
Introducing “One-click Observability” 🔭

With one line of code, you can now seamlessly integrate @llama_index with rich observability/eval tools offered by our partners (@weights_biases, @arizeai, @truera_ai).

Easily debug/eval your LLM app for prod 💪 gpt-index.readthedocs.io/en/latest/end_…
[1] @weights_biases Prompts lets users log/trace/inspect the LlamaIndex execution flow during index construction/querying.

You automatically get traces, and can also choose to version/load indices.

gpt-index.readthedocs.io/en/latest/end_…

Aug 8, 2023 4 tweets 2 min read
Tip for better RAG systems💡: don’t just store raw text chunks; augment them with structured data.
✅Enables metadata filtering
✅Helps bias embeddings

Here’s a guide on how to use the @huggingface span-marker to extract entities for this exact purpose📕: gpt-index.readthedocs.io/en/latest/exam…
In this example, we parse the 2023 IPCC Climate Report.

After parsing the text into chunks, we use the span-marker extractor to extract relevant entities.
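For intuition, here's a minimal pure-Python sketch of the metadata-filtering payoff: chunks carry an "entities" field (as an entity extractor might produce), and we can hard-filter on it before any embedding similarity is computed. Chunk texts and entity tags below are made up for illustration.

```python
chunks = [
    {"text": "Sea levels rose 20cm since 1900.", "entities": ["sea level"]},
    {"text": "Arctic ice is declining rapidly.", "entities": ["Arctic"]},
    {"text": "Emissions pathways diverge after 2030.", "entities": ["emissions"]},
]

def filter_by_entity(chunks, entity):
    """Metadata filter: keep only chunks tagged with the given entity."""
    return [c for c in chunks if entity in c["entities"]]

hits = filter_by_entity(chunks, "Arctic")
print([c["text"] for c in hits])  # ['Arctic ice is declining rapidly.']
```

In a real pipeline this filter runs inside the vector store, shrinking the candidate set before top-k retrieval.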
Aug 7, 2023 4 tweets 3 min read
Routing 🚏 is a super simple concept that takes advantage of LLM decision making. Use it in:
⚡️Workflow automation
🔎 Search/retrieval over complex data

We’ve significantly upgraded our router (0.7.20) for retrieval/querying AND added a full guide 📗: gpt-index.readthedocs.io/en/latest/core…
Example 1: Using routing to decide between summarization or semantic search.

Given different query engines that act on your data in different ways, a router module can help decide which one to pick for a given question: gpt-index.readthedocs.io/en/latest/exam…
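Here's a stripped-down sketch of the routing idea (not the actual LlamaIndex router API): the LLM is shown numbered tool descriptions and asked to pick one by index. `fake_llm`, the tool names, and descriptions are all illustrative stand-ins.

```python
TOOLS = {
    "summary": "Useful for summarization questions over the document.",
    "semantic_search": "Useful for retrieving specific facts from the document.",
}

def build_selector_prompt(question):
    # Number the choices so the LLM can answer with a single index.
    choices = "\n".join(
        f"({i}) {name}: {desc}" for i, (name, desc) in enumerate(TOOLS.items())
    )
    return (
        f"Some choices are given below:\n{choices}\n"
        f"Return the number of the best choice for the question: {question}"
    )

def fake_llm(prompt):
    # Stand-in for a real LLM call: inspects only the question at the
    # end of the prompt and returns a choice index.
    question = prompt.rsplit(":", 1)[-1]
    return "(0)" if "summar" in question.lower() else "(1)"

def route(question):
    answer = fake_llm(build_selector_prompt(question))
    index = int(answer.strip("() \n"))
    return list(TOOLS)[index]

print(route("Summarize the document"))             # summary
print(route("What year was the report written?"))  # semantic_search
```

Swap `fake_llm` for a real completion call and the selected name for an actual query engine and you have the core of an LLM router.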
Jul 17, 2023 5 tweets 1 min read
Do you want more reliable LLM agents? Designing a good Tool API is a *crucial* ingredient 💡

Good API design is an important skill in any dev’s toolkit, but it’s especially important for AI/LLM engineers.

We’ve curated some Tool API best practices ✨🧵 medium.com/llamaindex-blo…

Context:

LLM-powered agents can now “theoretically” interact with arbitrary external services, but there’s a sentiment that a lot of agent implementations can be unreliable:
- might not properly reason about next steps
- might get stuck in a reasoning loop
Jul 14, 2023 7 tweets 3 min read
You can give an LLM-powered data agent access to ALL of Google 🔎, Gmail 📥, GCal 📆 with LlamaIndex + LlamaHub.

Easily build a personalized assistant!

Here’s how you can build one to not only find a dentist, but also easily schedule a dentist appt 🦷👇

colab.research.google.com/drive/1Br-QPwA…
Step 1: Our Gmail, Google Calendar, and Google Search Tool Specs offer rich API interfaces for agents.

For instance, the Gmail Tool spec allows you to create a draft, update it, and send it.

In total, we give the agent all 9 Tools to use.
Jul 13, 2023 4 tweets 2 min read
Stop building API connectors - build data agents that can automatically access ANY API defined with an OpenAPI spec 🛠️

Use LlamaIndex data agents + indexes to store/retrieve API specs, and use them to call web services! 🌐

Full Colab notebook: colab.research.google.com/drive/18aCO8CK…
In the notebook example, we initialize our OpenAPI tool which defines endpoints to load OpenAPI specs, and also a request tool that can make API requests.

Problem: the data returned by OpenAPI specs is too large ⚠️
Jun 28, 2023 4 tweets 2 min read
A Guide to LLM Structured Outputs 📗

LLMs can be incredibly powerful tools for extracting/outputting structured data.

We thought deeply about the abstractions and are excited to release a full guide on using this in LlamaIndex: gpt-index.readthedocs.io/en/latest/how_…
Our core abstractions (Pydantic Programs) work with both text completion and function calling endpoints.

Text completion requires an output parser to produce structured data. Function calling (OpenAI, Microsoft Guidance) extracts structured outputs out of the box.
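The output-parser path can be sketched with stdlib tools alone: append format instructions to the prompt, then parse the raw completion into a typed object. The LLM call is stubbed out here; the `Album` schema and instructions are illustrative, not the library's own.

```python
import json
from dataclasses import dataclass

@dataclass
class Album:
    name: str
    year: int

FORMAT_INSTRUCTIONS = (
    'Respond ONLY with JSON of the form {"name": "...", "year": 0}.'
)

def fake_llm(prompt):
    # Stand-in for a real text completion endpoint.
    return '{"name": "The Shining", "year": 1980}'

def run_program(question):
    raw = fake_llm(question + "\n" + FORMAT_INSTRUCTIONS)
    data = json.loads(raw)   # output parsing step
    return Album(**data)     # structured, typed result

album = run_program("Generate an album inspired by The Shining")
print(album)  # Album(name='The Shining', year=1980)
```

With a function calling endpoint, the JSON schema is sent as part of the API call instead, and the parsing step disappears.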
Jun 27, 2023 5 tweets 2 min read
We’ve shipped a HUGE upgrade to your ability to represent/customize metadata within a document 🛠️

End result: you can more precisely control/augment your data. Can dramatically boost LLM + retrieval performance! 🔥💪

We tried our best to minimize breaking changes 🦙👇

First, the basics.

The `extra_info` and `node_info` on the Document object are deprecated.

They are now replaced with a unified `metadata` dictionary!
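Conceptually, the migration amounts to merging the two legacy dicts into one `metadata` dict. A pure-Python sketch of that shape change (the field names and merge semantics here are an assumption for illustration, not the library's migration code):

```python
legacy_doc = {
    "text": "LlamaIndex is a data framework for LLM apps.",
    "extra_info": {"source": "docs", "author": "llama"},
    "node_info": {"start": 0, "end": 48},
}

def migrate(doc):
    # Collapse both legacy dicts into one unified metadata dict.
    metadata = {**doc.get("extra_info", {}), **doc.get("node_info", {})}
    return {"text": doc["text"], "metadata": metadata}

new_doc = migrate(legacy_doc)
print(new_doc["metadata"])
# {'source': 'docs', 'author': 'llama', 'start': 0, 'end': 48}
```

One dict means one place to attach, filter on, and template your document's structured fields.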
Jun 20, 2023 6 tweets 3 min read
We’ve added a lot of integrations with the @OpenAI function API and supporting guides:
🪆 Pydantic object extraction (w/ nesting)
🤖 Agents
🛠️ Query planning Tool
🧑‍🍳 Data Analysis Cookbook + 4 other guides

It’s easy to get lost amidst the updates; here’s a tour 🚞

Our OpenAIPydanticProgram creates an interface where you pass in an LLM text prompt and get back a Pydantic object.

These Pydantic objects can be nested or even recursive:

gpt-index.readthedocs.io/en/latest/exam…

Inspired by @jxnlco’s work
Jun 17, 2023 4 tweets 2 min read
Urgent Friday release (v0.6.27) 📣⚠️

The OpenAI function API makes it incredibly easy to build agents, but there’s one caveat: what if the number of functions is too large?

Solution: do fn retrieval with LlamaIndex first! Try it out in beta:

github.com/jerryjliu/llam…

Our new RetrieverOpenAIAgent implementation allows you to index all your candidate functions first (say using our vector index), and then perform retrieval on function signatures during query-time.

That way the final OpenAI call stays concise: only the retrieved candidate fns are passed in.
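A toy sketch of the retrieve-then-call pattern: score candidate function descriptions against the query and hand only the top-k to the agent. A simple word-overlap score stands in for the vector index here; the function names and descriptions are made up.

```python
FUNCTIONS = {
    "send_email": "Send an email to a recipient with a subject and body.",
    "search_web": "Search the web for a query and return results.",
    "create_event": "Create a calendar event with a time and title.",
}

def score(query, description):
    # Toy relevance: count shared words (a vector index would use
    # embedding similarity instead).
    return len(set(query.lower().split()) & set(description.lower().split()))

def retrieve_functions(query, k=1):
    ranked = sorted(
        FUNCTIONS, key=lambda name: score(query, FUNCTIONS[name]), reverse=True
    )
    return ranked[:k]

print(retrieve_functions("search the web for llamas"))  # ['search_web']
```

Only the retrieved signatures are then included in the OpenAI function call, keeping the payload small no matter how many tools you register.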
Jun 16, 2023 4 tweets 2 min read
One way to augment top-k embedding lookup is with Maximal Marginal Relevance (MMR): reduce redundancy in retrieved results, and increase diversity.

S/o to @BrouilletJeremy for adding to LlamaIndex! 👇

⚠️ BUT: This requires careful tuning ⚠️

The MMR algorithm looks like the following (left screenshot).

It increases the similarity between a candidate document and the query while decreasing its similarity with previously selected documents, traded off by a specified threshold between 0 and 1.
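The standard MMR formulation can be written in a few lines of pure Python. `threshold` plays the 0-to-1 trade-off role: 1.0 means pure relevance ranking, lower values penalize similarity to already-selected documents. The toy embeddings below are chosen so the effect is visible.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (
        math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    )

def mmr(query, docs, k, threshold):
    """Greedily select k doc indices by Maximal Marginal Relevance."""
    selected = []
    remaining = list(range(len(docs)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            # Redundancy = max similarity with anything already selected.
            redundancy = max(
                (cosine(docs[i], docs[j]) for j in selected), default=0.0
            )
            return threshold * cosine(docs[i], query) - (1 - threshold) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = [1.0, 0.0]
docs = [[0.9, 0.1], [0.8, 0.6], [0.6, -0.8]]  # docs 0 and 1 point the same way
print(mmr(query, docs, k=2, threshold=1.0))  # pure relevance: [0, 1]
print(mmr(query, docs, k=2, threshold=0.5))  # diversity kicks in: [0, 2]
```

This is why tuning matters: the same corpus returns visibly different result sets as the threshold moves.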
Jun 14, 2023 4 tweets 3 min read
In the span of 12 hrs, we shipped some major features using the @OpenAI function API 🚀🚀

🤖 Brand-new `OpenAIAgent` on our query tools
🧱 structured data extraction module
⚡️ Tutorial: build your *own* agent in 50 lines of code
🔎 Tutorial: use agent on our query tools

Our simple `OpenAIAgent` gives you a powerful agent capable of sequential tool use + callbacks/async.

Props to the @OpenAI function API (and the new @LangChainAI abstractions); it was super easy to spin up an agent interface under the hood.

Implementation: github.com/jerryjliu/llam…
Jun 12, 2023 4 tweets 3 min read
Parsing LLM outputs into structured formats is critical.

But feeding a “suggestion” into the input prompt doesn’t guarantee structure 🤔

In contrast, @Microsoft Guidance allows you to “force” a schema! 💡

You can now easily use it with LlamaIndex: 📗

gpt-index.readthedocs.io/en/latest/how_…

In most LLM APIs, you send an entire input prompt and get back an entire output.

Guidance uses a lower-level API at the token level, allowing you to interleave generation and prompting.

This allows you to directly “prompt” the JSON keys and leave spaces for generation. Image
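The key-prompting idea can be sketched without the library: the JSON skeleton is fixed text we emit ourselves, and only the value "holes" are generated. `fake_generate` is a stand-in for constrained token-level generation; the fields and values are made up.

```python
def fake_generate(field):
    # Stand-in for the LLM generating just one field's value.
    return {"name": "Llama", "genre": "documentary"}[field]

def fill_template(fields):
    # We "prompt" the keys, quotes, and braces ourselves; the model only
    # ever fills in values, so the output is valid JSON by construction.
    parts = [f'"{f}": "{fake_generate(f)}"' for f in fields]
    return "{" + ", ".join(parts) + "}"

print(fill_template(["name", "genre"]))
# {"name": "Llama", "genre": "documentary"}
```

Because the structure is never generated, it can never be malformed, which is the guarantee a prompt-level "suggestion" cannot give.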
Jun 10, 2023 5 tweets 5 min read
We want to give a quick shoutout to some projects built on top of @llama_index during MumbaiHacks (@mumbai_tech_) last weekend - including the winner of our bounty prize 🏅

Thanks to the organizers! A short thread below 🧵

[1] Congrats to Llama Readers (@dhruv___anand, @JayeshRathi19, @prateekkthakur) for winning the LlamaIndex Bounty!

The app solves the problem of forgetting plots/characters when reading a book; it gives personalized plot+character summaries!

dropbox.com/s/6e7e16uyptix…
Jun 8, 2023 6 tweets 4 min read
We’ve shipped a crazy number of features in the past two days ⚡️👇

🧠 *three* new vector stores (@supabase, Tair, DocArray)
🔌 @weights_biases tracing
🕸️ Graph storage support (@wey_gu)
🔎JSONPath query engine (@thesourabhd)
📄 Docs improvements (@disiok)
+ more!

[1] @supabase integration: supabase.com/docs/guides/ai…

Tair (@AlibabaGroup) integration: gpt-index.readthedocs.io/en/latest/exam…

In-memory stores using @docarray:
gpt-index.readthedocs.io/en/latest/exam…

gpt-index.readthedocs.io/en/latest/exam…
May 24, 2023 4 tweets 3 min read
Sometimes we have so many updates it’s hard to make sure our Twitter is up to date 😅

We have:
- 8 data loaders added to LlamaHub in the past week 📈🏡
- New blog posts 📝
- Recorded video from @MongoDB fireside chat! 🎥
- LlamaIndex 0.6.10 🦙

8 LlamaHub data loaders ❗️🏡

🐤Twitter snscrape (smyja)
🌤️OpenWeathermap (@iamadhee_)
📽️@Kaltura (@zohar)
🗃️Azure blob storage loader (Rivaaj)
monday.com loader
🕸️GraphQL data loader (@mesirii)

+ @docugami + BoardDocs which we’ve featured
May 19, 2023 4 tweets 3 min read
We’ve got too many updates to fit into one Tweet (0.6.9) ⭐️🔥
🗃️ Fsspec: persist data to s3/gcs/Azure (@hingeloss)
🗳️S3 KV Store (Sourabh)
🧺Accumulator response builder (Colin)
🪜Sub-question Query Engine (@disiok)
📄@docugami data loader (@tjaffri)

Will do followup highlights!

Fsspec: You can now persist objects in our vector store/docstore to ANY file system supported by fsspec! This includes AWS S3, GCS, Azure blob store, and much more.

Huge s/o to @hingeloss.

PR: github.com/jerryjliu/llam…