The core abstractions:
✅ Task managers - add/generate/parse/prioritize tasks using @gpt_index modules
🤖 Execution Agents - execute actions with the help of @LangChainAI agent abstractions
🏃AGI Runner - call agents/task managers in an outer loop!
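The three abstractions above compose into a simple outer loop. Here is a toy sketch of that loop in plain Python; the class and function names are illustrative stand-ins, not the actual llama-agi API (which uses @gpt_index modules and @LangChainAI agents under the hood):

```python
from collections import deque

class TaskManager:
    """Toy task manager: holds and hands out tasks (illustrative only)."""
    def __init__(self, tasks):
        self.tasks = deque(tasks)

    def add_tasks(self, new_tasks):
        self.tasks.extend(new_tasks)

    def get_next_task(self):
        return self.tasks.popleft() if self.tasks else None

def execute_task(task):
    # In llama-agi this would invoke a LangChain agent; here we just echo.
    return f"result of {task!r}"

def run_agi_loop(manager, max_iters=3):
    """Toy AGI runner: call the agent/task manager in an outer loop."""
    results = []
    for _ in range(max_iters):
        task = manager.get_next_task()
        if task is None:
            break
        results.append(execute_task(task))
        # A real task manager would also generate and re-prioritize
        # follow-up tasks from each result (e.g. via an LLM call).
    return results

manager = TaskManager(["define objective", "research topic"])
print(run_agi_loop(manager))
```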
Docs coming soon!
The goal of llama-agi (part of our Llama Lab repo), is to provide a fun playground for building experimental AGI capabilities.
It can also help inform core features we build into llama-index.
LlamaIndex 0.5.25 brings some awesome new integrations:
🗳️ MyScale data loader + Vector Store 👇
🏞️ BLIP/BLIP2 image captioning data loaders 👇
📘Jupyter ipynb data loaders
🤗 @huggingface FS data loader
⚙️Evaporate structured data extractor 👇
See details below!
MyScale is an AI database that can manage both structured and vectorized data.
LlamaIndex 0.4.25 🦙: some big features from the community 🥳
- ⚙️ An *optimizer* to help reduce your LLM token usage by 50% or more! (s/o @shi_hongyi) more details below! 👇
- ♻️ You can now *sync* Document updates with an index! (s/o Logan), see below 👇
(1) To optimize LLM token usage: specify a `SentenceEmbeddingOptimizer` when querying a vector index.
We “strip out” sentences from a text chunk that have low embedding similarity to the query, before the LLM call.
See 🖼️ below - we cut LLM token usage by 50%
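The idea behind the optimizer can be sketched with a toy bag-of-words "embedding". This is purely illustrative: the real `SentenceEmbeddingOptimizer` uses actual embedding models and is passed into the query call, whereas the helper below just shows the pruning logic:

```python
import math
import re

def embed(text):
    # Toy bag-of-words "embedding"; the real optimizer uses a model.
    vec = {}
    for word in re.findall(r"\w+", text.lower()):
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a.get(k, 0) * v for k, v in b.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def prune_chunk(chunk, query, threshold=0.1):
    """Drop sentences with low similarity to the query before the LLM call."""
    q = embed(query)
    kept = [s for s in chunk.split(". ") if cosine(embed(s), q) >= threshold]
    return ". ".join(kept)

chunk = "Llamas live in the Andes. The stock market fell today. Llamas eat grass"
print(prune_chunk(chunk, "where do llamas live"))
# The off-topic middle sentence is stripped, shrinking the LLM prompt.
```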
(2) You can now sync updates from your source Documents to an index - we’ll only update the index for Documents that have changed.
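Conceptually, syncing works by tracking each Document's id and a hash of its content, and re-inserting only documents whose hash changed. The sketch below illustrates that mechanism with a toy class; it is not the real llama-index API:

```python
import hashlib

class ToyIndex:
    """Toy id+hash-based sync, illustrating the idea (not llama-index itself)."""
    def __init__(self):
        self.docs = {}  # doc_id -> content hash

    @staticmethod
    def _hash(text):
        return hashlib.sha256(text.encode()).hexdigest()

    def refresh(self, documents):
        """documents: list of (doc_id, text) pairs. Returns re-indexed ids."""
        updated = []
        for doc_id, text in documents:
            h = self._hash(text)
            if self.docs.get(doc_id) != h:
                self.docs[doc_id] = h  # re-embed/re-insert only this doc
                updated.append(doc_id)
        return updated

index = ToyIndex()
index.refresh([("a", "hello"), ("b", "world")])
print(index.refresh([("a", "hello"), ("b", "world, updated")]))  # only "b" changed
```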
LlamaIndex 0.4.15:
- Biggest update is that we’ve fixed the composability over vector stores! See below for a demonstration of the new UX flow. 👇
- Added @weaviate_io multi-threaded batch importing to decrease load time (thanks @MohdShukriHasa1 for the suggestion!)
A big pain point with the composability UX was that it didn’t fully work over external vector stores.
Why? Because configs were keyed by type, not on ID.
So if you have two Pinecone/Weaviate stores, you couldn’t distinguish them during query-time!
Now that’s fixed.
The biggest change here is to set an ID in the query config when defining a graph; this ties the config to a specific index!
You can set an id for each index through `set_doc_id`.
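To see why keying by type breaks down, here is a schematic (not the actual query-config schema): with type-keyed configs, two indices backed by the same store type collide, while id-keyed configs keep them distinct.

```python
# Schematic only: illustrates keying by type vs. keying by ID,
# not the real llama-index query-config structure.

# Keyed by type: a single entry must serve BOTH Pinecone-backed indices,
# so there is no way to give them different query settings.
configs_by_type = {
    "vector": {"similarity_top_k": 3},
}

# Keyed by per-index ID (set via something like `set_doc_id`):
# each index gets its own config at query time.
configs_by_id = {
    "pinecone_docs": {"similarity_top_k": 3},
    "pinecone_blog": {"similarity_top_k": 1},
}

def config_for(index_id, configs):
    return configs[index_id]

print(config_for("pinecone_blog", configs_by_id))
```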
Today, we’re kicking off a rebrand of @gpt_index to: 🎉🦙 LlamaIndex 🦙🎉
🦙 LlamaIndex + 🦙 LlamaHub aim to provide the central interface between LLMs and your data.
This will be a *gradual* process. We’re starting this off by uploading a `llama-index` pip package you can now use (don’t worry, `pip install gpt-index` is still there!)
llama-index is an exact duplicate of gpt-index (just `import llama_index` instead of `import gpt_index`)
READMEs are not updated yet (sorry about that!), but the package is here: pypi.org/project/llama-….
Over the course of the next few weeks, we’ll start to update the docs/readmes and classes (while still supporting backwards compat).