We asked, you answered — our State of AI Agents Report is here! 🤖✨
We surveyed 1300+ industry professionals, from developers to business leaders, on how they're using AI agents today — and the results are in.
What are the top use cases for agents? The biggest challenges when building agents? And who's finding success after deploying their agents to production?
Here are 5 key insights in the thread below 🧵👇
1⃣ Agent adoption is a coin toss, but nearly everyone has plans for it.
About 50% of respondents have agents in production, with mid-sized companies leading the charge. That number is poised to grow, with 78% planning to implement AI agents soon.
Feb 6 • 4 tweets • 2 min read
⛴️ WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models
WebVoyager is a new kind of web-browsing agent, developed by Hongliang He, @wyu_nd, et al.
Powered by large multimodal models like GPT-4V, it uses browser screenshots to conduct research, analyze images, and perform other tasks.
Older text-based web-browsing agents often fail to handle interactive web elements. Naive vision-based methods can struggle to use tools effectively.
WebVoyager uses “Set-of-mark” prompting to overlay the DOM with labeled bounding boxes and provide better guidance for the agent.
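For intuition, here's a rough sketch of that kind of annotation using Playwright (our own illustration, not the tutorial's code; the actual tutorial ships its own JS script for this):

```python
# Rough sketch only: number the interactive elements on the page so the
# multimodal model can say "click element 3" when looking at a screenshot.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")

    # Overlay a numbered label on each interactive element.
    for i, el in enumerate(page.query_selector_all("a, button, input, textarea, select")):
        box = el.bounding_box()
        if box is None:
            continue
        page.evaluate(
            """([box, i]) => {
                const label = document.createElement("div");
                label.textContent = i;
                label.style.cssText =
                  `position:absolute; left:${box.x}px; top:${box.y}px;` +
                  "background:red; color:white; z-index:99999; font-size:12px;";
                document.body.appendChild(label);
            }""",
            [box, i],
        )

    page.screenshot(path="annotated.png")  # this screenshot is what the agent "sees"
    browser.close()
```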
Check out the tutorial on how to build WebVoyager here:
2/ To jump straight to the code, check out the links below.
At a high level, the ingestion pipeline looks like this:
- Use document loaders to scrape the Python docs and API reference
- Chunk the documents
- Use the Indexing API to keep the latest docs in sync with the vector store
- Use GitHub Actions to run ingestion daily
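Roughly, in code (a minimal sketch; the loader, URL, and vector store below are placeholders, not the exact setup we run):

```python
from langchain.document_loaders import RecursiveUrlLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.indexes import SQLRecordManager, index

# 1. Scrape the docs (example URL).
raw_docs = RecursiveUrlLoader("https://python.langchain.com/docs/").load()

# 2. Chunk the documents.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.split_documents(raw_docs)

# 3. Sync docs <> vector store with the Indexing API, so re-runs only
#    touch documents that actually changed.
vectorstore = Chroma(collection_name="docs", embedding_function=OpenAIEmbeddings())
record_manager = SQLRecordManager("chroma/docs", db_url="sqlite:///ingest.db")
record_manager.create_schema()
index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")

# 4. Schedule this script from a daily GitHub Actions cron job.
```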
Sep 10, 2023 • 13 tweets • 4 min read
Weekend Reads
our favorites from this week
🧵
A thorough (and fun, well-written) overview of the GenAI space by David Kypuros, @bobbyjohnstx, and Jason Nagin at @RedHat
Demos, an overview of key players, code: it's got it all!
We’re particularly excited about a centralized hub’s promise to enable:
- Encoding of expertise
- Discoverability of prompts for a variety of models
- Inspectability
- Cross-team collaboration
🧵
Check it out here:
Read more about the motivation and future direction in our blog post here:
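Once prompts live in a shared hub, pulling one into your app looks roughly like this (a sketch; the handle below is just an example):

```python
# Requires the langchainhub package to be installed.
from langchain import hub

# Pull a shared prompt by its handle and use it like any other prompt template.
prompt = hub.pull("hwchase17/react")  # example handle, not a specific recommendation
print(prompt.template)
```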
Launching today! Easily track analytics on your project over time
👍 feedback
💸 usage (chains, agents, LLMs, tokens)
⏲️ latency
🚨 errors
💬 time to first token
👇
👍/👎 Feedback Charts
Capturing feedback is incredibly important for getting a sense of how your application is doing.
You can now track this feedback over time, giving you confidence that your users are having the best possible interactions with your application.
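If you're logging runs to LangSmith, attaching feedback to a run is a one-liner (a rough sketch; the run ID comes from your own traced run):

```python
from langsmith import Client

client = Client()
run_id = "<your-traced-run-id>"  # placeholder: the ID of a traced run in your project

# 1 = thumbs up, 0 = thumbs down; this is what shows up in the feedback charts.
client.create_feedback(run_id, key="user_score", score=1)
```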
Aug 25, 2023 • 5 tweets • 2 min read
🎙️💬 Fine-tune with LangChain's ChatLoaders 🚀
1/ Want to make ChatGPT respond "in your own voice"? This week, we’ve added ChatLoaders to LangChain, making it easier to fine-tune models to your unique writing style!
2/ ChatLoaders make it easy to load your conversational data from popular platforms as chat messages. Use them for:
- Chat bots that “get” your unique speaking style
- Chatting reliably in a target language
- Customer communication in your brand's voice
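Here's roughly what that flow looks like (a sketch assuming the WhatsApp loader and the OpenAI adapter; swap in whichever loader matches your data):

```python
# Rough sketch: load your own chats, attribute your messages to the "AI" role,
# and convert them into OpenAI fine-tuning format.
from langchain.chat_loaders.whatsapp import WhatsAppChatLoader
from langchain.chat_loaders.utils import merge_chat_runs, map_ai_messages
from langchain.adapters.openai import convert_messages_for_finetuning

loader = WhatsAppChatLoader(path="./whatsapp_export.txt")
sessions = list(map_ai_messages(merge_chat_runs(loader.lazy_load()), sender="Your Name"))

# Each session becomes a list of {"role": ..., "content": ...} dicts,
# ready to use as an OpenAI fine-tuning dataset.
training_data = convert_messages_for_finetuning(sessions)
```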
Aug 22, 2023 • 5 tweets • 3 min read
New in 🦜🔗 Python:
🌌 @ainetwork_ai agent toolkit
🐻❄️ @DataPolars data loader
🚿 @AzureML online endpoint deployment
🪐 @epsilla_inc vector store
a 🧵:
🌌 @ainetwork_ai agent toolkit
Enable an agent to transfer AINetwork tokens, read and write values, create apps, and more using the AINetworkToolkit by GH user klae01!
It’s not uncommon to encounter issues with LLM APIs. In production, you need to handle such issues gracefully.
We’ve introduced Fallbacks to the LangChain Expression Language (LCEL) to help with just that.
Available in 🦜🔗 Python and JS! a 🧵:
🙅Handling API Errors
A request to an LLM API can fail for any number of reasons: the API could be down, you could have hit a rate limit, and so on. Here’s how we can handle this with fallbacks:
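For example (a minimal sketch; the specific models here are just placeholders):

```python
from langchain.chat_models import ChatOpenAI, ChatAnthropic

# If the primary model's API errors out (downtime, rate limits, ...),
# the request is retried against the fallback model instead.
primary = ChatOpenAI(model="gpt-3.5-turbo")
fallback = ChatAnthropic()

llm = primary.with_fallbacks([fallback])
llm.invoke("Why did the chicken cross the road?")
```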
Aug 8, 2023 • 5 tweets • 2 min read
The latest in 🦜🔗:
We've got 4 new model integrations for you!
⏹️ @anyscalecompute chat models
🔢 BAAI general embedding models
🦙 Ollama LLMs
🌌 Nebula LLM by @symbldotai
a 🧵:
⏹️ @anyscalecompute chat models
Anyscale Endpoints is a fast and scalable API to integrate OSS LLMs into your app.
With the new chat model integration by GH user joshuasundance-swca, you can now use it to run models like Llama 2!
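A minimal sketch of what that looks like (the model name is an example; assumes an ANYSCALE_API_KEY in your environment):

```python
from langchain.chat_models import ChatAnyscale
from langchain.schema import HumanMessage

# Point at an OSS model served on Anyscale Endpoints (example model name).
chat = ChatAnyscale(model_name="meta-llama/Llama-2-7b-chat-hf")
print(chat([HumanMessage(content="Tell me a joke about open-source LLMs")]).content)
```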