Building with AI agents @dair_ai • Prev: Meta AI, Galactica LLM, Elastic, PaperswithCode, PhD • I also teach how to leverage and build with LLMs & AI Agents ⬇️
Mar 13 • 5 tweets • 2 min read
Prompt Engineering is NOT dead!
If you develop seriously with LLMs and are building complex agentic flows, you don't need convincing about this.
I've built the most comprehensive, up-to-date course on prompting LLMs, including reasoning LLMs.
4 hours of content! All Python!
Check it out if you're building AI Agents or RAG systems -- prompting tips, emerging use cases, advanced prompting techniques, enhancing LLM reliability, and much more.
All code examples use pure Python and the OpenAI SDKs. That's it!
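For a flavor of that setup, here is a minimal sketch of prompting an LLM with just the OpenAI Python SDK; the model name and prompts are illustrative, not taken from the course:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A simple prompt: a system message steering style, plus a user question
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant. Answer in at most two sentences."},
        {"role": "user", "content": "Explain what retrieval-augmented generation (RAG) is."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```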
Mar 11 • 16 tweets • 6 min read
NEW: OpenAI announces new tools for building agents.
Here is everything you need to know:
OpenAI has already launched two big agent products: Deep Research and Operator.
The tools are now coming to the APIs for developers to build their own agents.
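The rest of the thread covers the specifics. As a hedged illustration, here is roughly what a call to OpenAI's new Responses API with a built-in web-search tool looks like; the model name and tool id are assumptions based on OpenAI's docs, not taken from this thread:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Sketch: the Responses API with a built-in web-search tool attached
response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="Summarize this week's announcements about AI agents.",
)
print(response.output_text)
```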
Mar 5 • 8 tweets • 3 min read
A Few Tokens Are All You Need
Can you cut the fine-tuning costs of an LLM by 75% and keep strong reasoning performance?
A new paper from the Tencent AI Lab claims that it might just be possible.
Let's find out how:
The First Few Tokens
The paper shows that a tiny prefix is all you need to improve your model's reasoning; no labels or massive datasets are required!
It uses an unsupervised prefix fine-tuning method (UPFT) that requires only the prefix substrings (as few as 8 tokens) of generated solutions.
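A rough sketch of the data-construction idea as summarized above; the prefix length default and helper names are illustrative, not from the paper:

```python
def build_prefix_dataset(prompts, generate_solution, prefix_tokens=8):
    """Build an unsupervised fine-tuning set from solution prefixes only.

    `generate_solution` is a hypothetical callable that samples a solution
    from the model being tuned; no ground-truth labels are used anywhere.
    """
    dataset = []
    for prompt in prompts:
        solution = generate_solution(prompt)
        # Keep only the first few tokens of the self-generated solution
        # (crude whitespace tokenization, for illustration only).
        prefix = " ".join(solution.split()[:prefix_tokens])
        dataset.append({"prompt": prompt, "completion": prefix})
    return dataset
```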
Feb 27 • 7 tweets • 2 min read
Say goodbye to Chain-of-Thought.
Say hello to Chain-of-Draft.
To address the issue of latency in reasoning LLMs, this work introduces Chain-of-Draft (CoD).
Read on for more:
What is it about?
CoD is a new prompting strategy that drastically cuts down verbose intermediate reasoning while preserving strong performance.
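To make the idea concrete, here is a hedged sketch of a CoD-style prompt; the instruction wording is paraphrased, not copied from the paper:

```python
from openai import OpenAI

client = OpenAI()

# Paraphrased Chain-of-Draft-style instruction (illustrative wording)
cod_instruction = (
    "Think step by step, but keep only a minimal draft for each step, "
    "at most five words per step. Return the final answer after '####'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": cod_instruction},
        {"role": "user", "content": "Jason had 20 lollipops. He gave Denny some. Now he has 12. How many did he give Denny?"},
    ],
)
print(response.choices[0].message.content)
```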
Feb 20 • 14 tweets • 5 min read
NEW: Sakana AI introduces The AI CUDA Engineer.
It's an end-to-end agentic system that can produce highly optimized CUDA kernels.
This is wild! They used AI to discover ways to make AI run faster!
Let's break it down:
The Backstory
Sakana AI's mission is to build more advanced and efficient AI using AI.
Their previous work includes The AI Scientist, LLMs that produce more efficient methods to train LLMs, and automating the development of new AI foundation models.
And now they just launched The AI CUDA Engineer.
Feb 19 • 11 tweets • 4 min read
NEW: Google introduces AI co-scientist.
It's a multi-agent AI system built with Gemini 2.0 to help accelerate scientific breakthroughs.
2025 is truly the year of multi-agents!
Let's break it down:
What's the goal of this AI co-scientist?
It can serve as a "virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries."
Feb 18 • 23 tweets • 7 min read
BREAKING: xAI announces Grok 3
Here is everything you need to know:
Elon mentioned that Grok 3 is an order of magnitude more capable than Grok 2.
Feb 15 • 8 tweets • 2 min read
Introducing... Agent Leaderboard!
Many devs ask me which LLMs work best for AI agents.
The new Agent Leaderboard (by @rungalileo) was built to provide insights and evaluate LLMs on real-world tool-calling tasks—crucial for building AI agents.
Let's go over the results:
1️⃣ Leader
After evaluating 17 leading LLMs across 14 diverse datasets, here are the key findings:
Google's 𝗚𝗲𝗺𝗶𝗻𝗶-𝟮.𝟬-𝗳𝗹𝗮𝘀𝗵 leads with a 0.94 score at a remarkably low cost.
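For readers unfamiliar with what a tool-calling task involves, here is a generic sketch using the OpenAI SDK; the tool, prompts, and model are hypothetical and unrelated to the leaderboard's datasets:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool definition; the leaderboard's own datasets and tools differ
tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_status",
        "description": "Look up the live status of a flight by its number.",
        "parameters": {
            "type": "object",
            "properties": {"flight_number": {"type": "string"}},
            "required": ["flight_number"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Is flight UA123 on time?"}],
    tools=tools,
)

# A capable model should respond with a tool call and well-formed arguments
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)
```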
Jan 23 • 16 tweets • 4 min read
OpenAI Introduces Operator & Agents!
Here is everything you need to know:
Operator is a system that can use a web browser to accomplish tasks.
Operator can look at a webpage and interact with it by typing, clicking, and scrolling.
It's available as a research preview, initially for Pro users in the US, with access for Plus users coming later.
Jan 21 • 4 tweets • 2 min read
Goodbye web scrapers!
Say hello to /extract by @firecrawl_dev
Just write a prompt and get the web data you need!
It doesn’t get any simpler than this.
The /extract endpoint is simple to use. Provide a prompt and a schema and retrieve any data you need from a website.
I’ve added the /* to the URL to find and extract information across the entire website.
The endpoint can return up to thousands of data points at once.
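Here is a hedged sketch of what a raw call to the /extract endpoint might look like; the exact endpoint version and payload shape are assumptions, so check the Firecrawl docs:

```python
import requests

# Assumed request shape: a prompt, a JSON schema, and a wildcard URL
payload = {
    "urls": ["https://example.com/*"],  # /* asks it to look across the whole site
    "prompt": "Extract the product names and prices listed on this site.",
    "schema": {
        "type": "object",
        "properties": {
            "products": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "price": {"type": "string"},
                    },
                },
            }
        },
    },
}

resp = requests.post(
    "https://api.firecrawl.dev/v1/extract",
    headers={"Authorization": "Bearer YOUR_FIRECRAWL_API_KEY"},
    json=payload,
)
print(resp.json())
```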
Jan 20 • 4 tweets • 2 min read
The DeepSeek-R1 paper is a gem!
Highly encourage everyone to read it.
It's clear that LLM reasoning capabilities can be learned in different ways.
RL, if applied correctly and at scale, can lead to some really powerful and interesting scaling and emergent properties.
The multi-stage training might not make sense initially, but it provides clues about optimizations that we can continue to tap into.
Data quality is still very important for enhancing the usability of the LLM.
Unlike other reasoning LLMs, DeepSeek-R1's training recipe and weights are open so we can build on top of it. This opens up exciting research opportunities.
About the attached clip: the previous preview model wasn't able to solve this task. DeepSeek-R1 can solve this and many other tasks that o1 can solve. It's a very good model for coding and math.
When DeepSeek said "on par with OpenAI-o1" I thought they were just hyping. But based on my tests, it's clearly not just hype.
Wanted to add that DeepSeek-R1 got all of the hard tasks from the OpenAI LLM reasoning blog post correct for me. This is wild and totally unexpected! The only task it failed (the crossword puzzle) is one that o1 also fails.
Jan 8 • 14 tweets • 4 min read
Agents Overview
Great write-up on Agents by Chip.
Here are my takeaways:
🤖 Agents Overview
An AI agent is made up of both the environment it operates in (e.g., a game, the internet, or computer system) and the set of actions it can perform through its available tools. This dual definition is fundamental to understanding how agents work.
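A minimal sketch of that dual definition in code; the class and field names are mine, not from Chip's write-up:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    """Toy illustration of agent = environment + tools."""
    environment: str                                    # e.g. "web browser", "code sandbox"
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)

    def act(self, tool_name: str, **kwargs) -> str:
        # Perform one action in the environment via a named tool
        return self.tools[tool_name](**kwargs)

web_agent = Agent(
    environment="the internet",
    tools={"search": lambda query: f"results for {query!r}"},
)
print(web_agent.act("search", query="AI agents"))
```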
Jan 6 • 13 tweets • 5 min read
Google recently published this great whitepaper on Agents.
2025 is going to be a huge year for AI Agents.
Here's what's included:
- Introduction to AI Agents
- The role of tools in Agents
- Enhancing model performance with targeted learning
- Quick start to Agents with LangChain
- Production applications with Vertex AI Agents
- o1 is launching out of preview in the API
- support for function calling, structured output, and developer messages
- reasoning_effort parameter to tell the model how much effort to spend on thinking
- vision inputs in the API are here too
Visual inputs with a developer message (a new spin on the system message for better steering the model) inside the OpenAI Playground
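A hedged sketch combining the features listed above (a developer message plus the reasoning_effort parameter); the prompts are illustrative:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="high",  # how much effort the model spends on thinking
    messages=[
        # developer message: the new spin on the system message
        {"role": "developer", "content": "You are a careful math tutor. Give only the final answer."},
        {"role": "user", "content": "What is the sum of the first 50 positive integers?"},
    ],
)
print(response.choices[0].message.content)
```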
Dec 6, 2024 • 8 tweets • 2 min read
Summary of today's OpenAI announcement:
- introduces reinforcement fine-tuning (RFT) of o1
- tune o1 to learn to reason in new ways in custom domains
- RFT is better and more efficient than regular fine-tuning; needs just a few examples
1/n
How it looks in the dev platform. Examples show how to select RFT on o1-mini
Jul 18, 2024 • 7 tweets • 2 min read
That's right! It's a huge week for small language models (SLMs)
JUST IN: Google DeepMind releases Gemma, a series of open models inspired by the same research and tech used for Gemini.
Open models fit various use cases so this is a very smart move from Google.
Great to see that Google recognizes the importance of openness in AI science and technology.
There are 2B (trained on 2T tokens) and 7B (trained on 6T tokens) models, including base and instruction-tuned versions. Both were trained with a context length of 8192 tokens.
Commercial use is allowed.
These are not multimodal models but based on the reported experimental results they appear to outperform Llama 2 7B and Mistral 7B.
I am excited about those MATH, HumanEval, GSM8K, and AGIEval results. These are really incredible results for a model this size.
Excited to dive deeper into these models. The model prompting guide is dropping soon. Stay tuned!
Blog:
Google DeepMind just announced Gemini, their largest and most capable AI model.
A short summary of all you need to know:
1) What it is - Built with multimodal support from the ground up. Remarkable multimodal reasoning capabilities across text, images, video, audio, and code. Nano, Pro, and Ultra models are available to support different scenarios, ranging from efficiency and scale to highly complex capabilities.
2) Performance - The results on the standard benchmarks (MMLU, HumanEval, Big-Bench-Hard, etc.) show improvement compared to GPT-4 (though not by a lot). Still very impressive!
3) Outperforming human experts - They claim that Gemini is the first model to outperform human experts on MMLU (Massive Multitask Language Understanding), a popular benchmark to test the knowledge and problem-solving abilities of AI models.
4) Capabilities - Gemini surpasses SOTA performance on a bunch of multimodal tasks like infographic understanding and mathematical reasoning in visual contexts. There was a lot of focus on multimodal reasoning capabilities with the ability to analyze documents and uncover knowledge that's hard to discern. The model capabilities reported are multimodality, multilinguality, factuality, summarization, math/science, long-context, reasoning, and more. It's probably one of the most capable models by the looks of it.
5) Trying it out - Apparently, a fine-tuned Gemini Pro is available to use via Bard. Can't wait to experiment with this soon.
6) Availability - Models will be made available for devs on Google AI Studio and Google Cloud Vertex AI by Dec 13th.
blog:
technical report:
Here is the model verifying a student's solution to a physics problem. Huge implications in education. Will be taking a very close look at applications here.
Aug 2, 2023 • 5 tweets • 2 min read
You can now connect Jupyter with LLMs!
It provides an AI chat-based assistant within the Jupyter environment that allows you to generate code, summarize content, create comments, fix errors, etc.
You can even generate entire notebooks using text prompts!
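A hedged sketch, assuming the thread refers to the Jupyter AI extension; the exact magics and provider/model ids may differ, so check the Jupyter AI docs:

```python
# In one notebook cell, load the magics shipped with the extension
# (install first with: pip install jupyter-ai):
%load_ext jupyter_ai_magics

# In a separate cell, ask the assistant to generate code
# (the provider:model id below is illustrative):
%%ai openai-chat:gpt-4o-mini
Write a Python function that plots a histogram of a pandas DataFrame column.
```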
How can you build your own custom ChatGPT-like system on your data?
This is not easy, as it can require complex architectures and pipelines.
Given the high demand, I started to explore the ChatLLM feature by @abacusai.
I’m very impressed! Let's take a look at how it works:
Everyone has a knowledge base or data sitting around, like wiki pages, documentation, customer tickets, etc.
With ChatLLM you can quickly create a chat app, like ChatGPT, that helps you discover and answer questions about your data.
Jun 22, 2023 • 7 tweets • 3 min read
MosaicML just released MPT-30B!
The previous model they released was 7B. MPT-30B is an open-source model licensed for commercial use that is more powerful than MPT-7B.
8K context and 2 fine-tuned variants: MPT-30B-Instruct and MPT-30B-Chat.
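A rough sketch of loading one of the MPT-30B variants with Hugging Face transformers; MPT ships custom modeling code, so trust_remote_code=True is required, and the generation settings here are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mosaicml/mpt-30b-instruct"  # or "mosaicml/mpt-30b" / "mosaicml/mpt-30b-chat"
tokenizer = AutoTokenizer.from_pretrained(name)
# trust_remote_code=True lets transformers run MPT's custom model code
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Explain what an 8K context window means.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```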