My team at @ServiceNowRSRCH is releasing TapeAgents: a holistic framework for agent development and optimization. At its core is the tape: a structured agent log.
When you build an agent, you want reusable components but also fine-grained control and step-by-step debugging. When you serve it, you want resumable sessions and streaming. When you optimize it, you want structured logs, agent configs, and finetuning support.
Tapes give you all of that!
A tape is a granular, structured log of the agent session. Everything goes through the tape in TapeAgents ⬇️
Agents read the tape, reason, and write to the tape. The environment executes the actions from the tape and writes observations back to it. Apps use tapes as session state. Algorithms use tapes to update agent prompts. Agents also produce finetuning data from tapes.
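To make the idea concrete, here is a minimal sketch of such a tape loop. This is illustrative only, not the actual TapeAgents API: every name below (Step, Tape, ToyAgent, ToyEnv, run_loop) is hypothetical.

```python
# Illustrative sketch of the "tape" idea -- NOT the real TapeAgents API.
# All names here (Step, Tape, ToyAgent, ToyEnv, run_loop) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Step:
    kind: str     # "thought" | "action" | "observation" | "final"
    author: str   # which component produced this step (for auditing)
    content: str


@dataclass
class Tape:
    steps: list[Step] = field(default_factory=list)


class ToyAgent:
    """Reads the whole tape, then writes the next step."""
    def act(self, tape: Tape) -> Step:
        if not any(s.kind == "observation" for s in tape.steps):
            return Step("action", "ToyAgent", "look_up('weather')")
        return Step("final", "ToyAgent", "It is sunny.")


class ToyEnv:
    """Executes actions from the tape and writes observations back."""
    def execute(self, action: Step) -> Step:
        return Step("observation", "ToyEnv", "sunny, 21C")


def run_loop(agent, env, tape: Tape, max_turns: int = 10) -> Tape:
    for _ in range(max_turns):
        step = agent.act(tape)                    # agent reads tape, reasons, writes
        tape.steps.append(step)
        if step.kind == "action":
            tape.steps.append(env.execute(step))  # env writes an observation
        elif step.kind == "final":
            break
    return tape                                   # the tape doubles as session state


tape = run_loop(ToyAgent(), ToyEnv(), Tape())
for s in tape.steps:
    print(f"[{s.author}] {s.kind}: {s.content}")
```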
Start your TapeAgents journey with our examples:
- Intro notebook
- QA agent for GAIA
- Web agent for WorkArena
- AutoGen-style data science agent team
- DSPy-style prompt tuning
- Two agent distillation examples
Also don't miss our tooling (see image).
More examples coming!
We know you've heard of many other great frameworks. How's TapeAgents different?
We've compared TapeAgents to LangGraph, DSPy and AutoGen (see below). TapeAgents is unique in targeting the needs of both development and data-driven agent optimization.
The nicest thing about TapeAgents is that we got rid of the opaque state of the agentic system, into which logs give only limited insight. We made the log the state! Every step in the log is signed by the agent component that made it. This is perfect for auditing and debugging.
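Continuing the hypothetical sketch above (again, not the real TapeAgents API): because every step records its author, auditing a run reduces to a simple filter over the tape.

```python
# Continuing the hypothetical sketch above -- NOT the TapeAgents API.
# Since every step is signed with its author, auditing is a filter.
def steps_by(tape: Tape, author: str) -> list[Step]:
    return [s for s in tape.steps if s.author == author]


# E.g. inspect everything the environment wrote during the session:
for s in steps_by(tape, "ToyEnv"):
    print(s.kind, s.content)
```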
TapeAgents is still an experimental framework. We release it to share our ideas and solicit your feedback. Please contact @DBahdanau, @ollmer or @JordanPrinceT with any questions or suggestions.
Last but not least: the tech report describes in detail how we trained a pleasant, cost-efficient form-filling assistant on synthetic data. Results speak for themselves ⬇️
We hope you will also use TapeAgents to build effective solutions with small models that use fewer watts!
The result is nice, the benchmark will be useful, some ideas are novel. But human level is still light years away.
1/n
The system ranks behind 54.3% of participants. Note that many participants are high-school or college students who are just honing their problem-solving skills. Most people reading this could easily train to outperform #AlphaCode, especially if time pressure is removed...
Limited time (e.g. 3 hours to solve 6 problems) is a key difficulty in competitive programming, so the baseline human is very constrained in this model-vs-human comparison. For #AlphaCode, the pretraining data, the finetuning data, the model size, the sampling - all were nearly maxed out.