Strategy #1: Decompose the Task into a Workflow
LLM agents excel at tasks that typically require human intuition, but they can't yet solve arbitrarily complex multi-step tasks. If a task can be broken into parts, decompose it into a workflow of multiple agents.
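For example, instead of one agent that "finds and proves bugs," split it into a cheap analysis agent gating an expensive PoC agent. A minimal Python sketch (`call_llm` and the prompts are hypothetical placeholders, not our actual CRS code):

```python
def call_llm(system: str, user: str) -> str:
    """Hypothetical helper: send one prompt to a model, return its text."""
    raise NotImplementedError  # wire up your provider's SDK here

def analyze_function(source: str) -> str:
    # Agent 1: narrow job, only decides *whether* this looks vulnerable.
    return call_llm(
        system="You are a security auditor. Answer YES or NO, then explain.",
        user=f"Does this function contain a memory-safety bug?\n\n{source}",
    )

def write_poc(source: str, analysis: str) -> str:
    # Agent 2: expensive, and only runs when agent 1 says YES.
    return call_llm(
        system="You write proof-of-concept inputs that trigger a given bug.",
        user=f"Function:\n{source}\n\nAnalysis:\n{analysis}\n\nWrite a PoC input.",
    )

def pipeline(source: str) -> str | None:
    analysis = analyze_function(source)
    if analysis.strip().upper().startswith("YES"):
        return write_poc(source, analysis)
    return None  # benign: the expensive agent never runs
```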
Strategy #2: Curate the Toolset
LLM agents repeatedly call tools until they reach their goal, so curating the toolset is crucial.
The toolset should be as powerful, focused, and helpful as possible. Put up guardrails to prevent your agents from reaching known dead ends!
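One way to do this: expose narrow, purpose-built tools instead of a raw shell, and let the tool itself refuse known dead ends. A rough sketch (the tool name, schema shape, and dead-end list are illustrative, not from our CRS):

```python
import subprocess

# Illustrative JSON-schema "function tool" definition, the shape most
# chat APIs accept in some form.
RUN_TESTS_TOOL = {
    "name": "run_tests",
    "description": "Build the project and run one test target. "
                   "Returns the first 50 lines of output.",
    "parameters": {
        "type": "object",
        "properties": {
            "target": {"type": "string", "description": "Test target name"},
        },
        "required": ["target"],
    },
}

KNOWN_DEAD_ENDS = {"benchmarks", "fuzz_forever"}  # targets that never help

def run_tests(target: str) -> str:
    if target in KNOWN_DEAD_ENDS:
        # Guardrail: steer the agent back instead of letting it burn turns.
        return f"error: '{target}' won't help here; pick a unit-test target"
    proc = subprocess.run(
        ["make", "test", f"TARGET={target}"],
        capture_output=True, text=True, timeout=300,
    )
    return "\n".join(proc.stdout.splitlines()[:50])
```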
Strategy #3: Structure Complex Outputs
Make sure your agent knows exactly what it needs to output, including the precise format of that output. Pro tip: you can ask it to output information you don't plan to use; requiring those extra fields steers it toward certain ways of thinking!
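A common way to pin this down is a schema the model must fill. A sketch using Pydantic (field names are illustrative; `rejected_hypotheses` is the kind of output you never read, but requiring it steers the model's reasoning):

```python
from pydantic import BaseModel

class VulnReport(BaseModel):
    function_name: str
    vulnerability_class: str        # e.g. "heap-buffer-overflow"
    rejected_hypotheses: list[str]  # never read downstream; forces the
                                    # model to weigh alternatives first
    trigger_input_description: str
    confidence: float               # 0.0 to 1.0

# Many SDKs can enforce a schema like this, e.g. OpenAI-style:
#   client.beta.chat.completions.parse(..., response_format=VulnReport)
```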
Strategy #4: Adapt to the Models
Some models excel at precise instruction following; others need more flexibility to achieve a high-level goal.
Also, some models struggle with tool-calling, but you can explore custom tool call formats or (ab)use the `tool_choice` API parameter.
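For example, with OpenAI-style APIs you can force every reply to arrive as a structured tool call (model and tool names here are illustrative):

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": "Is this strcpy call exploitable?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "submit_verdict",
            "description": "Record the final exploitability verdict.",
            "parameters": {
                "type": "object",
                "properties": {
                    "exploitable": {"type": "boolean"},
                    "reasoning": {"type": "string"},
                },
                "required": ["exploitable", "reasoning"],
            },
        },
    }],
    # The (ab)use: the model MUST call submit_verdict, so every reply
    # arrives as parseable JSON arguments instead of free-form prose.
    tool_choice={"type": "function", "function": {"name": "submit_verdict"}},
)
print(resp.choices[0].message.tool_calls[0].function.arguments)
```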
This was just a quick summary! For many more details, including specific examples of each strategy in our CRS, check out our blog post:
@theori_io's AIxCC CRS has already found dozens of 0day vulnerabilities, and we've barely scratched the surface! The best part: it's open source, so there are no secrets to hide (at least in the AIxCC version 😉)!
So, how does our CRS actually find these 0days? 🧵
We start by passing every function in the source code to LLMs, asking them to consider a wide range of vulnerability classes and explicitly accept/reject each class. We also run off-the-shelf static analyzers.
Combined, these approaches yield 10k+ candidate vulns for each project.
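In spirit, the triage prompt looks something like this sketch (the class list, prompt wording, and JSON shape are illustrative, not our actual prompts):

```python
import json

VULN_CLASSES = [
    "out-of-bounds-read", "out-of-bounds-write", "use-after-free",
    "integer-overflow", "format-string", "command-injection",
]

PROMPT = """For the function below, output a JSON object mapping each of
these vulnerability classes to {{"verdict": "accept" or "reject",
"reason": "<one line>"}}:
{classes}

Function:
{source}
"""

def triage(source: str, call_llm) -> list[str]:
    """Return the classes the model explicitly accepted for this function."""
    raw = call_llm(PROMPT.format(classes=json.dumps(VULN_CLASSES), source=source))
    verdicts = json.loads(raw)
    return [c for c in VULN_CLASSES
            if verdicts.get(c, {}).get("verdict") == "accept"]
```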
Of course, most of these candidates are actually benign, so running our full suite of LLM agents on each report would be wasteful.
Instead, we developed techniques to filter out false positives and cheaply zero in on the most likely candidate vulns.
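Conceptually, it's a staged funnel: the cheapest checks run first, the expensive agents last. A toy sketch (the specific signals are placeholders; real filters might use call-graph reachability or one small-model screening call per candidate):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    function: str
    vuln_class: str
    reachable: bool      # e.g. from static call-graph analysis
    screen_score: float  # e.g. from one cheap small-model call

def filter_candidates(candidates: list[Candidate]) -> list[Candidate]:
    survivors = [c for c in candidates if c.reachable]          # ~free
    survivors = [c for c in survivors if c.screen_score > 0.5]  # cheap
    return survivors  # only these reach the full LLM agent suite
```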