ℏεsam
AI Engineer • giving birth to agents @CamelAIOrg
Sep 12, 2025 7 tweets 3 min read
Anthropic just dropped a full masterclass on building tools for your agents; here's the gist:
> evaluate your tools religiously
> limit the number of tools
> namespace your tools
> return meaningful context from tools
> prompt-engineer your tool descriptions
what each means:

1. evaluate your tools
use agents to create a test set of real-world tasks. then evaluate your tool on this benchmark. refine your tool description and args. create a hold-out test set and evaluate on that too. measure your tool's performance and make sure it works. (a rough sketch of a namespaced, well-described tool plus a tiny eval loop follows.)
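a minimal sketch of those ideas in Python. the schema shape and the `run_agent` harness are hypothetical placeholders, not Anthropic's actual API:

```python
# namespaced tool: "github.search_issues" instead of a bare "search"
SEARCH_ISSUES = {
    "name": "github.search_issues",
    # prompt-engineered description: say what it does, when to use it,
    # and what it returns -- the model only ever sees this text.
    "description": (
        "Search GitHub issues in a repository. Use this when the user "
        "asks about bugs or feature requests. Returns the top matches "
        "as 'title (#number): one-line summary', not raw JSON."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string", "description": "owner/name, e.g. camel-ai/camel"},
            "query": {"type": "string", "description": "free-text search terms"},
        },
        "required": ["repo", "query"],
    },
}

def evaluate(tasks, tools, run_agent):
    """Tiny eval loop over a benchmark of real-world tasks.

    tasks: [{'prompt': str, 'check': callable}] -- 'check' is a
    task-specific success criterion over the agent transcript.
    run_agent: your own agent harness (hypothetical here).
    """
    passed = 0
    for task in tasks:
        transcript = run_agent(task["prompt"], tools=[SEARCH_ISSUES, *tools])
        passed += task["check"](transcript)
    return passed / len(tasks)  # pass rate on the benchmark
```

running `evaluate` on both a dev set and a hold-out set, as the thread suggests, is what tells you whether a description tweak actually helped or just overfit to the tasks you tuned on.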
May 23, 2025 5 tweets 2 min read
large language models explained through 4 simple notes:

1. a little history and traditional methods.
2. vector embeddings and RNNs.
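as a quick illustration of the vector-embeddings idea in note 2 (this toy example is mine, not from the notes): words that appear in similar contexts end up with nearby vectors, so similarity becomes geometry.

```python
import numpy as np

# toy 3-d "embeddings"; real models learn these vectors in hundreds
# of dimensions from context, but the geometry works the same way.
emb = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.10, 0.20, 0.95]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for same direction, ~0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words
```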
Apr 5, 2025 11 tweets 4 min read
the best researchers from Meta, Yale, Stanford, Google DeepMind, and Microsoft laid out all we know about agents in a 264-page paper [book],

here are some of their key findings:

they build a mapping of agent components, such as perception, memory, and world modelling, onto regions of the human brain and compare the two (sketched in code after the list):

- brain is much more energy-efficient
- no genuine experience in agents
- brain learns continuously, agent is static
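a minimal sketch of that component split, assuming the paper's perception / memory / world-model decomposition. the class and method names are my own illustration, not the paper's code:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)       # episodic store
    world_model: dict = field(default_factory=dict)  # beliefs about the env

    def perceive(self, observation: str) -> str:
        """Perception: turn raw input into an internal representation."""
        return observation.strip().lower()

    def update(self, percept: str) -> None:
        """Memory + world modelling: store the percept, revise beliefs."""
        self.memory.append(percept)
        self.world_model["last_seen"] = percept

    def act(self) -> str:
        """Decision: pick an action from the current world model."""
        return f"respond_to:{self.world_model.get('last_seen', 'nothing')}"

agent = Agent()
agent.update(agent.perceive("  Hello, Agent!  "))
print(agent.act())  # respond_to:hello, agent!
```

note the third bullet in action: this agent only "learns" by appending to memory at inference time; the underlying policy stays static, unlike a brain that rewires continuously.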
Jan 29, 2025 8 tweets 3 min read
🧵SFT memorizes and RL generalizes.
based on OpenAI o1 and DeepSeek R1, we know that RL helps models with reasoning, but this paper (dropped today) explores:
> how does SFT or RL affect the model’s generalization to different rules?
> is SFT necessary for RL training?

in short, the paper argues that supervised fine-tuning (SFT) helps the model memorize and align with certain outputs, while reinforcement learning (RL) helps the model generalize to out-of-distribution (OOD) tasks. a toy contrast of the two objectives is sketched below.
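a toy contrast of the two training signals in PyTorch. the model, data, and reward function are placeholders showing the shape of each objective, not the paper's training code:

```python
import torch
import torch.nn.functional as F

def sft_step(model, input_ids, target_ids, optimizer):
    """SFT: cross-entropy against fixed target tokens.

    The loss pulls the model toward one exact reference output,
    which is why SFT tends to memorize / align."""
    logits = model(input_ids)  # (batch, seq, vocab), model is a placeholder
    loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)), target_ids.view(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def rl_step(model, input_ids, reward_fn, optimizer):
    """RL (REINFORCE-style): sample outputs, score them with a reward.

    There is no fixed target -- anything that earns reward is
    reinforced, which is what lets RL generalize to OOD rules."""
    logits = model(input_ids)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()      # sampled output tokens, (batch, seq)
    reward = reward_fn(actions)  # e.g. rule-based correctness, (batch,)
    loss = -(dist.log_prob(actions).sum(-1) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.mean().item()
```

the structural difference is the whole story: `sft_step` needs a `target_ids` tensor for every example, while `rl_step` only needs a scorer, so the model is free to find its own OOD-valid solutions.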