If you develop seriously with LLMs and are building complex agentic flows, you don't need convincing about this.
I've built the most comprehensive, up-to-date course on prompting LLMs, including reasoning LLMs.
4 hours of content! All Python!
Check it out if you're building AI Agents or RAG systems -- it covers prompting tips, emerging use cases, advanced prompting techniques, improving LLM reliability, and much more.
All code examples use pure Python and the OpenAI SDKs. That's it!
This course is for devs and AI engineers looking for a proper overview of LLM design patterns and prompting best practices.
We offer support, a forum, and live office hours too.
DM me for discount options; students and teams get special rates.
Can you cut the fine-tuning costs of an LLM by 75% and keep strong reasoning performance?
A new paper from Tencent AI Lab claims it might just be possible.
Let's find out how:
The First Few Tokens
The paper shows that a tiny prefix is all you need to improve your model's reasoning: no labels or massive datasets required!
It uses an unsupervised prefix fine-tuning method (UPFT) that trains only on prefix substrings (as few as 8 tokens) of the model's own generated solutions.
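To make the idea concrete, here is a minimal sketch of how prefix-only training data could be assembled: sample a solution from the model itself and keep only its first few tokens as the target. The model name, the 8-token prefix length, and the helper function are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch of building prefix-only fine-tuning data (not the paper's exact code).
# Assumes the OpenAI Python SDK (>= 1.0) and tiktoken; model name is a placeholder.
from openai import OpenAI
import tiktoken

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

PREFIX_TOKENS = 8  # UPFT reports prefixes as short as 8 tokens

def make_prefix_example(question: str) -> dict:
    # 1) Sample a solution from the model itself -- no human labels needed.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": question}],
        temperature=1.0,
    )
    solution = resp.choices[0].message.content

    # 2) Keep only the first few tokens of the sampled solution.
    prefix = enc.decode(enc.encode(solution)[:PREFIX_TOKENS])

    # 3) Emit a (question, prefix) pair to fine-tune on the prefix only.
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": prefix},
        ]
    }

print(make_prefix_example("What is 17 * 24? Think step by step."))
```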
Task template for Prefix Tuning
They use a simple task template for prefix tuning. By training on just a few leading tokens of the solution, the model learns a consistent way to start its reasoning without requiring complete, correct final answers. Other approaches need entire reasoning traces.
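As a rough illustration of the contrast (the paper's exact template wording may differ), the training target for each question is just the opening of a solution rather than the full reasoning trace:

```python
# Rough illustration of a prefix-tuning task template (the paper's exact wording may differ).
QUESTION = "A train travels 120 km in 2 hours. What is its average speed?"

# Full-trace SFT would train on a complete, verified reasoning chain:
full_trace_target = (
    "To find the average speed, divide distance by time. "
    "120 km / 2 h = 60 km/h. The answer is 60 km/h."
)

# Prefix tuning trains only on the first few tokens of a sampled solution,
# so no complete or verified answer is ever needed:
prefix_target = "To find the average speed, divide"

template = f"Question: {QUESTION}\nSolution: "
print(template + prefix_target)
```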
Google's AI Co-Scientist
It's a multi-agent AI system built with Gemini 2.0 to help accelerate scientific breakthroughs.
2025 is truly the year of multi-agent systems!
Let's break it down:
What's the goal of this AI co-scientist?
It can serve as a "virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries."
How is it built?
It uses a coalition of specialized agents inspired by the scientific method.
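To give a feel for what a coalition of specialized agents can look like in code, here is a minimal generate → critique → rank sketch using the OpenAI Python SDK as a stand-in. This is an illustration only: the AI co-scientist itself runs on Gemini 2.0, and its actual agent roles, prompts, and tooling are Google's own.

```python
# Minimal sketch of a coalition of specialized agents (generate -> critique -> rank).
# Illustration only; model name, prompts, and agent roles are assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model

def ask(role_prompt: str, content: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": content},
        ],
    )
    return resp.choices[0].message.content

def co_scientist(research_goal: str, n_hypotheses: int = 3) -> str:
    # Generation agent: propose candidate hypotheses for the research goal.
    hypotheses = [
        ask("You are a generation agent. Propose one novel, testable hypothesis.",
            research_goal)
        for _ in range(n_hypotheses)
    ]
    # Reflection agent: critique each hypothesis for novelty and feasibility.
    reviews = [
        ask("You are a reflection agent. Critique this hypothesis for novelty and feasibility.",
            h)
        for h in hypotheses
    ]
    # Ranking agent: pick the strongest hypothesis given the critiques.
    return ask(
        "You are a ranking agent. Given hypotheses and critiques, return the single best hypothesis.",
        "\n\n".join(f"Hypothesis: {h}\nCritique: {r}" for h, r in zip(hypotheses, reviews)),
    )

print(co_scientist("Propose a new research direction for slowing antibiotic resistance."))
```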