Let's build a multi-agent content creation system (100% local):
Before we dive in, here's a quick demo of what we're building!
Tech stack:
- @motiadev as the unified backend framework
- @firecrawl_dev to scrape web content
- @ollama to serve the DeepSeek-R1 LLM locally
The only AI framework you'll ever need to learn! 🚀
Here's the workflow:
- User submits URL to scrape
- Firecrawl scrapes content and converts it to markdown
- Twitter and LinkedIn agents run in parallel to generate content
- Generated content gets scheduled via Typefully
Now, let's dive into code!
Steps are the fundamental building blocks of Motia.
They consist of two main components:
1️⃣ The Config object: It instructs Motia on how to interact with a step.
2️⃣ The handler function: It defines the main logic of a step.
Check this out 👇
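Here's what a step looks like in Python — a minimal sketch of the config-plus-handler shape. The exact Motia Python API may differ slightly between versions, so treat the field names as illustrative:

```python
# A minimal Motia-style event step: a config object plus a handler.
# Field names follow Motia's step conventions; treat them as illustrative.

config = {
    "type": "event",                      # step kind: api, event, or cron
    "name": "summarize",                  # unique step name shown in the Workbench
    "subscribes": ["content.scraped"],    # events that trigger this step
    "emits": ["content.summarized"],      # events this step can emit
    "flows": ["content-pipeline"],        # flow(s) this step belongs to
}

async def handler(input, ctx):
    # Main logic: transform the incoming payload...
    summary = input["markdown"][:200]
    # ...and emit an event to trigger the next step in the flow.
    await ctx.emit({"topic": "content.summarized", "data": {"summary": summary}})
```

The config tells Motia *when* to run the step; the handler is *what* it does.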
With that understanding in mind, let's start building our content creation workflow... 👇
Step 1: Entry point (API)
We start our content generation workflow by defining an API step that takes in a URL from the user via a POST request.
Check this out 👇
Step 2: Web scraping
This step scrapes the article content using Firecrawl and emits an event that triggers the next step in the workflow.
Steps can be connected together in a sequence, where the output of one step becomes the input for another.
Check this out 👇
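The handler logic boils down to two things: fetch markdown from Firecrawl, then emit it downstream. Here's a sketch that calls Firecrawl's REST API directly (endpoint and response shape per Firecrawl's v1 docs; in the real project you'd use the firecrawl SDK inside a Motia step, and the event topic name here is illustrative):

```python
# Sketch of the scraping step: fetch markdown via Firecrawl's v1 REST API,
# then build the event payload handed to the agents downstream.
import json
import urllib.request

def scrape_to_markdown(url: str, api_key: str) -> str:
    req = urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape",
        data=json.dumps({"url": url, "formats": ["markdown"]}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # v1 responses nest the page content under "data".
    return body["data"]["markdown"]

def build_scraped_event(url: str, markdown: str) -> dict:
    # Payload handed to the Twitter/LinkedIn agents downstream.
    return {"topic": "content.scraped", "data": {"url": url, "markdown": markdown}}
```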
Step 3: Content generation
The scraped content is fed to the Twitter and LinkedIn agents, which run in parallel and generate curated posts.
We define all our prompting and AI logic in the handler that runs automatically when a step is triggered.
Check this out 👇
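The core of each agent is a platform-specific prompt plus a call to the locally served model. Here's a sketch against Ollama's REST API on its default port (the prompt rules are illustrative, not the project's actual prompts):

```python
# Sketch of the content-generation handler: one prompt builder per agent,
# plus a call to the local Ollama server's chat endpoint (default port 11434).
import json
import urllib.request

def build_prompt(platform: str, article_md: str) -> str:
    # Platform-specific instructions; tweak tone/length rules to taste.
    rules = {
        "twitter": "Write a punchy Twitter thread (each tweet under 280 chars).",
        "linkedin": "Write a professional LinkedIn post with a strong hook.",
    }
    return f"{rules[platform]}\n\nSource article (markdown):\n{article_md}"

def generate(prompt: str, model: str = "deepseek-r1") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return one JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

In the workflow, both agent steps subscribe to the same scraped-content event, which is what makes Motia run them in parallel.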
Step 4: Scheduling
After the content is generated, we draft it in Typefully, where we can easily review our social media posts.
Motia also allows us to mix and match different languages within the same workflow, providing great flexibility.
Check this TypeScript code 👇
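The scheduling step itself is TypeScript in this project; for reference, here's the underlying Typefully API call sketched in Python. The endpoint, `X-API-KEY` header, and `schedule-date` field follow Typefully's public API docs — verify against the current docs before relying on them:

```python
# Sketch of a Typefully draft-scheduling call (field names per Typefully's
# public API docs; double-check against the current docs).
import json
import urllib.request

def build_draft_payload(tweets: list[str]) -> dict:
    # Typefully splits a draft into separate tweets on 4 consecutive newlines.
    return {"content": "\n\n\n\n".join(tweets), "schedule-date": "next-free-slot"}

def schedule_draft(tweets: list[str], api_key: str) -> dict:
    req = urllib.request.Request(
        "https://api.typefully.com/v1/drafts/",
        data=json.dumps(build_draft_payload(tweets)).encode(),
        headers={
            "X-API-KEY": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```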
After defining our steps, we install the required dependencies with `npm install` and launch the Motia Workbench with `npm run dev`.
Check this out 👇
The Motia Workbench provides an interactive UI to help build, monitor, and debug our flows.
With one click, you can also deploy it to the cloud! 🚀
Check this out 👇
This project is built using Motia!
Motia is a unified system where APIs, background jobs, events, and agents are just plug-and-play steps.
- User submits URL to scrape
- Firecrawl scrapes content and converts to markdown
- Twitter and LinkedIn agents run in parallel to generate content
- Generated content gets scheduled via Typefully
If you found it insightful, reshare with your network.
Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!
dLLM is a Python library that unifies the training & evaluation of diffusion language models.
You can also use it to turn ANY autoregressive LM into a diffusion LM with minimal compute.
100% open-source.
Here's why this matters:
Traditional autoregressive models generate text left-to-right, one token at a time. Diffusion models work differently - they refine the entire sequence iteratively, giving you better control over generation quality and more flexible editing capabilities.
You're in a Research Scientist interview at Google.
Interviewer: We have a base LLM that's terrible at maths. How would you turn it into a maths & reasoning powerhouse?
You: I'll get some problems labeled and fine-tune the model.
Interview over.
Here's what you missed:
When outputs are verifiable, labels become optional.
Maths, code, and logic can be automatically checked and validated.
Let's use this fact to build a reasoning model without manual labelling.
We'll use:
- @UnslothAI for parameter-efficient finetuning.
- @HuggingFace TRL to apply GRPO.
Let's go! 🚀
What is GRPO?
Group Relative Policy Optimization is a reinforcement learning method that fine-tunes LLMs for math and reasoning tasks using deterministic reward functions, eliminating the need for labeled data.
Here's a brief overview of GRPO before we jump into code:
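The key ingredient is a deterministic reward function: since maths answers are verifiable, we can score completions with plain string checks instead of labels. A sketch below, using the GSM8K-style `####` answer delimiter; TRL's GRPOTrainer passes the generated completions plus your dataset columns (here, an assumed `answer` column) as keyword arguments:

```python
# A deterministic, label-free reward for verifiable maths answers — the
# kind of signal GRPO trains on. The "####" delimiter follows the GSM8K
# convention; the dataset column name "answer" is illustrative.

def extract_answer(text: str):
    # Expect the final answer after a "####" marker, GSM8K-style.
    if "####" not in text:
        return None
    return text.split("####")[-1].strip()

def correctness_reward(completions, answer, **kwargs):
    # TRL calls reward functions with completions plus dataset columns;
    # return one float per completion.
    rewards = []
    for completion, gold in zip(completions, answer):
        pred = extract_answer(completion)
        rewards.append(1.0 if pred == str(gold).strip() else 0.0)
    return rewards
```

This plugs into `trl.GRPOTrainer(model=..., reward_funcs=[correctness_reward], args=GRPOConfig(...), train_dataset=...)`, with Unsloth handling the parameter-efficient side.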
NOBODY wants to send their data to Google or OpenAI.
Yet here we are, shipping proprietary code, customer information, and sensitive business logic to closed-source APIs we don't control.
While everyone's chasing the latest closed-source releases, open-source models are quietly becoming the practical choice for many production systems.
Here's what everyone is missing:
Open-source models are catching up fast, and they bring something the big labs can't: privacy, speed, and control.
I built a playground to test this myself. Used CometML's Opik to evaluate models on real code generation tasks - testing correctness, readability, and best practices against actual GitHub repos.
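The correctness check reduces to: run the generated code against hidden test cases and count passes. Here's a simplified stand-in for that loop (not Opik's actual API — Opik adds tracing, datasets, and LLM-judged metrics on top of signals like this):

```python
# Simplified stand-in for a code-generation correctness check (not Opik's
# API): exec the model's generated code and score it against test cases.

def run_case(code: str, func_name: str, args: tuple, expected) -> bool:
    # Execute the generated code in a fresh namespace and call the function.
    ns: dict = {}
    try:
        exec(code, ns)
        return ns[func_name](*args) == expected
    except Exception:
        return False

def correctness_rate(code: str, func_name: str, cases) -> float:
    # Fraction of hidden test cases the generated function passes.
    passed = sum(run_case(code, func_name, args, expected) for args, expected in cases)
    return passed / len(cases)
```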
Here's what surprised me:
OSS models like MiniMax-M2 and Kimi K2 performed on par with the likes of Gemini 3 and Claude Sonnet 4.5 on most tasks.
But in practice, MiniMax-M2 turns out to be the winner: it's twice as fast and 12x cheaper than models like Sonnet 4.5.
Well, this isn't just about saving money.
When your model is smaller and faster, you can deploy it in places closed-source APIs can't reach:
↳ Real-time applications that need sub-second responses
↳ Edge devices where latency kills user experience
↳ On-premise systems where data never leaves your infrastructure
MiniMax-M2 runs with only 10B activated parameters. That efficiency means lower latency, higher throughput, and the ability to handle interactive agents without breaking the bank.
The intelligence-to-cost ratio here changes what's possible.
You're not choosing between quality and affordability anymore. You're not sacrificing privacy for performance. The gap is closing, and in many cases, it's already closed.
If you're building anything that needs to be fast, private, or deployed at scale, it's worth taking a look at what's now available.
MiniMax-M2 is 100% open-source, free for developers right now. I have shared the link to their GitHub repo in the next tweet.
You will also find the code for the playground and evaluations I've done.
Claude Skills might be the biggest upgrade to AI agents so far!
Some say it's even bigger than MCP.
I've been testing skills for the past 3-4 days, and they're solving a problem most people don't talk about: agents just keep forgetting everything.
In this video, I'll share everything I've learned so far.
It covers:
> The core idea (skills as SOPs for agents)
> Anatomy of a skill
> Skills vs. MCP vs. Projects vs. Subagents
> Building your own skill
> Hands-on example
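The anatomy part boils down to a folder containing a SKILL.md file: YAML frontmatter (the documented `name` and `description` fields tell the agent when to load the skill) plus the SOP itself in the body. A minimal sketch — the skill content below is illustrative:

```markdown
---
name: release-notes
description: Use when the user asks to draft release notes from merged PRs.
---

# Release Notes Skill

1. Collect merged PR titles since the last tag.
2. Group them under Added / Fixed / Changed.
3. Keep each bullet under 15 words and link the PR number.
```

Supporting files (scripts, templates, references) can live alongside SKILL.md in the same folder, and the agent loads them only when the skill fires.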
Skills are an early sign of continual learning, and they could change how we work with agents forever!
Here's everything you need to know:
Skills vs. Projects vs. Subagents:
If you found it insightful, reshare with your network.
Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!