Akshay 🚀
Jul 16 · 14 tweets · 5 min read
Let's build a multi-agent content creation system (100% local):
Before we dive in, here's a quick demo of what we're building!

Tech stack:

- @motiadev as the unified backend framework
- @firecrawl_dev to scrape web content
- @ollama to serve the Deepseek-R1 LLM locally

The only AI framework you'll ever need to learn! 🚀
Here's the workflow:

- User submits URL to scrape
- Firecrawl scrapes content and converts it to markdown
- Twitter and LinkedIn agents run in parallel to generate content
- Generated content gets scheduled via Typefully

Now, let's dive into code!
Steps are the fundamental building blocks of Motia.

They consist of two main components:

1️⃣ The config object: it tells Motia how the step is triggered and how it connects to the rest of the flow.

2️⃣ The handler function: it contains the step's main logic and runs when the step is triggered.

Check this out 👇
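The screenshot doesn't carry over here, so below is a rough sketch of what a step file can look like. The field names, event topics, and handler signature are my assumptions (Motia's own types are omitted); the point is the two-part shape the thread describes: a config object plus a handler.

```typescript
// steps/example.step.ts
// A Motia step is a module that exports two things:
//   1) a config object telling Motia how the step is wired into a flow,
//   2) a handler function with the step's logic.
// Field names and event topics below are illustrative assumptions.

export const config = {
  type: 'event',                 // how the step is triggered
  name: 'example-step',          // name shown in the Workbench
  subscribes: ['some.topic'],    // events this step listens for
  emits: ['another.topic'],      // events this step can emit
  flows: ['content-creation'],   // flow(s) this step belongs to
}

export const handler = async (input: { url: string }, { emit, logger }: any) => {
  // The step's main logic runs here when a subscribed event arrives.
  logger.info('received event', { url: input.url })

  // Hand off to the next step by emitting an event.
  await emit({ topic: 'another.topic', data: { url: input.url } })
}
```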
With that understanding in mind, let's start building our content creation workflow... 👇
Step 1: Entry point (API)

We start our content generation workflow by defining an API step that takes in a URL from the user via a POST request.

Check this out 👇
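Since the image isn't included, here's a sketch of the kind of API step this could be. The route path, topic name, and response shape are assumptions on my part; the thread only specifies an API step that accepts a POSTed URL and kicks off the flow.

```typescript
// steps/01-submit-url.step.ts
// Entry point: an API step that accepts a POST request with the URL to scrape.
// Route path, topic name, and response shape are illustrative assumptions.

export const config = {
  type: 'api',                   // exposed as an HTTP endpoint
  name: 'submit-url',
  path: '/generate-content',
  method: 'POST',
  emits: ['url.submitted'],      // kicks off the rest of the flow
  flows: ['content-creation'],
}

export const handler = async (req: { body: { url?: string } }, { emit }: any) => {
  const { url } = req.body
  if (!url) {
    return { status: 400, body: { error: 'url is required' } }
  }

  // Trigger the scraping step.
  await emit({ topic: 'url.submitted', data: { url } })

  return { status: 200, body: { message: 'Content generation started', url } }
}
```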
Step 2: Web scraping

This step scrapes the article content using Firecrawl and emits an event that triggers the next step in the workflow.

Steps can be connected together in a sequence, where the output of one step becomes the input for another.

Check this out 👇
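Here's a sketch of the scraping step, assuming Firecrawl's Node SDK (`@mendable/firecrawl-js`) and the same illustrative topic names as above; the exact response shape can differ across SDK versions, so treat the `markdown` field access as an assumption.

```typescript
// steps/02-scrape-article.step.ts
// Subscribes to the submitted URL, scrapes it with Firecrawl as markdown,
// and emits the scraped content for the agents downstream.
import FirecrawlApp from '@mendable/firecrawl-js'

export const config = {
  type: 'event',
  name: 'scrape-article',
  subscribes: ['url.submitted'],
  emits: ['article.scraped'],
  flows: ['content-creation'],
}

export const handler = async (input: { url: string }, { emit, logger }: any) => {
  const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY })

  // Ask Firecrawl for a markdown rendering of the page.
  // Note: the response shape may differ across SDK versions.
  const result: any = await firecrawl.scrapeUrl(input.url, { formats: ['markdown'] })

  if (!result?.markdown) {
    logger.error('Scrape failed', { url: input.url })
    return
  }

  await emit({
    topic: 'article.scraped',
    data: { url: input.url, markdown: result.markdown },
  })
}
```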
Step 3: Content generation

The scraped content gets fed to the X and LinkedIn agents that run in parallel and generate curated posts.

We define all our prompting and AI logic in the handler that runs automatically when a step is triggered.

Check this out 👇
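Below is a sketch of one of the two agents, using the `ollama` npm client to call the locally served Deepseek-R1 model; the LinkedIn agent would look the same apart from its prompt and the topic it emits. The thread mixes languages across steps, but I'm sketching everything in TypeScript for consistency; the model tag, prompts, and topic names are assumptions.

```typescript
// steps/03-twitter-agent.step.ts
// One of the two parallel agents; the LinkedIn agent looks the same
// apart from its prompt and the topic it emits.
import ollama from 'ollama'

export const config = {
  type: 'event',
  name: 'twitter-agent',
  subscribes: ['article.scraped'],  // both agents subscribe to the same topic,
  emits: ['tweet.generated'],       // so they run in parallel
  flows: ['content-creation'],
}

export const handler = async (input: { markdown: string }, { emit }: any) => {
  // All prompting / AI logic lives in the handler.
  const response = await ollama.chat({
    model: 'deepseek-r1',           // served locally by Ollama
    messages: [
      { role: 'system', content: 'You write concise, engaging X/Twitter posts.' },
      { role: 'user', content: `Turn this article into a post:\n\n${input.markdown}` },
    ],
  })

  await emit({ topic: 'tweet.generated', data: { content: response.message.content } })
}
```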
Step 4: Scheduling

After the content is generated, we draft it in Typefully, where we can easily review our social media posts.

Motia also lets us mix and match different languages within the same workflow, which gives us great flexibility.

Check this TypeScript code 👇
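The TypeScript screenshot isn't reproduced, so here's a sketch of a scheduling step that creates a Typefully draft over plain `fetch`. The endpoint and `X-API-KEY` header follow Typefully's public drafts API as I understand it; double-check their docs before relying on this.

```typescript
// steps/04-schedule-post.step.ts
// Takes a generated post and creates a draft in Typefully for review.

export const config = {
  type: 'event',
  name: 'schedule-post',
  subscribes: ['tweet.generated'],
  emits: [],
  flows: ['content-creation'],
}

export const handler = async (input: { content: string }, { logger }: any) => {
  // Typefully's drafts endpoint and X-API-KEY header are assumptions here.
  const res = await fetch('https://api.typefully.com/v1/drafts/', {
    method: 'POST',
    headers: {
      'X-API-KEY': process.env.TYPEFULLY_API_KEY ?? '',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ content: input.content }),
  })

  if (!res.ok) {
    logger.error('Typefully draft failed', { status: res.status })
    return
  }

  logger.info('Draft created in Typefully')
}
```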
After defining our steps, we install the required dependencies with `npm install` and start the Motia Workbench with `npm run dev`.

Check this out 👇
The Motia Workbench provides an interactive UI to help build, monitor, and debug our flows.

You can also deploy to the cloud with one click! 🚀

Check this out 👇
This project is built using Motia!

Motia is a unified system where APIs, background jobs, events, and agents are just plug-and-play steps.

100% open-source. Check it out 👇
github.com/MotiaDev/motia
To summarise, here's how it works:

- User submits URL to scrape
- Firecrawl scrapes content and converts to markdown
- Twitter and LinkedIn agents run in parallel to generate content
- Generated content gets scheduled via Typefully
If you found this insightful, reshare it with your network.

Find me → @akshay_pachaar ✔️
For more insights and tutorials on LLMs, AI Agents, and Machine Learning!
