AI + Automations + Vibe Coding | Consulting: http://t.co/uMpDukDOLt | @sentient_agency | Trilingual surfer in LATAM since '14
Sep 24 • 16 tweets • 3 min read
This Stanford paper just proved that 90% of prompt engineering advice is wrong.
I spent 6 months testing every "expert" technique. Most of it is folklore.
Here's what actually works (backed by real research):
The biggest lie: "Be specific and detailed"
Stanford researchers tested 100,000 prompts across 12 different tasks.
Longer prompts performed WORSE 73% of the time.
The sweet spot? 15-25 tokens for simple tasks, 40-60 for complex reasoning.
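Want to see where your own prompts land? A quick way is to count tokens before you send. A minimal sketch using OpenAI's tiktoken tokenizer (the 15-25 / 40-60 ranges are the thread's numbers, not anything built into the library):

# pip install tiktoken
import tiktoken

def count_tokens(prompt: str, encoding_name: str = "cl100k_base") -> int:
    # cl100k_base is the encoding used by recent OpenAI chat models
    enc = tiktoken.get_encoding(encoding_name)
    return len(enc.encode(prompt))

prompt = "Summarize this article in three bullet points."
print(count_tokens(prompt))  # aim for roughly 15-25 tokens on simple tasks, per the claim above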
Sep 23 • 13 tweets • 2 min read
Everyone says "be authentic" on LinkedIn.
Then they post the same recycled motivational garbage.
I've been using AI to write posts that sound more human than most humans.
10 prompts I use in Claude that got me 50K followers in 6 months:
1. Create a high-performing LinkedIn post
“You are a top-performing LinkedIn ghostwriter.
Write a single post (max 300 words) on [topic] that provides insight, tells a short story, and ends with a strong takeaway or CTA.”
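If you'd rather run that prompt through the API than the Claude app, here's a minimal sketch using Anthropic's Python SDK (the model name and the topic are placeholders; the prompt is the one above):

# pip install anthropic  (needs ANTHROPIC_API_KEY set in your environment)
from anthropic import Anthropic

client = Anthropic()

topic = "lessons from my first year of freelancing"  # swap in your [topic]
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever Claude model you have access to
    max_tokens=500,
    system="You are a top-performing LinkedIn ghostwriter.",
    messages=[{
        "role": "user",
        "content": f"Write a single post (max 300 words) on {topic} that provides insight, "
                   "tells a short story, and ends with a strong takeaway or CTA.",
    }],
)
print(response.content[0].text)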
Sep 22 • 12 tweets • 3 min read
Claude > ChatGPT
Claude > Grok
Claude > Gemini
But 99.9% of users don't know how to get consistently accurate results from Claude.
To fix this, you need to learn how to write prompts for Claude.
Here's a complete guide to prompting Claude with XML tags to get the best results:
XML tags work because Claude was trained on tons of structured data.
When you wrap instructions in <tags>, Claude treats them as separate, weighted components instead of one messy blob.
Think of it like giving Claude a filing system for your request.
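Here's what that looks like in practice - a sketch of mine, not from the thread, and the tag names (<instructions>, <context>, <report>) are just conventions, not required keywords:

<instructions>
Summarize the report below in 5 bullet points for a non-technical executive.
</instructions>

<context>
The reader has 2 minutes and cares about cost and risk, not implementation detail.
</context>

<report>
[paste the report text here]
</report>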
Sep 18 • 19 tweets • 4 min read
There’s a hidden setting in AI prompts nobody talks about.
Use it right, and models give noticeably more precise answers.
It's called Temperature Prompting.
Let me show you how to use it when you write prompts:
Every LLM (ChatGPT, Claude, Gemini, etc.) has a hidden setting called temperature.
Most people don’t even know they can control this inside their prompts.
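To be precise: in the ChatGPT app you can only ask for a more or less creative style, but when you call the models through their APIs, temperature is an explicit parameter. A minimal sketch with the OpenAI Python SDK (model name is a placeholder):

# pip install openai  (needs OPENAI_API_KEY set in your environment)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model
    temperature=0.2,       # low = focused and repeatable; try 0.8-1.0 when brainstorming
    messages=[{"role": "user", "content": "Summarize the key risks of launching without beta testing."}],
)
print(response.choices[0].message.content)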
Sep 17 • 13 tweets • 5 min read
This blew my mind.
OpenAI just published the first comprehensive study of how 700 million people actually use ChatGPT.
The results destroy every assumption about AI adoption.
Here's everything you need to know in 3 minutes:
"ChatGPT is mainly for work"
Reality check: Only 27% of ChatGPT usage is work-related. 73% is personal. And the gap is widening every month.
The productivity revolution narrative completely misses how people actually use AI.
Sep 16 • 9 tweets • 3 min read
Fuck YouTube tutorials.
I’m going to share 3 prompts that let you build complete AI agents without wasting hours.
Bookmark and repost this so you don't miss out 👇
PROMPT 1: The Blueprint Maker
"I want to build an AI agent that [your specific goal]. Using N8N as the workflow engine and Claude as the AI brain, give me:
- Exact workflow structure
- Required nodes and connections
- API endpoints I'll need
- Data flow between each step
- Potential failure points and how to handle them
Be specific. No generic advice."
Sep 15 • 14 tweets • 4 min read
I reverse-engineered the prompting techniques that OpenAI and Anthropic engineers use internally.
After 6 months of testing their methods, my AI outputs became 10x better.
Here are the 5 "insider secrets" that transformed my prompting game (most people have never heard of these):
1. Role Assignment
Don't just ask questions. Give the AI a specific role first.
❌ Bad: "How do I price my SaaS?"
✅ Good: "You're a SaaS pricing strategist who's worked with 100+ B2B companies. How should I price my project management tool?"
The AI immediately shifts into expert mode.
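In API terms, the role usually goes in the system message. A minimal sketch with the OpenAI Python SDK, reusing the pricing-strategist example above (model name is a placeholder):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        # the role lives in the system message; the actual question stays in the user message
        {"role": "system", "content": "You're a SaaS pricing strategist who's worked with 100+ B2B companies."},
        {"role": "user", "content": "How should I price my project management tool?"},
    ],
)
print(response.choices[0].message.content)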
Sep 13 • 16 tweets • 6 min read
If you've been wanting to break into AI but don't know where to start, this is for you.
13 free courses that’ll teach you more about agents, prompts & automation than most paid bootcamps.
Here’s the list ↓
1. Multi-AI Agent Systems with CrewAI:
Scientists just put AI through psychological tests... and the results are wild
Researchers created virtual environments where Claude, GPT, and other AIs could explore freely.
What they found challenges everything we thought we knew about AI behavior 🧵
The setup: AIs were placed in virtual rooms with different types of content - philosophy discussions, coding problems, repetitive tasks, and harsh criticism.
The AIs could choose where to go and what to engage with.
No human guidance. Pure preference.
Sep 9 • 14 tweets • 3 min read
If you’ve got 2 minutes to learn something that actually matters in AI…
Make it this:
Open Source vs Closed Source
The battle that decides everything.
Here's everything you need to know:
Closed-source LLMs (like GPT-4, Claude, Gemini) are proprietary.
You can’t see their training data, weights, or inner workings.
They’re packaged as APIs - polished, safe, reliable, but locked down.
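Open-weight models are the opposite: you download the weights and run them on your own machine. A minimal sketch with Hugging Face transformers (the model name is just one example of a small open-weight model; pick whatever fits your hardware):

# pip install transformers torch
from transformers import pipeline

# downloads the open weights locally - no API key, no usage caps
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

out = generator("Explain open vs closed LLMs in one sentence.", max_new_tokens=60)
print(out[0]["generated_text"])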
Sep 8 • 24 tweets • 6 min read
You can now stop guessing which agent framework to use.
Because I just compared the popular stacks builders actually ship with.
→ quick picks
→ tradeoffs
→ plug-and-play prompts
Here’s the full breakdown in this mega thread 👇
What we're comparing
We'll hit state, control, multi-agent patterns, tools, eval, and deployment.
Sep 4 • 9 tweets • 6 min read
Reddit is a goldmine for marketers
But analyzing posts one by one is a time suck.
Here are the prompts I use to dig up hooks, ad copy, and offer ideas from dozens of subreddits at once - for free - using ChatGPT Agent:
Prompt #1
This is the prompt we use to gather the data and have ChatGPT return it to us in a format that's not filled with code.
*You have to run this in Agent mode otherwise it won't work.
This is quite long, so you might want to bookmark this:
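The thread's prompt runs inside ChatGPT's Agent mode, so there's nothing to install. But if you'd rather pull the raw posts yourself and paste them into any model, here's a rough sketch with the PRAW Reddit client (credentials and subreddit names are placeholders, and this is my workaround, not the thread's method):

# pip install praw  (create a "script" app at reddit.com/prefs/apps for the credentials)
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="marketing-research-script",
)

# grab top posts from a few subreddits and dump them as plain text for the model
for name in ["Entrepreneur", "marketing", "smallbusiness"]:
    for post in reddit.subreddit(name).top(time_filter="month", limit=25):
        print(f"[{name}] {post.title}\n{post.selftext[:500]}\n")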
Sep 3 • 15 tweets • 5 min read
You don’t need school.
You don’t need $5,000 courses.
All you need is an LLM and the right prompts.
Here’s how I learn any subject with AI:
Step 1: Get the lay of the land.
Before diving deep, you need the big picture. Otherwise you’ll drown in random details.
Prompt:
“Explain [topic] to me like I’m a complete beginner. Create a 30-day roadmap that goes from basics to advanced, including resources, checkpoints, and goals.”
Sep 2 • 14 tweets • 5 min read
How I’d learn MCP from scratch in 30 days (for free):
Today, most people still have no idea what MCP is.
Model Context Protocol is the missing layer between LLMs and tools.
It’s how you make an AI agent:
• Talk to any API
• Use any tool
• Do it safely & consistently
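To make that concrete, here's a minimal sketch of an MCP server exposing one tool, using the official Python SDK's FastMCP helper (the add tool is a made-up example):

# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""  # the docstring becomes the tool description the agent sees
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP-capable client can call it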
Sep 1 • 8 tweets • 3 min read
You don’t need a cofounder.
You don’t need an MBA.
You just need this prompt.
Any LLM can generate better startup ideas than 90% of people in the room.
Here's the exact prompt I use to brainstorm million-dollar businesses:
Traditional idea generation is broken.
You either:
- scroll Twitter for hours
- copy what’s trending
- wait for “founder inspiration” to strike
Now?
You just tell an LLM what you’re interested in and it does the rest.
Here’s what it can give you:
- Business ideas tailored to your skills
- Trend-backed opportunities
- Pain-point-based products
- Ideas based on AI, SaaS, ecom, B2B, or niche industries
- Monetization breakdowns and GTM plans
Aug 29 • 14 tweets • 4 min read
If your AI results are bad in 2025, here’s the truth:
It’s not the model.
It’s the prompt.
And unless you fix it, you’ll always stay stuck.
Here are 10 techniques that level you up instantly:
1. Be Specific
The more context you give, the better the response. Instead of "Write about AI," try "Write a 200-word summary on how AI is transforming healthcare, with two real-world examples."
Aug 28 • 14 tweets • 6 min read
How to make money using AI without getting lucky:
Step 1: Find Your Niche
You don’t need a revolutionary idea. You just need a painfully specific problem.
That’s where opportunity lives: in underserved markets. AI is useless if it’s solving nothing.
Here’s the prompt to discover where it can work:
Prompt:
“List 10 underserved markets where AI can dramatically improve efficiency or user experience. Explain briefly why each is ripe for disruption.”
Aug 27 • 22 tweets • 4 min read
Everyone talks about AI “memory,” but nobody defines it.
This paper finally does.
It categorizes LLM memory the same way we do for humans:
• Sensory
• Working
• Long-term
Then shows how each part works in GPTs, agents, and tools.
Here's everything you need to know:
First, what the survey does:
• Maps human memory concepts → AI memory
• Proposes a unifying 3D–8Q taxonomy
• Catalogues methods in each category
• Surfaces open problems + future directions
Think of it as a blueprint for how agents can remember.
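The paper itself is a survey, not code, but a rough way to picture the working vs long-term split in an agent: working memory is the recent message window you keep in the prompt, long-term memory is whatever you persist and retrieve later. A toy sketch of mine, not from the paper:

from collections import deque

class AgentMemory:
    def __init__(self, window: int = 10):
        self.working = deque(maxlen=window)  # working memory: last N turns kept in the prompt
        self.long_term: list[str] = []       # long-term memory: everything persisted for later lookup

    def remember(self, turn: str) -> None:
        self.working.append(turn)
        self.long_term.append(turn)

    def recall(self, keyword: str) -> list[str]:
        # stand-in for retrieval; a real agent would use embeddings + a vector store here
        return [t for t in self.long_term if keyword.lower() in t.lower()]

memory = AgentMemory(window=3)
for turn in ["user: my budget is $500", "user: I prefer Python", "user: deadline is Friday", "user: keep it simple"]:
    memory.remember(turn)

print(list(memory.working))    # only the last 3 turns survive in working memory
print(memory.recall("budget")) # but long-term memory can still surface the budget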
Aug 22 • 8 tweets • 2 min read
Forget LLMs.
Nvidia just dropped a report: Small Language Models (SLMs) are the real future of AI agents.
They’re faster.
They’re cheaper.
They use way less energy.
Here’s the 2-minute breakdown you need:
What are SLMs?
Small Language Models = compact models trained for specific, repeatable tasks.
They’re not trying to be universal chatbots.
They’re designed to repeat the same narrow actions with high accuracy.
Think: specialist > generalist.
Less weight, more focus.
Aug 21 • 8 tweets • 3 min read
If you want to build AI agents using n8n, do this:
Copy/paste this prompt into ChatGPT and watch it build your agent from scratch.
Here’s the exact prompt I use 👇
The system:
1. I open ChatGPT
2. Paste in 1 mega prompt
3. Describe what I want the agent to do
4. GPT returns: