The new iOS app from @runwayml featuring #Gen1 is 🔥 and now widely available!
- Turn anything into everything using your phone's camera
- Revive forgotten videos in your camera roll
- Effortlessly transfer assets to your Runway account without airdropping!
Postman's AI Agent Builder lets you turn any API (from over 100,000!) into an MCP server in seconds, no code required 🤯
Your custom MCP server, ready to use in Cursor, Windsurf, Claude Desktop, Docker, plus a lot more! 🧵↓
1/
First, start here →
You’ve got literally 100,000+ APIs to check out.
1. Mix and match any endpoints you want
2. Download your custom zip file
3. That's it!
postman.com/explore/mcp-ge…
Mind = blown.
That zip file has EVERYTHING:
↳ a readme with setup instructions
↳ your selected endpoints
↳ all the files to run your MCP server locally, on Cursor, Windsurf… even Docker!
You also get an .env file with your prefilled variables → just add your API keys! 🔥
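For a sense of the moving parts, here's a minimal hand-written MCP server using the official Python SDK (`pip install mcp`). The tool below is a placeholder, not what Postman generates:

```python
# Minimal MCP server sketch using the official Python SDK.
# The echo tool is a stand-in; a generated server would wrap
# the API endpoints you picked.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def echo(text: str) -> str:
    """Placeholder tool: echoes input instead of calling a real API."""
    return text

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; point Cursor/Claude Desktop at it
```

Clients register it by launch command (e.g. `python server.py`) in their MCP config.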
MIT and Oxford just released their $2,500 agentic AI curriculum at no cost.
15,000 people already paid for it.
Now it's on GitHub, free for everyone.
It covers patterns, orchestration, memory, coordination, and deployment.
A strong roadmap to production-ready systems. (A toy agent-loop sketch follows the chapter list below.)
Repo in 🧵 ↓
10 chapters:
Part 1. What agents are and how they differ from plain generative AI.
Part 2. The four agent types and when to use each.
Part 3. How tools work and how to build them.
Part 4. RAG vs agentic RAG and key patterns.
Part 5. What MCP is and why it matters.
Part 6. How agents plan with reasoning models.
Part 7. Memory systems and architecture choices.
Part 8. Multi-agent coordination and scaling.
Part 9. Real-world production case studies.
Part 10. Industry trends and what is coming next.
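To make chapters 1-3 concrete, here's a toy sketch of the basic tool-calling agent loop. `call_llm` is a hypothetical stand-in, hard-coded so the sketch runs; it's not code from the curriculum:

```python
# Toy agent loop: the model either requests a tool or answers.
def get_time(city: str) -> str:
    """Placeholder tool: pretend to look up local time."""
    return f"12:00 in {city}"

TOOLS = {"get_time": get_time}

def call_llm(messages: list[dict]) -> dict:
    # Hypothetical stand-in; a real client would call a model API here.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {"city": "Boston"}}
    return {"answer": f"It's {messages[-1]['content']}."}

def run_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" in reply:  # run the requested tool, feed the result back
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["answer"]
    return "Step limit reached."

print(run_agent("What time is it in Boston?"))
```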
He calls it KERNEL, and it's transformed how his entire team uses AI.
Here's the framework:
----
K - Keep it simple
Bad: 500 words of context
Good: One clear goal
Example: Instead of "I need help writing something about Redis," use "Write a technical tutorial on Redis caching"
Result: 70% less token usage, 3x faster responses
----
E - Easy to verify
Your prompt needs clear success criteria
Replace "make it engaging" with "include 3 code examples"
If you can't verify success, AI can't deliver it
My testing: 85% success rate with clear criteria vs 41% without
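Taken literally, "easy to verify" means you could check the output mechanically. A hypothetical sketch (the criteria and numbers are invented for illustration):

```python
# Sketch: the prompt's success criteria as mechanical checks.
# "3 code examples" and the word budget are illustrative numbers.
FENCE = chr(96) * 3  # three backticks, built indirectly to keep this block intact

def meets_criteria(output: str) -> bool:
    examples = output.count(FENCE) // 2        # each example opens and closes a fence
    within_budget = len(output.split()) <= 800
    return examples >= 3 and within_budget
```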
----
R - Reproducible results
Avoid temporal references ("current trends", "latest best practices")
Use specific versions and exact requirements
Same prompt should work next week, next month
94% consistency across 30 days in my tests
----
N - Narrow scope
One prompt = one goal
Don't combine code + docs + tests in one request
Split complex tasks
Single-goal prompts: 89% satisfaction vs 41% for multi-goal
----
E - Explicit constraints
Tell AI what NOT to do
"Python code" → "Python code. No external libraries. No functions over 20 lines."
Constraints reduce unwanted outputs by 91%
----
L - Logical structure
Format every prompt like:
Context (input)
Task (function)
Constraints (parameters)
Format (output)
----
Real example from my work last week:
Before KERNEL: "Help me write a script to process some data files and make them more efficient"
Result: 200 lines of generic, unusable code
After KERNEL:
Task: Python script to merge CSVs
Input: Multiple CSVs, same columns
Constraints: Pandas only, <50 lines
Output: Single merged.csv
Verify: Run on test_data/
Result: 37 lines, worked on first try
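For concreteness, a sketch of what a script meeting that prompt could look like (the author's actual 37-line result isn't shown; the paths come straight from the prompt):

```python
# Sketch satisfying the KERNEL prompt above: pandas only, well under
# 50 lines, merges same-column CSVs from test_data/ into merged.csv.
from pathlib import Path

import pandas as pd

csv_files = sorted(Path("test_data").glob("*.csv"))
merged = pd.concat((pd.read_csv(f) for f in csv_files), ignore_index=True)
merged.to_csv("merged.csv", index=False)
print(f"Merged {len(csv_files)} files, {len(merged)} rows -> merged.csv")
```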
----
Actual metrics from applying KERNEL to 1000 prompts:
First-try success: 72% → 94%
Time to useful result: -67%
Token usage: -58%
Accuracy improvement: +340%
Revisions needed: 3.2 → 0.4
----
Advanced tip from this user:
Chain multiple KERNEL prompts instead of writing complex ones.
Each prompt does one thing well and feeds into the next.
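A sketch of what that chaining might look like, with `ask` as a hypothetical placeholder for whatever model client you use:

```python
# Chaining two narrow KERNEL-style prompts: one goal each, with the
# first prompt's output feeding the second. `ask` is a hypothetical
# one-shot model call, not a real library function.
def ask(prompt: str) -> str:
    raise NotImplementedError("swap in your model client here")

# Prompt 1: one goal, write the function.
code = ask(
    "Task: Python function to deduplicate a list, preserving order.\n"
    "Constraints: stdlib only, under 10 lines.\n"
    "Format: code only, no prose."
)

# Prompt 2: one goal, test the output of prompt 1.
tests = ask(
    "Task: write pytest tests for the function below.\n"
    "Constraints: 3 cases, under 20 lines.\n"
    "Format: code only, no prose.\n\n" + code
)
```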