God of Prompt
🔑 Sharing AI Prompts, Tips & Tricks. The Biggest Collection of AI Prompts & Guides for ChatGPT, Grok, Claude & Midjourney AI → https://t.co/vwZZ2VSfsN
Nov 5 12 tweets 4 min read
Google Search is so dead ☠️

I’ve been using Perplexity AI for 6 months. It now handles every research brief, competitor scan, and content outline for me.

Here’s how I replaced Google (and half my workflow) with a single AI tool:

1. Deep Research Mode

Prompt:

“You’re my research assistant. Find the latest studies, reports, and articles on [topic]. Summarize each source with: Title | Date | Key Finding | Source link.”

→ Returns citations + structured summaries faster than any Google search.
Nov 4 7 tweets 4 min read
🚨 China just built Wikipedia's replacement and it exposes the fatal flaw in how we store ALL human knowledge.

Most scientific knowledge compresses reasoning into conclusions. You get the "what" but not the "why." This radical compression creates what researchers call the "dark matter" of knowledge: the invisible derivational chains connecting every scientific concept.

Their solution is insane: a Socrates AI agent that generates 3 million first-principles questions across 200 courses. Each question gets solved by MULTIPLE independent LLMs, then cross-validated for correctness.

The result? A verified Long Chain-of-Thought knowledge base where every concept traces back to fundamental principles.

But here's where it gets wild... they built the Brainstorm Search Engine that does "inverse knowledge search." Instead of asking "what is an Instanton," you retrieve ALL the reasoning chains that derive it: from quantum tunneling in double-well potentials to QCD vacuum structure to gravitational Hawking radiation to breakthroughs in 4D manifolds.

They call this the "dark matter" of knowledge, finally made visible.
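To make "inverse knowledge search" concrete, here's a minimal sketch in Python: an inverted index from each concept to every reasoning chain that derives it. The chains and concept names are invented for illustration, not taken from SciencePedia.

```python
# Minimal sketch of "inverse knowledge search": instead of looking up a
# concept's definition, retrieve every reasoning chain that derives it.
# Chain contents below are hypothetical illustrations.
from collections import defaultdict

# Each verified Long Chain-of-Thought links premises to a derived concept.
chains = [
    {"derives": "instanton", "steps": ["double-well potential", "quantum tunneling", "euclidean action"]},
    {"derives": "instanton", "steps": ["QCD vacuum", "topological charge"]},
    {"derives": "hawking radiation", "steps": ["event horizon", "quantum tunneling"]},
]

# Forward search: concept -> definition. Inverse search: concept -> all
# derivations that lead to it, so the "why" is retrievable, not just the "what".
inverse_index = defaultdict(list)
for chain in chains:
    inverse_index[chain["derives"]].append(chain["steps"])

for derivation in inverse_index["instanton"]:
    print(" -> ".join(derivation), "-> instanton")
```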

SciencePedia now contains 200,000 entries spanning math, physics, chemistry, biology, and engineering. Articles synthesized from these LCoT chains have 50% FEWER hallucinations and significantly higher knowledge density than a GPT-4 baseline.

The kicker? Every connection is verifiable. Every reasoning chain is checked. No more trusting Wikipedia's citations; you see the actual derivation from first principles.

This isn't just better search. It's externalizing the invisible network of reasoning that underpins all science.

The "dark matter" of human knowledge just became visible.Image The pipeline is genius.

A Planner generates problem thumbnails. A Generator expands them into specific questions with verifiable answers. Then multiple independent Solver agents (different LLMs) attack the same problem.

Only answers with consensus survive. Hallucinations get filtered automatically.
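A rough sketch of that consensus filter, assuming a hypothetical solve() helper that queries one model; the actual validation logic is surely more involved:

```python
# Toy sketch of cross-model consensus filtering, not the paper's code.
from collections import Counter

def solve(llm, question):
    raise NotImplementedError  # placeholder for a real model API call

def consensus_answer(question, solvers, min_votes=2):
    """Ask several independent LLMs; keep an answer only if enough agree."""
    answers = [solve(llm, question) for llm in solvers]
    best, votes = Counter(answers).most_common(1)[0]
    # Answers without cross-model agreement are treated as likely
    # hallucinations and dropped from the knowledge base.
    return best if votes >= min_votes else None
```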
Oct 30 12 tweets 5 min read
Holy shit... Alibaba just dropped a 30B parameter AI agent that beats GPT-4o and DeepSeek-V3 at deep research using only 3.3B active parameters.

It's called Tongyi DeepResearch and it's completely open-source.

While everyone's scaling to 600B+ parameters, Alibaba proved you can build SOTA reasoning agents by being smarter about training, not bigger.

Here's what makes this insane:

The breakthrough isn't size; it's the training paradigm.

Most AI labs do standard post-training (SFT + RL).

Alibaba added "agentic mid-training," a bridge phase that teaches the model how to think like an agent before it even learns specific tasks.

Think of it like this:

Pre-training = learning language
Agentic mid-training = learning how agents behave
Post-training = mastering specific agent tasks

This solves the alignment conflict where models try to learn agentic capabilities and user preferences simultaneously.

The data engine is fully synthetic.

Zero human annotation. Everything from PhD-level research questions to multi-hop reasoning chains is generated by AI.

They built a knowledge graph system that samples entities, injects uncertainty, and scales difficulty automatically.

20% of training samples exceed 32K tokens with 10+ tool invocations. That's superhuman complexity.
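Here's a toy sketch of the knowledge-graph idea: sample an entity, walk a few hops to scale difficulty, and the endpoint is a verifiable answer by construction. The graph contents and helper are invented placeholders, not Alibaba's pipeline:

```python
# Hypothetical sketch of synthetic question generation from a knowledge
# graph; more hops = harder question, and the answer needs no annotation.
import random

graph = {
    "Marie Curie": [("won", "Nobel Prize in Physics")],
    "Nobel Prize in Physics": [("first awarded in", "1901")],
}

def multi_hop_question(start, hops):
    entity, path = start, []
    for _ in range(hops):
        relation, entity = random.choice(graph[entity])
        path.append(relation)
    # The final entity is the ground-truth answer, known by construction.
    return f"Starting from {start}, follow: {' -> '.join(path)}?", entity

question, answer = multi_hop_question("Marie Curie", hops=2)
print(question, "| answer:", answer)
```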

The results speak for themselves:

32.9% on Humanity's Last Exam (vs 26.6% OpenAI DeepResearch)
43.4% on BrowseComp (vs 30.0% DeepSeek-V3.1)
75.0% on xbench-DeepSearch (vs 70.0% GLM-4.5)
90.6% on FRAMES (highest score)

With Heavy Mode (parallel agents + synthesis), it hits 38.3% on HLE and 58.3% on BrowseComp.

What's wild: for specific tasks, they trained on 2 H100s for 2 days at under $500.

Most AI companies burn millions scaling to 600B+ parameters.

Alibaba proved parameter efficiency + smart training >>> brute force scale.

The bigger story?

Agentic models are the future. Models that autonomously search, reason, code, and synthesize information across 128K context windows.

Tongyi DeepResearch just showed the entire industry they're overcomplicating it.

Full paper: arxiv.org/abs/2510.24701
GitHub: github.com/Alibaba-NLP/DeepResearch

The architecture is beautifully simple.

It's vanilla ReAct (reasoning + acting) with context management to prevent memory overflow.

No complex multi-agent orchestration. No rigid prompt engineering.

Just pure scalable computation, exactly what "The Bitter Lesson" predicted would win.
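For the curious, a bare-bones sketch of that vanilla ReAct loop with naive context trimming; call_model and run_tool are hypothetical stand-ins, not Tongyi's actual interfaces:

```python
# Minimal ReAct (reason + act) loop with crude context management.
MAX_CONTEXT_CHARS = 100_000  # crude stand-in for a 128K-token window

def call_model(context):
    raise NotImplementedError  # placeholder for an LLM call

def run_tool(action):
    raise NotImplementedError  # placeholder for search/code/browse tools

def react_loop(task, max_steps=20):
    context = f"Task: {task}\n"
    for _ in range(max_steps):
        step = call_model(context)          # model emits thought + action
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        observation = run_tool(step)        # act, then observe
        context += f"{step}\nObservation: {observation}\n"
        # Context management: drop the oldest turns instead of overflowing.
        if len(context) > MAX_CONTEXT_CHARS:
            context = context[-MAX_CONTEXT_CHARS:]
    return None
```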
Oct 29 4 tweets 3 min read
deepmind just published something wild 🤯

they built an AI that discovers its own reinforcement learning algorithms.

not hyperparameter tuning.

not tweaking existing methods.

discovering ENTIRELY NEW learning rules from scratch.

and the algorithms it found were better than what humans designed.

here's what they did:

• created a meta-learning system that searches the space of possible RL algorithms
• let it explore millions of algorithmic variants automatically
• tested each on diverse tasks and environments
• kept the ones that worked, evolved them further
• discovered novel algorithms that outperform state-of-the-art human designs like DQN and PPO
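in pseudocode terms, the loop those bullets describe looks something like this toy evolutionary sketch; the rule space and evaluate() are invented placeholders, not deepmind's actual method:

```python
# Toy sketch of searching the space of RL update rules.
import random

def random_rule():
    # A "rule" here is just a coefficient vector weighting candidate
    # update terms (TD error, reward, entropy bonus, ...).
    return [random.uniform(-1, 1) for _ in range(4)]

def mutate(rule):
    return [w + random.gauss(0, 0.1) for w in rule]

def evaluate(rule):
    raise NotImplementedError  # train agents with this rule; return mean return

def discover(generations=100, population=50, keep=10):
    rules = [random_rule() for _ in range(population)]
    for _ in range(generations):
        # Keep the rules that train agents best, then evolve them further.
        scored = sorted(rules, key=evaluate, reverse=True)[:keep]
        rules = scored + [mutate(random.choice(scored))
                          for _ in range(population - keep)]
    return rules[0]
```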

the system found learning rules humans never thought of. update mechanisms with weird combinations of terms that shouldn't work but do.

credit assignment strategies that violate conventional RL wisdom but perform better empirically.

the discovered algorithms generalize across different tasks. they're not overfit to one benchmark.

they work like principled learning rules should, and they're interpretable enough to understand WHY they work.

we are discovering the fundamental math of how agents should learn.

led by david silver (alphago, alphazero creator). published in nature. fully reproducible.

the meta breakthrough:
we now have AI systems that can improve the way AI systems learn.

the thing everyone theorized about? it's here.

why this breaks everything:

RL progress has been bottlenecked by human intuition.

researchers have insights, try variations, publish.

it takes years to go from Q-learning to DQN to PPO.

now you just let the machine search directly.

millions of variants in weeks instead of decades of human research.

but here's the compounding part:
each better learning algorithm can be used to discover even better ones.

you get recursive improvement in the narrow domain of how AI learns.

humans took 30+ years to get from basic Q-learning to modern deep RL.

an automated system can explore that space and find non-obvious improvements humans would never stumble on.

this is how you get to superhuman algorithm design.

not by making humans smarter, but by removing humans from the discovery loop entirely.

when david silver's lab publishes in nature about "machines discovering learning algorithms for themselves," you pay attention. this is the bootstrap beginning.

paper:
nature.com/articles/s4158…
Oct 21 7 tweets 3 min read
🚨 Academia just got an upgrade.

A new paper called Paper2Web might have just killed the static PDF forever.

It turns research papers into interactive websites, complete with animations, videos, and embedded code, using an AI agent called PWAgent.

Here’s why it’s a big deal:

• 10,700 papers analyzed to build the first dataset + benchmark for academic webpages.
• Evaluates sites on connectivity, completeness, and interactivity (even runs a “PaperQuiz” to test knowledge retention).
• Outperforms arXiv HTML and alphaXiv by 28%+ in structure and usability.

Essentially, it lets you publish living papers where readers can explore, interact, and even quiz themselves.

The PDF era is ending.

Your next research paper might talk back.

github.com/YuhangChen1/Paper2All

Today, most "HTML paper" attempts fail because they just convert text, not meaning.

Paper2Web fixes that.

It built the first dataset of 10,700 paper–website pairs across top AI conferences to actually learn what makes research websites effective.

It's not just tech; it's an entire academic web design benchmark.
Oct 20 8 tweets 4 min read
🚨 DeepSeek just did something wild.

They built an OCR system that compresses long text into vision tokens, literally turning paragraphs into pixels.

Their model, DeepSeek-OCR, achieves 97% decoding precision at 10× compression and still manages 60% accuracy even at 20×. That means one image can represent entire documents using a fraction of the tokens an LLM would need.

Even crazier? It beats GOT-OCR2.0 and MinerU2.0 while using up to 60× fewer tokens and can process 200K+ pages/day on a single A100.

This could solve one of AI’s biggest problems: long-context inefficiency.
Instead of paying more for longer sequences, models might soon see text instead of reading it.

The future of context compression might not be textual at all.
It might be optical 👁️

github.com/deepseek-ai/DeepSeek-OCR

1. Vision-Text Compression: The Core Idea

LLMs struggle with long documents because attention compute scales quadratically with sequence length.

DeepSeek-OCR flips that: instead of reading text, it encodes full documents as vision tokens, each one representing a compressed patch of visual information.

Result: You can fit 10 pages' worth of text into the same token budget it takes to process 1 page in GPT-4.
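Back-of-the-envelope, with an assumed 800 text tokens per page (illustrative, not DeepSeek's measured figure):

```python
# Token-budget arithmetic for vision-text compression; page and budget
# sizes are assumptions for illustration.
text_tokens_per_page = 800          # assumed plain-text tokenization
compression = 10                    # ratio at which decoding stays ~97%
vision_tokens_per_page = text_tokens_per_page / compression

budget = 8_000                      # an arbitrary context budget
pages_as_text = budget / text_tokens_per_page      # reading text
pages_as_vision = budget / vision_tokens_per_page  # "seeing" text
print(pages_as_text, pages_as_vision)  # 10.0 100.0
```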
Oct 19 8 tweets 7 min read
everyone's arguing about whether ChatGPT or Claude is "smarter."

nobody noticed Anthropic just dropped something that makes the model debate irrelevant.

it's called Skills. and it's the first AI feature that actually solves the problem everyone complains about:

"why do I have to explain the same thing to AI every single time?"

here's what's different:

- you know how you've explained your brand guidelines to ChatGPT 47 times?
- or how you keep telling it "structure reports like this" over and over?
- or how every new chat means re-uploading context and re-explaining your process?

Skills ends that cycle.

you teach Claude your workflow once.

it applies it automatically. everywhere. forever.

but the real story isn't memory. it's how this changes what's possible with AI at work.

here's the technical unlock that makes this actually work:

Skills use "progressive disclosure" instead of dumping everything into context.

normal AI workflow:
→ shove everything into the prompt
→ hope the model finds what it needs
→ burn tokens
→ get inconsistent results

Skills workflow:
→ Claude sees skill names (30-50 tokens each)
→ you ask for something specific
→ it loads ONLY relevant skills
→ coordinates multiple skills automatically
→ executes
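a rough sketch of what progressive disclosure could look like under the hood; the registry and keyword matching are invented stand-ins, not Anthropic's implementation:

```python
# Toy sketch of progressive disclosure: names are always visible,
# bodies load on demand. Paths and matching logic are hypothetical.
skills = {
    "brand-guidelines": "skills/brand.md",          # full body stays on disk
    "financial-reporting": "skills/finance.md",
    "presentation-formatting": "skills/slides.md",
}

def relevant_skills(request):
    # The model initially sees only skill *names* (a few dozen tokens each)
    # and picks the relevant ones; keyword overlap is a crude stand-in
    # for the model's judgment.
    return [name for name in skills
            if any(word in request.lower() for word in name.split("-"))]

def load_context(request):
    chosen = relevant_skills(request)
    # Only the chosen skills' full instructions enter the context window.
    return {name: open(skills[name]).read() for name in chosen}
```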

example: you ask for a quarterly investor deck

Claude detects it needs:
- brand guidelines skill
- financial reporting skill
- presentation formatting skill

loads all three. coordinates them. outputs a deck that's on-brand, accurate, and properly formatted.

you didn't specify which skills to use.
you didn't explain how they work together.
Claude figured it out.

this is why it scales where prompting doesn't.
Oct 17 8 tweets 3 min read
Holy shit... Meta just cracked the art of scaling RL for LLMs.

For the first time ever, they showed that "reinforcement learning follows predictable scaling laws" just like pretraining.

Their new framework, ScaleRL, fits a sigmoid compute-performance curve that can forecast results from early training.

No more wasting 100k GPU hours to see if a method works; you can predict it upfront.

They trained across 400,000 GPU hours, tested every major RL recipe (GRPO, DAPO, Magistral, Minimax), and found the hidden truth:

> Some RL methods scale beautifully. Others hit a hard ceiling, no matter the compute.

ScaleRL nails both stability and predictability even at 100,000 GPU-hours.

We finally have scaling laws for RL.

This is how post-training becomes a science, not an experiment.

Read full 🧵

Today, everyone talks about scaling models.

But Meta just proved we’ve been ignoring the harder problem: scaling reinforcement learning compute.

Turns out, most RL methods don’t scale like pretraining.

They plateau early, burning millions in compute for almost no gain.

ScaleRL is the first recipe that doesn’t.
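The core mechanic is easy to sketch: fit a sigmoid to early compute-vs-performance points, then extrapolate. The data points below are fabricated purely to show the shape of the method, not Meta's numbers:

```python
# Fit a sigmoid compute-performance curve to early runs, then forecast.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(c, a, b, c50):
    # a = performance ceiling, b = slope, c50 = compute at half the ceiling
    return a / (1 + np.exp(-b * (np.log(c) - np.log(c50))))

compute = np.array([1e2, 3e2, 1e3, 3e3, 1e4])     # GPU-hours (early runs)
score = np.array([0.22, 0.30, 0.41, 0.49, 0.54])  # eval performance (made up)

(a, b, c50), _ = curve_fit(sigmoid, compute, score, p0=[0.6, 1.0, 1e3])
print(f"predicted ceiling: {a:.2f}")
print(f"forecast at 100k GPU-hours: {sigmoid(1e5, a, b, c50):.2f}")
```

A method whose fitted ceiling is low gets killed early, before it burns the other 90k GPU-hours.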
Oct 9 4 tweets 2 min read
Forget boring websites.

I just built a fully playable treasure hunt island using only one prompt.

Watch how Readdy turned an idea into a full game:

Every part of the island is clickable: beach, caves, shipwreck, even volcanoes.

The Readdy Agent acts as your pirate NPC:

“Ahoy! You found a golden coin!”
“Nothing here, matey try the palm tree!”

It reacts, jokes, and collects leads like a pro.

It’s not just for fun.

Readdy can turn games into growth tools.

Your site can:

- Collect emails
- Chat with visitors in real time
- Schedule calls or demos

All from inside a game-like world.
Oct 9 11 tweets 4 min read
R.I.P Harvard MBA.

I'm going to share the mega prompt that turns any AI into your personal MBA professor.

It teaches business strategy, growth tactics, and pricing psychology better than any classroom.

Here's the mega prompt you can copy & paste in any LLM ↓

Today, most business education is outdated the moment you learn it.

Markets shift. Competition evolves. Customer behavior changes weekly.

Traditional MBA programs can't keep up. They teach case studies from 2015 while you're building in 2025.

This prompt fixes that.
Oct 6 10 tweets 4 min read
This is fucking brilliant.

Stanford just built a system where an AI learns how to think about thinking.

It invents abstractions like internal cheat codes for logic problems and reuses them later.

They call it RLAD.

Here's the full breakdown:

The idea is brutally simple:

Instead of making LLMs extend their chain-of-thought endlessly,
make them summarize what worked and what didn’t across attempts,
then reason using those summaries.

They call those summaries reasoning abstractions.

Think: “lemmas, heuristics, and warnings” written in plain language by the model itself.
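A loose sketch of that loop, with ask_llm as a hypothetical model-call helper, not Stanford's code:

```python
# Toy sketch of the RLAD-style loop: attempt, distill an abstraction,
# condition the next attempt on the accumulated abstractions.
def ask_llm(prompt):
    raise NotImplementedError  # placeholder for a real LLM call

def solve_with_abstractions(problem, attempts=4):
    abstractions = []  # lemmas, heuristics, warnings in plain language
    answer = None
    for _ in range(attempts):
        hints = "\n".join(abstractions)
        answer = ask_llm(f"Known abstractions:\n{hints}\n\nSolve: {problem}")
        # Distill the attempt into a reusable abstraction instead of
        # just growing a longer chain-of-thought.
        abstractions.append(
            ask_llm(f"Summarize what worked or failed in:\n{answer}"))
    return answer
```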
Oct 5 5 tweets 2 min read
Everyone’s chasing “magic prompts.”

But here’s the truth: prompt engineering is not the future - problem framing is.

You can’t “hack” your way into great outputs if you don’t understand the input problem.
The smartest AI teams don’t ask “what’s the best prompt?” - they ask “what exactly are we solving?”

Before typing anything into ChatGPT, do this:

1️⃣ Define the goal - what outcome do you actually want?
2️⃣ Map constraints - time, data, resources, accuracy.
3️⃣ Identify levers - what can you change, what can’t you?
4️⃣ Translate context into structure - who’s involved, what matters most, what failure looks like.
5️⃣ Then prompt - not for an answer, but for exploration.

AI isn’t a genie. It’s a mirror for your thinking.
If your question is shallow, your output will be too.

The best “prompt engineers” aren’t writers - they’re problem architects.

They understand psychology, systems, and tradeoffs.

Their secret isn’t phrasing - it’s clarity.
Prompting is the last step, not the first.
Oct 4 13 tweets 5 min read
Anthropic's internal prompting style is completely different from what most people teach.

I spent 3 weeks analyzing their official prompt library, documentation, and API examples.

Here's every secret I extracted 👇

First discovery: they're obsessed with XML tags.

Not markdown. Not JSON formatting. XML.

Why? Because Claude was trained to recognize structure through tags, not just content.

Look at how Anthropic writes prompts vs how everyone else does it:

Everyone else:

You are a legal analyst. Analyze this contract and identify risks.

Anthropic's way:

<role>Legal analyst with 15 years of M&A experience</role>

<task>Analyze the following contract for potential legal risks</task>

<instructions>
- Focus on liability clauses
- Flag ambiguous termination language
- Note jurisdiction conflicts
</instructions>

The difference? Claude can parse the structure before processing content. It knows exactly what each piece of information represents.
Sep 29 12 tweets 3 min read
Matthew McConaughey just asked for something on Joe Rogan that most people don't know they can already do.

He wants an AI trained on his books, interests, and everything he cares about.

Here's how to build your own personal AI using ChatGPT or Claude:

ChatGPT: Use Custom GPTs

Go to ChatGPT, click "Explore GPTs," then "Create."

Upload your files: PDFs of books you've read, notes, blog posts you've saved, journal entries, anything text-based.

Give it instructions like: "You are my personal knowledge assistant. Answer questions using only the uploaded materials and my worldview."
Sep 28 11 tweets 3 min read
If you're new to n8n, this post will save your business.

Every tutorial skips the part where costs spiral out of control.

I've built 30+ AI agents with n8n and tracked every dollar spent.

Here's the brutal truth about costs that nobody talks about:

1. The hidden cost killer: API calls.

Your "simple" customer service agent makes 15+ API calls per conversation:

3 calls to check context
4 calls for intent classification
5 calls for response generation
3 calls for follow-up logic

At $0.002 per call, that's $0.03 per conversation. Sounds cheap until you hit 10k conversations.
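Spelled out, using the thread's own call counts and rate:

```python
# Cost model for the agent described above; the counts and per-call
# rate come from the thread, the volumes are illustrative.
calls_per_conversation = 3 + 4 + 5 + 3  # context + intent + response + follow-up
cost_per_call = 0.002                   # dollars
cost_per_conversation = calls_per_conversation * cost_per_call  # $0.03

for conversations in (1_000, 10_000, 100_000):
    monthly = conversations * cost_per_conversation
    print(f"{conversations:>7} conversations -> ${monthly:,.0f}/mo")
# 10k conversations is already $300/month for a "simple" agent.
```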
Sep 26 15 tweets 3 min read
Fuck it.

I'm sharing the JSON prompting secrets that saved me from 6 months of broken AI agents.

Most developers are building agents that crash because they can't write a proper JSON prompt.

Here's everything I learned from debugging 500+ agent failures:

1. The Golden Rule of JSON Prompting:

Never assume the model knows what you want.

Bad prompt:

```
"Return a JSON with user info"
```

Good prompt:

```
Return a JSON object with exactly these fields:
{
  "name": "string - full name",
  "email": "string - valid email address",
  "age": "number - integer between 18-100"
}
```

Specificity kills ambiguity.
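And enforce the contract on your side too. A minimal sketch of validating the model's reply before it touches the rest of the agent:

```python
# Validate the model's JSON reply against the contract from the prompt.
import json

REQUIRED = {"name": str, "email": str, "age": int}

def parse_user_info(reply: str) -> dict:
    data = json.loads(reply)  # raises on malformed JSON
    for field, expected in REQUIRED.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"bad or missing field: {field}")
    if not 18 <= data["age"] <= 100:
        raise ValueError("age out of range")
    return data
```

Fail loudly at the boundary and the agent stops crashing three steps downstream.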
Sep 24 9 tweets 3 min read
Forget Zapier. Forget Notion. Forget Airtable.

The “AI OS” era is here.

Cockpit AI just launched and it’s changing how startups operate.

Imagine automating $200K worth of business intelligence with a single AI OS.

Here’s how it works:

Many companies have a big problem:

- Information is scattered across different tools
- Workers spend most of their time building reports by hand
- Leaders make decisions on stale information

OnCockpit fixes this with a single AI operating system.

oncockpit.ai
Sep 23 9 tweets 3 min read
RIP Tableau.
RIP Excel dashboards.
RIP Canva charts.

Meet Bricks: The AI tool that turns raw data into dashboards in seconds.

Here’s how (it’s literally mind-blowing) 👇

Bricks is your AI data analyst.

Upload your data → get a beautiful, interactive dashboard instantly.

✅ Charts
✅ KPIs
✅ Analysis
✅ Themes
✅ Exports

No more hours formatting in Excel, PowerBI, or Canva.

thebricks.com
Sep 23 7 tweets 3 min read
Google just dropped a 64-page guide on AI agents that's basically a reality check for everyone building agents right now.

The brutal truth: most agent projects will fail in production. Not because the models aren't good enough, but because nobody's doing the unsexy operational work that actually matters.

While startups are shipping agent demos and "autonomous workflows," Google is introducing AgentOps - their version of MLOps for agents. It's an admission that the current "wire up some prompts and ship it" approach is fundamentally broken.

The guide breaks down agent evaluation into four layers most builders ignore:

- Component testing for deterministic parts
- Trajectory evaluation for reasoning processes
- Outcome evaluation for semantic correctness
- System monitoring for production performance
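To make the first two layers concrete, here's a toy illustration; the trajectory structure is invented, not from Google's ADK:

```python
# Toy versions of layer 1 (component tests) and layer 2 (trajectory
# evaluation) for a recorded agent run.
def test_component(tool_fn):
    # Layer 1: deterministic parts get plain unit tests,
    # e.g. a calculator tool.
    assert tool_fn("2+2") == "4"

def evaluate_trajectory(trajectory, allowed_tools, max_steps=10):
    # Layer 2: judge the *reasoning process*, not just the final answer.
    assert len(trajectory) <= max_steps, "agent looped too long"
    for step in trajectory:
        assert step["tool"] in allowed_tools, f"unexpected tool: {step['tool']}"
        assert step["observation"] is not None, "tool call returned nothing"
```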

Most "AI agents" I see barely handle layer one. They're expensive chatbots with function calling, not robust systems.

Google's Agent Development Kit (ADK) comes with full DevOps infrastructure out of the box. Terraform configs, CI/CD pipelines, monitoring dashboards, evaluation frameworks. It's the antithesis of the "move fast and break things" mentality dominating AI development.

The technical depth is solid. Sequential agents for linear workflows, parallel agents for independent tasks, loop agents for iterative processes. These patterns matter when building actual business automation, not just demos.

But there's a gap between Google's enterprise vision and startup reality. Most founders don't need "globally distributed agent fleets with ACID compliance." They need agents that handle customer support without hallucinating.

The security section is sobering. These agents give LLMs access to internal APIs and databases. The attack surface is enormous, and most teams treat security as an afterthought.

Google's strategic bet: the current wave of agent experimentation will create demand for serious infrastructure. They're positioning as the grown-up choice when startups realize their prototypes can't scale.

The real insight isn't technical - it's that if you're building agents without thinking about evaluation frameworks, observability, and operational reliability, you're building toys, not tools.

The agent economy everyone's predicting will only happen when we stop treating agents like chatbots with extra steps and start building them like the distributed systems they actually are.

The guide reveals Google's three-path strategy for agent development.

Most teams are randomly picking tools without understanding these architectural choices.
Sep 22 13 tweets 3 min read
You don’t need VC money.

You don’t need to code.

You just need this toolkit... and a weekend.

Follow the 🧵:

1/ This is a rare window - and it won't last.

Right now, tech is ahead of adoption.

You can build powerful tools without knowing how to code.

Soon, the crowd catches up. This is your early mover edge.
Sep 22 12 tweets 4 min read
Google just dropped the biggest Chrome update in history.

And 99% of people have no idea it happened.

10 new AI features that turn your browser into an intelligent assistant:

1/ Gemini is now built directly into Chrome. Ask it to explain complex information on any webpage. Compare data across multiple tabs. Summarize research from different sources.

No more copy-pasting between ChatGPT and your browser.