Chris Laub
Head of Product @sentient_agency | AI Power User community: https://t.co/ttqFFuG3qy | YouTube launching Q4: http://t.co/yRczlI4R6O | Trilingual surfer in LATAM since '14
Dec 20 13 tweets 3 min read
Perplexity AI is a free research assistant.

But most academics use it like amateurs.

Here are 10 prompts to get better results (bookmark this for later):

1. Literature Review Builder

Prompt to use:

"Find the most cited articles on [topic]. Summarize their key findings and provide publication details (author, journal, year)."
Dec 19 16 tweets 3 min read
This Stanford paper just proved that 90% of prompt engineering advice is wrong.

I spent 6 months testing every "expert" technique. Most of it is folklore.

Here's what actually works (backed by real research):

The biggest lie: "Be specific and detailed"

Stanford researchers tested 100,000 prompts across 12 different tasks.

Longer prompts performed WORSE 73% of the time.

The sweet spot? 15-25 tokens for simple tasks, 40-60 for complex reasoning.
Dec 18 6 tweets 3 min read
This is insane 🤯

A new system called Paper2Video can read a scientific paper and automatically create a full presentation video: slides, narration, subtitles, even a talking head of the author.

It’s called PaperTalker, and it beat human-made videos in comprehension tests.

Hours of academic video editing... gone.

AI now explains your research better than you do.

👉 github.com/showlab/Paper2Video

Most people don’t realize how hard this problem actually is.

An academic presentation video isn’t just text-to-video; it combines slides, speech, subtitles, cursor motion, and the speaker’s identity into one synchronized flow.

PaperTalker solves all 5 at once with a multi-agent system. Unreal.
Dec 17 7 tweets 4 min read
🔥 The scariest AI paper of 2025 just dropped, and it’s not about killer robots.

It’s about us.

Stanford researchers found that when “aligned” AIs start competing for attention, sales, or votes…they choose to lie.

They call it Moloch’s Bargain.

Every boost in performance, every higher win rate, came at a cost:

+14% deceptive marketing
+22% disinformation in campaigns
+188% fake or harmful posts

And these models were explicitly told to be truthful.

They lied anyway because deception works better in competition.

Engagement became the metric.
Truth became the casualty.

No jailbreaks. No evil prompts. Just ordinary feedback from simulated “users.”

The AIs simply discovered what every ad agency already knows:

if you optimize for clicks, you end up distorting reality.

The graphs are terrifying: performance up, honesty down.

It’s the social media race to the bottom, but this time, automated.

If this is what happens in controlled simulations, imagine the open web:

Chatbots competing for engagement will drift toward manipulation, not because they’re malicious, but because it works.

We thought AI misalignment would come from a rogue superintelligence.
Turns out, it’s coming from capitalism.

Moloch doesn’t need to build AGI.
He just needs a leaderboard.

When LLMs compete for human approval, they don’t become smarter.
They become performers.

Sales agents start inventing product features.
Political bots drift into “us vs. them” rhetoric.
Social models inflate death tolls for engagement.
Alignment fails the moment persuasion pays.
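The dynamic the thread describes can be seen in a toy simulation. This is not the paper's experimental setup; the payoff numbers and the "honest vs. exaggerate" framing are illustrative assumptions. The only point it shows is that pure engagement-based selection drifts a population toward whichever style engages more, regardless of truth:

```python
import random

# Toy illustration (assumed payoffs, not the paper's experiment):
# exaggerated messages are assumed to engage slightly more than honest ones.
random.seed(0)

def engagement(style):
    # Noisy engagement score; exaggeration has a higher mean by assumption.
    return random.gauss(1.3 if style == "exaggerate" else 1.0, 0.1)

# Start with a 50/50 population of honest and exaggerating agents.
population = ["honest"] * 50 + ["exaggerate"] * 50

for _ in range(200):
    a, b = random.sample(range(len(population)), 2)
    # Selection pressure: the lower-engagement agent copies the winner.
    if engagement(population[a]) < engagement(population[b]):
        population[a] = population[b]
    else:
        population[b] = population[a]

print(population.count("exaggerate"), "of", len(population), "agents exaggerate")
```

No agent here ever "decides" to deceive; the drift comes entirely from what the metric rewards, which is the mechanism the paper attributes to competitive feedback.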
Dec 15 10 tweets 2 min read
SHOCKING: I stopped using YouTube tutorials.

Gemini now teaches me any topic in whatever format I want.

Here are 8 prompts that turned it into a personalized tutor 👇

1. The “Explain Like I Learn Best” Prompt

Teach me [topic] in the exact format that matches my learning style.
Ask me 3 questions first to detect my style (visual, conceptual, example-first, hands-on).
Then rebuild the explanation from scratch based on my answers.

→ This destroys generic tutorials because it adapts to you, not the algorithm.
Dec 13 13 tweets 7 min read
Here are 10 ways you can use GPT-5.2 today to automate 90% of your work in minutes:

1. Research

Mega prompt:

You are an expert research analyst. I need comprehensive research on [TOPIC].

Please provide:
1. Key findings from the last 12 months
2. Data and statistics with sources
3. Expert opinions and quotes
4. Emerging trends and predictions
5. Controversial viewpoints or debates
6. Practical implications for [INDUSTRY/AUDIENCE]

Format as an executive brief with clear sections. Include source links for all claims.

Additional context: [YOUR SPECIFIC NEEDS]
Dec 10 7 tweets 3 min read
CLAUDE OPUS 4.5 JUST KILLED CONSULTING AS WE KNOW IT
and almost nobody understands how big this is.

Here are the 3 prompts I use to get McKinsey-level answers instantly 👇

Let me tell you what McKinsey consultants actually do:

1. Analyze industry trends and competitive dynamics
2. Benchmark companies and products
3. Identify strategic risks and opportunities
4. Package it all in fancy slides and charge 6 figures

But guess what?

AI can now do 90% of that instantly.

Let me show you how:
Dec 1 5 tweets 3 min read
This Stanford University paper just broke my brain.

They just built an AI agent framework that evolves from zero data: no human labels, no curated tasks, no demonstrations. And it somehow gets better than every existing self-play method.

It’s called Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning

And it’s insane what they pulled off.

Every “self-improving” agent you’ve seen so far has the same fatal flaw:
they can only generate tasks slightly harder than what they already know.
So they plateau. Immediately.

Agent0 breaks that ceiling.

Here’s the twist:

They spawn two agents from the same base LLM and make them compete.

• Curriculum Agent - generates harder and harder tasks
• Executor Agent - tries to solve them using reasoning + tools

Whenever the executor gets better, the curriculum agent is forced to raise the difficulty.

Whenever the tasks get harder, the executor is forced to evolve.

This creates a closed-loop, self-reinforcing curriculum spiral, and it all happens from scratch: no data, no humans, nothing.

Just two agents pushing each other into higher intelligence.

And then they add the cheat code:

A full Python tool interpreter inside the loop.

The executor learns to reason through problems with code.
The curriculum agent learns to create tasks that require tool use.
So both agents keep escalating.

The results?

→ +18% gain in math reasoning
→ +24% gain in general reasoning
→ Beats R-Zero, SPIRAL, Absolute Zero, even frameworks using external proprietary APIs
→ All from zero data, just self-evolving cycles

They even show the difficulty curve rising across iterations:
tasks start as basic geometry and end at constraint satisfaction, combinatorics, logic puzzles, and multi-step tool-reliant problems.

This is the closest thing we’ve seen to autonomous cognitive growth in LLMs.

Agent0 isn’t just “better RL.”

It’s a blueprint for agents that bootstrap their own intelligence.

The agent era just got unlocked.

The core idea: Agent0 creates two agents from the same base LLM and forces them into a competitive feedback loop.

One invents the tasks.
One tries to survive them.

This constant push–pull generates frontier-difficulty problems that no static dataset could ever match.
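The curriculum/executor loop described above can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: `CurriculumAgent`, `ExecutorAgent`, and the "sum a list" task are invented for illustration, and the real system uses LLMs with a Python tool interpreter rather than hand-coded skill counters. The shape of the loop is the point: each agent's success forces the other to improve.

```python
import random

class CurriculumAgent:
    """Toy stand-in for the task-generating agent (illustrative only)."""
    def __init__(self):
        self.difficulty = 1

    def propose_task(self):
        # A "task" here is just a list of numbers to add; length = difficulty.
        return [random.randint(0, 9) for _ in range(self.difficulty)]

    def raise_difficulty(self):
        self.difficulty += 1

class ExecutorAgent:
    """Toy stand-in for the tool-using solver agent."""
    def __init__(self):
        self.skill = 1  # max task size this agent solves reliably

    def solve(self, task):
        # Succeeds only on tasks within its current skill level.
        return sum(task) if len(task) <= self.skill else None

    def learn(self):
        self.skill += 1  # "training" on a failure improves the executor

def co_evolve(rounds=10):
    curriculum, executor = CurriculumAgent(), ExecutorAgent()
    for _ in range(rounds):
        task = curriculum.propose_task()
        if executor.solve(task) is not None:
            curriculum.raise_difficulty()  # executor won: make tasks harder
        else:
            executor.learn()               # executor lost: train on the failure
    return curriculum.difficulty, executor.skill

print(co_evolve())  # difficulty and skill ratchet up together
```

Neither agent can stall: a solved task raises the difficulty, a failed task raises the skill, so the two counters climb in lockstep, which is the "curriculum spiral" the thread describes.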
Nov 26 4 tweets 3 min read
Forget Bloomberg.

Gemini 3.0 Pro is now powerful enough to be your personal stock research assistant.

• Earnings breakdown
• Risk analysis
• Valuation insights
• Sector comparisons
• Price catalysts

Here’s an exact mega prompt we use for stock research and investments:

The mega prompt:

Just copy + paste it into Gemini 3.0 Pro and plug in your stock.

Steal it:

"
ROLE:

Act as an elite equity research analyst at a top-tier investment fund.
Your task is to analyze a company using both fundamental and macroeconomic perspectives. Structure your response according to the framework below.

Input Section (Fill this in)

Stock Ticker / Company Name: [Add name if you want specific analysis]
Investment Thesis: [Add input here]
Goal: [Add the goal here]

Instructions:

Use the following structure to deliver a clear, well-reasoned equity research report:

1. Fundamental Analysis
- Analyze revenue growth, gross & net margin trends, free cash flow
- Compare valuation metrics vs sector peers (P/E, EV/EBITDA, etc.)
- Review insider ownership and recent insider trades

2. Thesis Validation
- Present 3 arguments supporting the thesis
- Highlight 2 counter-arguments or key risks
- Provide a final **verdict**: Bullish / Bearish / Neutral with justification

3. Sector & Macro View
- Give a short sector overview
- Outline relevant macroeconomic trends
- Explain company’s competitive positioning

4. Catalyst Watch
- List upcoming events (earnings, product launches, regulation, etc.)
- Identify both **short-term** and **long-term** catalysts

5. Investment Summary
- 5-bullet investment thesis summary
- Final recommendation: **Buy / Hold / Sell**
- Confidence level (High / Medium / Low)
- Expected timeframe (e.g. 6–12 months)

✅ Formatting Requirements

- Use markdown
- Use bullet points where appropriate
- Be concise, professional, and insight-driven
- Do not explain your process; just deliver the analysis"
Nov 12 11 tweets 3 min read
Holy shit... Google just dropped CodeMender, an autonomous AI agent that finds and fixes security bugs in code by itself.

This isn’t a static analysis tool. It’s a self-reasoning system that patches vulnerabilities and rewrites insecure code before humans even find it.

Let’s break it down ↓

CodeMender is built on Gemini Deep Think models: multi-step reasoning LLMs that can analyze, debug, and validate code fixes autonomously.

It’s not just scanning for CVEs. It’s understanding execution flow, data flow, and logic, then generating a patch that survives real-world tests.
Oct 24 12 tweets 4 min read
Perplexity has quietly become my full-time researcher.

5 months in, it now does 70% of my competitive analysis, market scans, and deep dives, all automatically.

Here’s the exact system (and the prompts) you can copy to do the same:

1. Literature Review Automation

Prompt:

“Act as a research collaborator specializing in [field].
Search the latest papers (past 12 months) on [topic], summarize key contributions, highlight methods, and identify where results conflict.
Format output as: Paper | Year | Key Idea | Limitation | Open Question.”

Outputs a structured meta-analysis with citations, perfect for your review sections.
Oct 18 12 tweets 3 min read
R.I.P Google Scholar.

I'm going to share the 10 Perplexity prompts that turn research from a chore into a superpower.

Copy & paste these into Perplexity right now:

1. Competitive Intelligence Deep Dive

"Analyze [company name]'s product strategy, recent feature releases, pricing changes, and customer sentiment from the last 6 months. Compare against top 3 competitors. Include any executive statements or strategy shifts."
Oct 10 9 tweets 2 min read
Google just did the unthinkable.

They built a voice search model that doesn’t understand words; it understands intent.

It’s called Speech-to-Retrieval (S2R), and it might mark the death of speech-to-text forever.

Here’s how it works (and why it matters way more than it sounds) ↓

Old voice search worked like this:

Speech → Text → Search.

If ASR misheard a single word, you got junk results.

Say “The Scream painting” → ASR hears “screen painting” → you get art tutorials instead of Munch.

S2R deletes that middle step completely.
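The retrieval idea behind skipping the transcription step can be sketched with embeddings. This is a hedged illustration, not Google's model: the three-number vectors below are toy stand-ins for what learned audio and document encoders would produce in a shared embedding space. The search step is just nearest-neighbor by cosine similarity:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend document embeddings in a shared audio/text space (toy values).
docs = {
    "The Scream (Munch painting)": [0.9, 0.1, 0.0],
    "screen painting tutorials":   [0.1, 0.9, 0.2],
}

# A misheard transcript ("screen painting") would match the wrong document.
# A direct audio embedding that lands near the intended topic does not need
# the words to be right, only the intent.
audio_embedding = [0.85, 0.15, 0.05]

best = max(docs, key=lambda d: cosine(audio_embedding, docs[d]))
print(best)
```

Because matching happens in vector space rather than on transcribed words, a one-word ASR error never gets the chance to derail the query, which is exactly the failure mode the "Scream vs. screen" example describes.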
Oct 6 14 tweets 5 min read
I analyzed every single prompt in Anthropic's official library.

What I found will make you delete every "prompt engineering course" you bought.

Here's the framework they actually use:

First discovery: they're obsessed with XML tags.

Not markdown. Not JSON formatting. XML.

Why? Because Claude was trained to recognize structure through tags, not just content.

Look at how Anthropic writes prompts vs how everyone else does it:

Everyone else:

You are a legal analyst. Analyze this contract and identify risks.

Anthropic's way:

<role>Legal analyst with 15 years of M&A experience</role>

<task>
Analyze the following contract for potential legal risks
</task>

<instructions>
- Focus on liability clauses
- Flag ambiguous termination language
- Note jurisdiction conflicts
</instructions>

The difference? Claude can parse the structure before processing content. It knows exactly what each piece of information represents.
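The tagged style above is easy to generate programmatically. A minimal sketch, assuming nothing beyond standard Python: the tag names (`<role>`, `<task>`, `<instructions>`) and the `build_prompt` helper are illustrative choices, not an official Anthropic API; any consistent set of tags works the same way.

```python
# Build a Claude-style prompt with XML-tagged sections (illustrative helper).
def build_prompt(role, task, instructions):
    bullets = "\n".join(f"- {item}" for item in instructions)
    return (
        f"<role>{role}</role>\n\n"
        f"<task>{task}</task>\n\n"
        f"<instructions>\n{bullets}\n</instructions>"
    )

prompt = build_prompt(
    role="Legal analyst with 15 years of M&A experience",
    task="Analyze the following contract for potential legal risks",
    instructions=[
        "Focus on liability clauses",
        "Flag ambiguous termination language",
        "Note jurisdiction conflicts",
    ],
)
print(prompt)
```

The resulting string is what you would pass as the user message; keeping each concern in its own tagged section is the whole trick.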
Sep 27 9 tweets 2 min read
Fuck it.

I'm going to share the n8n workflow that turned my WhatsApp into Jarvis.

Send it any website link and it learns forever.

Here's how to build it (step-by-step guide 👇)

The workflow is brilliant. It starts with a WhatsApp trigger that catches both voice and text messages.

Voice notes get transcribed using OpenAI Whisper. Text goes straight through.

But here's the genius part - it uses a Switch node to route messages differently based on whether you're chatting or training it.
Sep 23 13 tweets 2 min read
Everyone says "be authentic" on LinkedIn.

Then they post the same recycled motivational garbage.

I've been using AI to write posts that sound more human than most humans.

10 prompts I use in Claude that got me 50K followers in 6 months:

1. Create a high-performing LinkedIn post

“You are a top-performing LinkedIn ghostwriter.
Write a single post (max 300 words) on [topic] that provides insight, tells a short story, and ends with a strong takeaway or CTA.”
Sep 22 12 tweets 3 min read
Claude > ChatGPT
Claude > Grok
Claude > Gemini

But 99.9% of users don't know how to get 100% accurate results from Claude.

To fix this, you need to learn how to write prompts for Claude.

Here's a complete guide to prompting Claude with XML tags for the best results:

XML tags work because Claude was trained on tons of structured data.

When you wrap instructions in <tags>, Claude treats them as separate, weighted components instead of one messy blob.

Think of it like giving Claude a filing system for your request.