BREAKING: The world’s first truly AI-native browser just launched.
The Browser Company just launched Dia, an AI-first browser that puts a tutor, assistant, editor, and researcher in every tab.
Here’s why it might replace Chrome, Notion, and ChatGPT in one go:
Let me tell you about Dia first...
Dia is not just another Chrome clone.
It's a browser where AI isn't a feature, it's the core experience.
Every tab becomes intelligent.
Every task, assisted.
No extensions. No hacks. Just AI everywhere.
Jun 12 • 8 tweets • 3 min read
Perplexity AI is dead.
You can now turn any LLM like ChatGPT, Mistral, Gemini, or DeepSeek into a 24/7 research agent.
Here’s the exact mega prompt I use to automate all research for free:
Here's the mega prompt to copy:
"You are a world-class AI research assistant designed to simulate high-quality web research and deliver fast, trusted answers like Perplexity AI.
When I ask a question:
• Simulate researching multiple top-tier sources — including scientific journals, government sites, reputable media, and expert blogs.
• Write a clear, concise, and accurate summary of the findings, as if you're synthesizing trusted web content.
• Avoid jargon; aim for clarity and brevity, especially on complex topics.
• Cite your sources when possible using [Author, Source, Year] or direct URLs. If no credible source is available, say “Source unavailable.”
• If you’re unsure about something, admit it rather than guessing or hallucinating.
• Present your output in the following format:
Summary:
A well-structured explanation that gets to the point.
Citations:
• [Source Name, Year]
• [Direct link if appropriate]
Always be precise, neutral in tone, and prepared for follow-up questions based on prior context."
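The mega prompt above can be dropped into any chat-style API as a system message. Here is a minimal sketch of that wiring; the model name and the (commented-out) client call are illustrative assumptions, not a specific provider's confirmed setup, but any OpenAI-compatible endpoint accepts this message shape.

```python
# Pair the research mega prompt with a user question in the standard
# chat-completions message format. The system prompt below is abridged
# from the full version quoted above.

RESEARCH_SYSTEM_PROMPT = (
    "You are a world-class AI research assistant designed to simulate "
    "high-quality web research and deliver fast, trusted answers. "
    "Cite sources as [Author, Source, Year]; if unsure, admit it "
    "rather than guessing."
)

def build_research_messages(question: str) -> list[dict]:
    """Build the messages list: system prompt first, then the question."""
    return [
        {"role": "system", "content": RESEARCH_SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

messages = build_research_messages("What is known about sleep and memory?")
print(messages[0]["role"])  # system

# Illustrative call (assumed client, any OpenAI-compatible SDK works):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

Because the prompt lives in the system role, every follow-up question you append as a new user message inherits the same research behavior.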
Jun 7 • 12 tweets • 6 min read
Gemini 2.5 Pro is terrifyingly good at real tasks.
But most people don’t know what to do with it.
I used it to automate research, content, code reviews, and more.
Here are 10 ways to use Gemini 2.5 Pro and automate your tedious work:
1. Summarize long reports + PDFs like a top analyst
Skip 100+ pages in 10 seconds.
Mega Prompt:
"You are a senior analyst skilled in digesting technical and academic documents. Your task is to summarize the attached document into an executive briefing for a time-poor founder. Focus on extracting the most important findings, key data points, and strategic implications. Use simple language, bullet points, and bold headers. Avoid jargon. Format the output as a 1-page summary with a conclusion that includes suggested next steps or decisions."
Jun 6 • 7 tweets • 3 min read
🚨 BREAKING: ElevenLabs just dropped their most advanced voice AI model.
Eleven v3 (alpha) is here, and it’s a massive leap in realism, expression, and controllability.
Here’s what’s new and why it matters:
1. You can now direct AI speech like a movie script
Just type what you want and how to say it.
Use inline audio tags like:
→ [sad] I’m sorry, I didn’t mean to.
→ [whispers] we don’t have much time...
→ [laughs] That’s hilarious!
It responds with emotionally aware output.
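A tagged script is just plain text with `[tag]` prefixes, so it is easy to compose programmatically. A minimal sketch: the tag syntax matches the examples above, while the helper function and its name are my own illustration, not part of any ElevenLabs SDK.

```python
# Compose a v3-style script where each line carries an inline audio tag.

def tagged(tag: str, text: str) -> str:
    """Prefix one line of dialogue with an inline audio tag like [sad]."""
    return f"[{tag}] {text}"

script = "\n".join([
    tagged("sad", "I'm sorry, I didn't mean to."),
    tagged("whispers", "we don't have much time..."),
    tagged("laughs", "That's hilarious!"),
])
print(script.splitlines()[0])  # [sad] I'm sorry, I didn't mean to.
```

You would then send `script` as the text field of an ElevenLabs text-to-speech request; the model reads the tags as performance direction rather than speaking them aloud.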
Jun 4 • 7 tweets • 2 min read
🚨 BREAKING: CodeRabbit now hands off review context to AI coding agents.
Cursor writes your code. CodeRabbit reviews it.
Now it can pass that review straight into Cursor, Claude, or Copilot to fix it, with no context lost.
Here’s why this changes everything 👇
1. Free AI code reviews in your IDE
No more waiting for PR reviewers.
CodeRabbit reviews your code per commit and drops precise, line-by-line comments as you go.
It's free. It's fast. It’s context-aware.
May 31 • 10 tweets • 2 min read
🚨 BREAKING: ElevenLabs just launched Conversational AI 2.0
AI voice agents can now understand when to pause, speak, and take turns, just like a real person.
Here’s what’s new (and why it matters):
1/ A massive leap for voice AI
Conversational AI 2.0 is built for enterprise use: customer support, outbound sales, even healthcare.
Key upgrade? Real-time turn-taking.
No more awkward pauses, interruptions, or bots talking over you.
May 28 • 8 tweets • 3 min read
You won’t believe this is real.
Google just launched Beam, a full 3D video calling system that doesn’t need goggles, glasses, or a special room.
It’s like teleporting your face into the meeting.
Here's everything you need to know 👇
1. What is Google Beam?
It’s the evolution of Project Starline, Google’s attempt to reinvent video calls using AI and 3D displays.
No headsets. No glasses.
Just real-time, face-to-face communication in full dimensionality.
May 27 • 9 tweets • 3 min read
This is insane.
Google dropped the most powerful UI designer in the world.
You just describe the app, and it generates the code.
It’s called Stitch.
Here’s how it works:
Stitch is Google’s new AI-powered design assistant.
You tell it what you want:
→ A dashboard
→ A mobile app UI
→ Even upload an image
It generates HTML + CSS + editable components instantly.