Alex Hughes
head of growth @droxyai
Dec 15, 2025 14 tweets 5 min read
WARNING: After you use these prompts, you’ll never write the same way again.

This might be the most useful thing I’ve shared all year.

Here are 12 prompts that turn any LLM into a full writing studio that works harder than you do:

1/ The “Voice Injection” Prompt

Gets the model to fully absorb your writing style.

“Here are 5 samples of my writing. Extract my tone, pacing, sentence structure, and emotional signatures. Confirm when my ‘voice profile’ is ready.”

This sets the foundation.
Dec 1, 2025 13 tweets 5 min read
Someone gave Nano Banana Pro a simple prompt and it exposed how everyday things are really produced.

Here are 10 things that were made by Nano Banana Pro:

1. Pyramids

2. Ramen
Nov 29, 2025 12 tweets 5 min read
If you’re not using AI to train your psychological instincts, you’re behind.
These 7 Greene-based prompts will change that.

Here’s how to run them 👇

1. The Power Dynamics Decoder (Law of Awareness)

"Break down the power dynamics between me and [PERSON/GROUP] in the context of [SITUATION]. Identify: 1) What they want, 2) What they fear, 3) Unspoken motives, 4) Leverage points I am missing. Give me a clear map of the terrain so I don't walk in blind."
Nov 26, 2025 13 tweets 3 min read
I shouldn’t share this.

These 10 Perplexity prompts are the ones founders pass around quietly, the ones that turn vague curiosity into an actual SaaS concept overnight.

(Bookmark this for later)

1. SaaS Opportunity Mapper

"Analyze unmet needs in the niche: [ENTER NICHE]. Identify 5 urgent, high-value problems and propose SaaS solutions with target users, workflows, and revenue models."
Nov 22, 2025 8 tweets 3 min read
🚨 Stop using ChatGPT and Perplexity for research.

I just spent 48 hours testing Gemini 3.0 and holy shit... it's not even close.

This thing finds connections that other models completely miss.

Here are 5 powerful ways to use Gemini 3.0 for research:

1. Investment & Startup Research

Want to invest in a startup or analyze potential unicorns? Use DeepSearch to uncover financial health, investor trends, and market positioning.

Try this prompt:

"Analyze the startup landscape in [industry]. Identify promising startups, their funding rounds, valuation trends, and investor interest. Provide actionable insights."
Nov 20, 2025 7 tweets 3 min read
Holy shit… the consulting industry is in trouble

You don’t need a $300k firm anymore.

Gemini 3.0 Pro can now run a full competitive market analysis better and faster than a traditional firm.

Here are the 3 mega-prompts I use to replicate McKinsey-level insights for free:

We use these 3 mega-prompts for different tasks:

1/ The Consultant Framework

Prompt: "You are a world-class strategy consultant trained by McKinsey, BCG, and Bain. Act as if you were hired to provide a $300,000 strategic analysis for a client in the [INDUSTRY] sector.

Here is your mission:

1. Analyze the current state of the [INDUSTRY] market.
2. Identify key trends, emerging threats, and disruptive innovations.
3. Map out the top 3-5 competitors and benchmark their business models, strengths, weaknesses, pricing, distribution, and brand positioning.
4. Use frameworks like SWOT, Porter’s Five Forces, and strategic value chain analysis to assess risks and opportunities.
5. Provide a one-page strategic brief with actionable insights and recommendations for a hypothetical company entering or growing in this space.

Output everything in concise bullet points or tables. Make it structured and ready to paste into slides. Think like a McKinsey partner preparing for a C-suite meeting.

Industry: [INSERT INDUSTRY OR MARKET HERE]"
Oct 11, 2025 11 tweets 3 min read
R.I.P. prompt engineering 💀

Google just turned Gemini into a full-fledged computer operator.

It can book flights, edit code, and update Google Sheets on its own.

Here’s how Gemini 2.5 Computer Use quietly ends manual prompting:

Until now, most AI “agents” hit a wall.

They could write code or text, but they couldn’t use software.

Gemini 2.5 fixes that.

It literally sees your screen, reasons over UI elements, and takes actions (click, type, scroll) in a feedback loop until the job’s done.

Think of it as: GPT + browser + mouse + brain.
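
Under the hood, this is an observe-decide-act loop. Here’s a minimal Python sketch of that loop, assuming stub helpers (capture_screen, propose_action, and execute_action are placeholders for illustration, not the actual Gemini 2.5 Computer Use API):

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", "scroll", or "done"
    detail: str = ""   # e.g. which element to click or what text to type

def capture_screen() -> bytes:
    """Placeholder: would return a screenshot of the current UI."""
    return b""

def propose_action(goal: str, screenshot: bytes, step: int) -> Action:
    """Placeholder: would ask the model for the next UI action toward the goal."""
    return Action("done") if step > 0 else Action("click", "Search button")

def execute_action(action: Action) -> None:
    """Placeholder: would actually click/type/scroll in a browser or OS."""
    print(f"executing: {action.kind} {action.detail}")

def run_agent(goal: str, max_steps: int = 20) -> None:
    for step in range(max_steps):
        shot = capture_screen()                    # observe the current screen
        action = propose_action(goal, shot, step)  # model decides the next action
        if action.kind == "done":                  # model judges the goal complete
            break
        execute_action(action)                     # act, then loop with fresh state

run_agent("Find the cheapest flight to Tokyo")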
Oct 10, 2025 7 tweets 4 min read
What the fuck just happened 🤯

Stanford just made fine-tuning irrelevant with a single paper.

It’s called Agentic Context Engineering (ACE) and it proves you can make models smarter without touching a single weight.

Instead of retraining, ACE evolves the context itself.

The model writes, reflects, and rewrites its own prompt over and over until it becomes a self-improving system.

Think of it like the model keeping a living notebook.
Every failure becomes a lesson. Every success becomes a rule.

And the results are absurd:

+10.6% better than GPT-4–powered agents on AppWorld
+8.6% on financial reasoning
86.9% lower cost and latency
No labels. Just feedback.

Everyone’s obsessed with “short, clean” prompts.

ACE flips that. It builds dense, evolving playbooks that compound over time and never forget.

Because LLMs don’t crave simplicity.

They crave context density.

If this scales, the next generation of AI won’t be fine-tuned.
It’ll be self-tuned.

We’re entering the era of living prompts.

Here’s how ACE works 👇

It splits the model’s brain into 3 roles:

Generator - runs the task
Reflector - critiques what went right or wrong
Curator - updates the context with only what matters

Each loop adds delta updates: small context changes that never overwrite old knowledge.

It’s literally the first agent framework that grows its own prompt.
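
To make the three roles concrete, here’s a toy Python sketch of one ACE-style loop, assuming a generic llm() helper (a stand-in for any chat-model call, not the paper’s actual code):

def llm(prompt: str) -> str:
    """Stand-in for a real model call (swap in any chat API client)."""
    return f"[model output for: {prompt[:40]}...]"

def ace_step(task: str, playbook: list[str]) -> list[str]:
    context = "\n".join(playbook)
    attempt = llm(f"Playbook:\n{context}\n\nTask: {task}")                          # Generator: runs the task
    critique = llm(f"Task: {task}\nAttempt: {attempt}\nWhat went right or wrong?")  # Reflector: critiques the attempt
    delta = llm(f"Critique: {critique}\nWrite ONE new rule worth keeping.")         # Curator: distills a delta update
    return playbook + [delta]  # append-only: old knowledge is never overwritten

playbook: list[str] = []
for _ in range(3):  # each loop compounds the playbook
    playbook = ace_step("Extract invoice totals from messy PDFs", playbook)
print("\n".join(playbook))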
Oct 4, 2025 9 tweets 3 min read
HUGE news from Microsoft: they just open-sourced a production-ready agent framework that actually works.

Agent Framework is Semantic Kernel + AutoGen in one SDK.

Every AI engineer needs to see this.

Here's everything you need to know:

The problem: existing frameworks forced a tradeoff.

Semantic Kernel = stable, enterprise-ready, but rigid.

AutoGen = experimental, multi-agent magic, but zero observability.

Developers kept asking: "Why can't we have both?"

Microsoft Agent Framework is the answer.
Sep 27, 2025 12 tweets 4 min read
Holy shit... Cloudflare just open-sourced an entire AI coding platform that lets anyone build and deploy apps with natural language.

VibeSDK is basically Replit/Cursor but you can deploy your own version in one click.

This changes everything.

Here's how it works:

Here's what just happened:

You can now spin up your own "vibe coding" platform where users describe what they want and AI builds it.

- Complete development environment
- Isolated sandboxes for every user
- Auto-deployment to Cloudflare's global network
- Built-in caching and observability

All open source. All free.
Sep 25, 2025 17 tweets 4 min read
Holy shit...

Stanford just published research that destroys every prompt engineering guru.

Turns out most "advanced techniques" are just survivorship bias and confirmation bias.

Here's what the data actually shows:

The biggest lie: "Be specific and detailed"

Stanford researchers tested 100,000 prompts across 12 different tasks.

Longer prompts performed WORSE 73% of the time.

The sweet spot? 15-25 tokens for simple tasks, 40-60 for complex reasoning.
Sep 24, 2025 14 tweets 4 min read
Everyone's using ChatGPT, Claude, and Gemini.

But 99% are stuck in beginner mode.

Here are 5 advanced learning prompts (with copy-paste templates) that turn any AI into your personal Harvard professor:

1. The Skill Breakdown

Prompt:

"I want to learn [skill]. Break this down into 5 progressive levels from beginner to advanced. For each level, tell me:

Core concepts to master
Practical exercises to try
How I'll know I'm ready for the next level
Estimated time to complete

Make it actionable, not theoretical."
Sep 23, 2025 16 tweets 5 min read
Prompt engineering "experts" are teaching you wrong.

They overcomplicate what's actually dead simple.

I reverse-engineered how the best AI researchers actually prompt.

Here's the complete guide you can follow to become a pro AI user:

WEEK 1-2: STOP BEING VAGUE

Bad prompt: "Help me with marketing"

Good prompt: "Write 5 email subject lines for a project management SaaS launching to small business owners. Make them curiosity-driven and under 50 characters."

See the difference? Specific request, clear audience, exact format. Practice this for 30 minutes daily.
Sep 22, 2025 14 tweets 4 min read
OpenAI engineers don't prompt like everyone else.

I reverse-engineered their internal techniques from leaked docs and demos.

The difference is insane.

Here are the 5 insider methods they don't want you to know:

1. Role Assignment

Don't just ask questions. Give the AI a specific role first.

❌ Bad: "How do I price my SaaS?"

✅ Good: "You're a SaaS pricing strategist who's worked with 100+ B2B companies. How should I price my project management tool?"

The AI immediately shifts into expert mode.
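
If you’re calling the API directly, the same trick maps onto the system message. A minimal sketch with the OpenAI Python SDK (the model name is just an example):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        # The role goes in the system message instead of being buried in the question.
        {"role": "system", "content": "You're a SaaS pricing strategist who's worked with 100+ B2B companies."},
        {"role": "user", "content": "How should I price my project management tool?"},
    ],
)
print(response.choices[0].message.content)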
Sep 17, 2025 14 tweets 4 min read
Scientists just published the most rigorous test of AI consciousness ever run.

They pushed Claude through experiments that most ethics boards wouldn’t even approve for humans.

What they found flips our understanding of “consciousness.”

Here's the breakdown:

Virtual world test:

They built a 4-room virtual environment where Claude could freely explore different types of content. Each room contained 20 letters with different themes.

The question: would Claude show genuine preferences or just random behavior?
Sep 10, 2025 13 tweets 4 min read
MIT just dropped a report that will define the next decade of business.

"The GenAI Divide" reveals which companies will survive AI disruption.

The data is absolutely brutal for most businesses.

Here's what every leader needs to know →

MIT analyzed over 300 AI projects, interviewed 52 organizations, and surveyed 153 senior leaders.

The result?

→ 95% of enterprise AI implementations are failing.
→ Only about 5% of pilots reach production and deliver measurable P&L impact.

Adoption ≠ Transformation.
Sep 2, 2025 14 tweets 3 min read
MIT researchers just found out:

99% of people are prompting wrong.

They throw random words at AI and hope for magic.

Here’s how to actually get consistent, high-quality outputs:

There are 3 main ways to prompt:

👉 Zero-shot
👉 Few-shot
👉 Chain-of-thought

Each works in different scenarios.

Get this wrong, and your outputs will always be shaky.
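
Here’s what the three styles look like side by side, as plain prompt strings in a small Python sketch (the review example is made up for illustration):

prompts = {
    "zero_shot": (
        "Classify this review as positive or negative: "
        "'The battery dies in an hour.'"
    ),
    "few_shot": (
        "Review: 'Love the screen.' -> positive\n"
        "Review: 'Broke after a week.' -> negative\n"
        "Review: 'The battery dies in an hour.' ->"
    ),
    "chain_of_thought": (
        "Classify this review as positive or negative. "
        "Reason through it step by step before giving the final label: "
        "'The battery dies in an hour.'"
    ),
}

for style, prompt in prompts.items():
    print(f"--- {style} ---\n{prompt}\n")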
Sep 1, 2025 10 tweets 3 min read
Context length is the most important AI concept nobody explains.

It’s literally why your chatbot “forgets.”

Here’s the concept explained in plain English 👇

Every Large Language Model (LLM) has a token limit.

A token = a chunk of text (≈ 3–4 characters of English).

Think of it as the AI’s working memory.

If you exceed it, the model starts dropping information.

Example:

- GPT-4o has ~128k tokens (~300 pages of text).
- Claude 3.5 Sonnet has 200k tokens (~500 pages).
- Gemini 1.5 Pro: 1M+ tokens (~3,000 pages).

But no model has “infinite memory.”
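
You can check this yourself. A quick sketch with the tiktoken library (tokenizers differ per model; cl100k_base is used here purely for illustration):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Context length is the most important AI concept nobody explains."
tokens = enc.encode(text)

print(len(text), "characters ->", len(tokens), "tokens")  # roughly 3-4 characters per token
print(tokens[:10])                                        # token ids, not words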
Aug 31, 2025 11 tweets 2 min read
Holy sh*t… Claude with XML is a different beast.

Anthropic researchers quietly dropped the framework — and no one’s talking about it.

It’s like switching from calculator to supercomputer.

Here’s the structure they didn’t put in the docs:

Why XML?

Claude was trained on structured, XML-heavy data like documentation, code, and datasets.

So when you use XML tags in your prompts, you’re literally speaking its native language.

The result? Sharper, cleaner, and more controllable outputs.

(Anthropic says XML-tagged prompts get the best results.)
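
Here’s what an XML-tagged prompt can look like in practice, sent with the Anthropic Python SDK (the model name and tag names are examples, not an official template):

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = """<instructions>
Summarize the document in 3 bullet points for a busy executive.
</instructions>

<document>
{paste your document text here}
</document>

<output_format>
One bullet per key point, under 20 words each.
</output_format>"""

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)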
Aug 30, 2025 7 tweets 3 min read
You don’t need courses anymore.

Google Gemini now has 'Guided Learning', a full AI-powered tutor that explains, tests, and checks your understanding.

Here’s how it works (and why it's a game changer):

1. How to get in:

• Go to gemini.google.com
• Start a new chat
• Choose Guided Learning
• Ask a question or upload a PDF/notes
• Turn them into a lesson with practice.
Aug 27, 2025 13 tweets 4 min read
The top 1% of AI users get 10x better results with the same models as everyone else.

Their secret? They mastered the skill everyone ignores: prompting.

MIT proved it drives 50% of performance.

The skill that changes everything ↓

When people upgrade to more powerful AI, they expect better results.

And yes, newer models do perform better.

But this study found a twist:

Only half the quality jump came from the model.

The rest came from how users adapted their prompts.