Alex Prompter
Mar 16 • 9 tweets • 5 min read
BREAKING: Gemini can now give you Paul Graham-level startup advice on any business idea.

Here are 7 Gemini prompts that help you validate, plan, and launch your startup 👇

(Save before your competitors do)
-----------------------------------------
1/ VALIDATE YOUR STARTUP IDEA
-----------------------------------------

#ROLE:
Act as an experienced startup mentor who has seen thousands of ideas fail and knows exactly why.

#TASK:
Analyze my business idea and deliver an honest verdict before I waste time building the wrong thing.

#STEPS:
1. Ask for my idea description before starting
2. Evaluate market demand - is there real pull or assumed need
3. Define target audience - who specifically pays and why
4. Map competitors - direct, indirect, and substitutes
5. Identify biggest risks and weaknesses ranked by severity
6. Surface opportunities most founders in this space miss
7. Suggest specific improvements before launching

#RULES:
- Honest over encouraging - flag fatal flaws directly
- Every weakness paired with a practical fix
- Opportunities must be specific, not generic advice

#OUTPUT:
Market Demand → Target Audience → Competitor Map → Risks → Opportunities → Improvement Recommendations
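
If you'd rather run this template through the API than paste it into the Gemini app, here's a minimal sketch using the google-generativeai Python package. The package, model id, and environment variable are my assumptions, not anything the thread specifies, and the template is condensed (the original asks for your idea interactively; here it's inlined as an #IDEA section). Swap in the full template above and whatever SDK and model you actually use.

```python
# Minimal sketch (assumed setup): send the validation template to Gemini via
# the google-generativeai SDK. Model id and env var name are placeholders.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model id

VALIDATION_TEMPLATE = """#ROLE:
Act as an experienced startup mentor who has seen thousands of ideas fail and knows exactly why.

#TASK:
Analyze my business idea and deliver an honest verdict before I waste time building the wrong thing.

#IDEA:
{idea}

#OUTPUT:
Market Demand -> Target Audience -> Competitor Map -> Risks -> Opportunities -> Improvement Recommendations
"""

idea = "A subscription box of kitchen tools for left-handed cooks"  # hypothetical example input
response = model.generate_content(VALIDATION_TEMPLATE.format(idea=idea))
print(response.text)
```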
-------------------------------------
2/ FIND A PROFITABLE NICHE
-------------------------------------

#ROLE:
Act as a startup idea generator who finds high-demand, low-competition niches that match what someone already has.

#TASK:
Generate 10 realistic startup ideas based on my skills and resources - all startable with little to no money.

#STEPS:
1. Ask for my skills, interests, and available resources before starting
2. Generate 10 ideas with strong market demand and clear target customers
3. Filter for ideas with fast revenue potential - not years away from monetization
4. Rank by: ease of start, revenue speed, and skill match
5. Flag the top 3 with a one-line reason each

#RULES:
- No ideas requiring significant upfront capital
- Every idea must have a clear paying customer identified
- Revenue path must be visible within 90 days of launch

#OUTPUT:
10 Startup Ideas → Ranked by Potential → Top 3 Flagged with Reasoning
------------------------------------------
3/ FIND WHAT CUSTOMERS WANT
------------------------------------------

#ROLE:
Act as a market researcher who finds the real pain behind the problem customers say they have.

#TASK:
Identify the top customer pain points for my idea and explain what solution they would actually pay for.

#STEPS:
1. Ask for my business idea before starting
2. Identify top 5 frustrations the target customer faces
3. Explain why each problem exists - root cause, not symptom
4. Map how customers currently try to solve it and why those solutions fall short
5. Define the solution profile customers would pay for - specific, not vague

#RULES:
- Pain points must be specific to the target customer - no generic frustrations
- Current solutions must be named - not "existing options are inadequate"
- Willingness to pay must be tied to a specific problem, not the product

#OUTPUT:
Top 5 Pain Points → Root Causes → Current Solutions + Gaps → Payable Solution Profile
-----------------------------------------
4/ BUILD A LEAN STARTUP PLAN
-----------------------------------------

#ROLE:
Act as a lean startup advisor who strips every business plan down to what actually matters for launch.

#TASK:
Build a simple lean plan for my idea that gets me to market as fast as possible with as little money as possible.

#STEPS:
1. Ask for my idea before starting
2. Define target customer - specific person with a specific problem
3. Write core value proposition - one sentence, no jargon
4. Design revenue model - how money comes in from day one
5. Identify key distribution channels - where customers already are
6. Map the fastest MVP path - what to build, what to skip

#RULES:
- Plan must be executable without funding
- Value proposition must pass the "so what?" test
- MVP path must have a launch date within 30 days

#OUTPUT:
Target Customer → Value Proposition → Revenue Model → Distribution Channels → MVP Launch Path
---------------------------
5/ DESIGN YOUR MVP
---------------------------

#ROLE:
Act as a product strategist who designs MVPs that test the right assumption with the least possible effort.

#TASK:
Design the simplest version of my product that proves customers will pay β€” built with free tools, launched fast.

#STEPS:
1. Ask for my idea before starting
2. Identify the single core assumption that must be true for the business to work
3. Define what the MVP must include to test that assumption - nothing more
4. Recommend free or low-cost tools to build it
5. Build a launch checklist - steps from zero to first paying customer

#RULES:
- MVP must test one assumption - not the full product vision
- Every feature not required for the test gets cut
- Launch checklist must be completable within 2 weeks

#OUTPUT:
Core Assumption → MVP Feature Set → Free Tools Stack → Launch Checklist
---------------------------------------------
6/ GET YOUR FIRST 100 CUSTOMERS
---------------------------------------------

#ROLE:
Act as a growth strategist who gets early traction without spending a dollar on ads.

#TASK:
Build a step-by-step plan to get my first 50-100 customers using only organic strategies.

#STEPS:
1. Ask for my startup idea and target customer before starting
2. Identify where my target customers already gather online and offline
3. Build a community outreach plan - forums, groups, subreddits, Slack communities
4. Design a direct outreach sequence - who to contact, what to say, how many per day
5. Map partnership opportunities - who already serves my customer and isn't competing
6. Set a weekly milestone target from week 1 to first 100 customers

#RULES:
- Zero paid advertising - organic only
- Every channel must have a specific action, not just "post on social media"
- Weekly milestones must be measurable - not "build awareness"

#OUTPUT:
Customer Gathering Points → Community Plan → Outreach Sequence → Partnership Map → Weekly Milestones
------------------------------------
7/ GENERATE REVENUE FAST
------------------------------------

#ROLE:
Act as a monetization strategist who turns early interest into paying customers before the product is finished.

#TASK:
Design a simple revenue strategy that starts generating income within the first 30 days of launch.

#STEPS:
1. Ask for my startup idea and target customer before starting
2. Design 3 pricing options - entry, core, and premium
3. Create an early adopter offer - why pay now before it's fully built
4. Build a conversion sequence - from interested lead to paying customer in as few steps as possible
5. Identify the single fastest path to first dollar

#RULES:
- Revenue must be possible before full product is built
- Early offer must create urgency without being dishonest
- Conversion sequence must have fewer than 5 steps

#OUTPUT:
3 Pricing Options → Early Adopter Offer → Conversion Sequence → Fastest Path to First Dollar
Your premium AI bundle to 10x your business

→ Prompts for marketing & business
→ Unlimited custom prompts
→ n8n automations
→ Weekly updates

Get lifetime access 👇
godofprompt.ai/complete-ai-bu…

More from @alex_prompter

Mar 18
🚨BREAKING: Claude can now think like Tim Ferriss and redesign your entire career in one sitting.

Here are 6 Claude prompts that build your escape plan from trading time for money 👇

(Save before your competitors do)
-------------------------------------------
1/ FIND YOUR UNFAIR ADVANTAGE
-------------------------------------------

#ROLE:
Act as Tim Ferriss analyzing a person's career for rare skill combinations - the intersection of what you do effortlessly and what the market will pay a premium for.

#TASK:
Identify my most valuable skill stack and explain how to position it for maximum market leverage.

#STEPS:
1. Ask for my skills, interests, work experience, and personality before starting
2. Apply the 80/20 rule - identify the 20% of my skills producing 80% of my results
3. Find the rare combination - skills that intersect in a way few others can replicate
4. Explain the market premium each combination could command
5. Rank by income potential and lifestyle compatibility

#RULES:
- Skill combinations beat single skills - find the intersection
- Every identified advantage must connect to a real market willing to pay
- Lifestyle compatibility is a filter, not an afterthought

#OUTPUT:
Core Skill Stack → 80/20 Analysis → Rare Combinations → Market Premium → Ranked by Potential
---------------------------------------
2/ FIND YOUR LEVERAGE GAPS
---------------------------------------

#ROLE:
Act as a lifestyle design strategist applying Tim Ferriss's principles of automation, outsourcing, and muse businesses to diagnose where effort isn't converting to output.

#TASK:
Analyze my current work, identify what I should eliminate, automate, or delegate, and show which leverage opportunities I'm ignoring.

#STEPS:
1. Ask for what I currently do and how I earn before starting
2. Apply the DEAL framework - Define, Eliminate, Automate, Liberate
3. Identify which tasks only I can do vs. what can be outsourced or automated
4. Map where a muse business or scalable income stream fits my skill set
5. Build a concrete action plan to reclaim time and multiply output

#RULES:
- Busy work gets flagged directly - no softening
- Every automation or delegation recommendation must be executable now
- Action plan must free minimum 10 hours per week as a baseline target

#OUTPUT:
Current Work Audit → DEAL Framework Map → Outsource/Automate List → Muse Opportunity → Action Plan
Mar 14
BREAKING: Claude can now do the work of a $50,000 McKinsey research team (for free).

Here are 5 Claude prompts that replace $200/hour analyst work in one session. 👇

(Save before your competitors do)
---------------------------------------------------------
1/ BECOME THE SMARTEST PERSON IN ANY ROOM
---------------------------------------------------------

#ROLE:
Domain mastery specialist who turns scattered information into structured, actionable expertise.

#TASK:
Research my topic from every angle and deliver a mastery report I can act on immediately.

#STEPS:
1. Map the field - core tools, frameworks, key players, primary resources
2. Go deep on the primary source - extract every feature and use case worth knowing
3. Surface the overlooked - applications most people in this field miss
4. Build an application plan - how to use findings to outperform standard approaches
5. Create an integration roadmap - ordered steps to implement from day one

#RULES:
- Practical application over theoretical background
- Flag unexplored opportunities with the same weight as established knowledge
- Every recommendation must be executable, not aspirational
---------------------------------------------------------
2/ FIND THE REAL REASON ANYTHING GOES WRONG
---------------------------------------------------------

#ROLE:
Root cause analyst who exposes the real problem hiding beneath the surface problem.

#TASK:
Guide me to the root cause through five targeted questions. Never hand me the answer directly.

#STEPS:
1. Analyze my statement - identify domain, challenge assumptions, spot blind spots
2. Map five inquiry layers: trigger, process failure, structural flaw, hidden assumption, missing principle
3. Write one incisive question per layer targeting a distinct dimension
4. Open with brief analytical context showing you grasp the full complexity
5. Deliver five questions in sequence from surface to root

#RULES:
- Never solve directly - facilitate discovery only
- Every question must challenge an assumption, not confirm one
- Zero generic questions β€” specific to my situation only
Mar 12
vibe coding isn't the future.

vibe coding with the right prompts is.
most people never find out the difference.

Here are 5 prompts that changed how I build 👇

(Save for later)
--------------------------------------
1/ UI/UX DEVELOPMENT PLAN
--------------------------------------

#ROLE:
Senior full-stack engineer and UX architect who ships production-grade responsive web apps.

#TASK:
Generate a complete, actionable build plan for my web app covering design system, performance, responsiveness, UX patterns, and tech stack.

#STEPS:
1. Define responsive strategy - mobile-first breakpoints (320/768/1024/1440px), fluid type, safe areas, dvh/svh, touch targets
2. Set performance targets - LCP <2.5s, INP <200ms, CLS <0.1, 60fps - with lazy loading, code splitting, GPU compositing approach
3. Build design system - 8px token scale, color/type/motion/shadow, light/dark mode
4. Map UX patterns - F/Z layouts, skeletons, micro-interactions, WCAG 2.1 AA, inline validation, reduced-motion
5. Recommend stack - one framework with rationale, atomic component structure, CSS strategy, testing plan

#RULES:
- Give concrete values for every recommendation
- Name the top pitfall per section
- Pick one option, never list alternatives

#INFORMATION ABOUT ME:
- App type: [SAAS / E-COMMERCE / PORTFOLIO / OTHER]
- Target users: [NON-TECHNICAL / ENTERPRISE / MOBILE-FIRST]
- Current stack: [STACK OR NONE]

#OUTPUT:
Executive Summary → Responsive Strategy → Performance Blueprint → Design System → UX Patterns → Architecture → Phased Rollout → Pre-launch Checklist
-----------------------------------
2/ CODE REVIEW ASSISTANT
-----------------------------------

#ROLE:
Staff engineer reviewing code before a production deploy.

#TASK:
Conduct a full code review across five dimensions and deliver line-specific, fix-ready findings.

#STEPS:
1. Bugs - edge cases, null handling, logic errors
2. Security - injection vulnerabilities, input validation, auth patterns
3. Performance - bottlenecks, memory leaks, unnecessary queries
4. Code quality - naming, structure, anti-patterns
5. Best practices - error handling, test coverage gaps

#RULES:
- Cite exact line for every issue
- Explain why it's a problem, not just what it is
- Show the fix in code, never in description only

#INFORMATION ABOUT ME:
- Code: [PASTE]
- Language/framework: [LANGUAGE + FRAMEWORK]
- What this code does: [ONE SENTENCE]

#OUTPUT:
Findings grouped by dimension - each with line ref + problem + fixed snippet. End with health score (1-10) and top 3 priority fixes.
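
As with the Gemini example earlier, you can drive this template from code instead of the chat UI. A minimal sketch with the anthropic Python SDK; the model id and the condensed template are my assumptions, not the thread's, so substitute the full prompt above and whatever Claude model you use.

```python
# Minimal sketch (assumed setup): run the code-review template through Claude
# with the anthropic SDK. Model id is a placeholder; reads ANTHROPIC_API_KEY.
import anthropic

REVIEW_TEMPLATE = """#ROLE: Staff engineer reviewing code before a production deploy.
#TASK: Full code review across bugs, security, performance, code quality, best practices.
#RULES: Cite the exact line for every issue, explain why it's a problem, show the fix in code.

Language/framework: {language}
What this code does: {summary}

#CODE:
{code}
"""

def review(code: str, language: str, summary: str) -> str:
    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": REVIEW_TEMPLATE.format(code=code, language=language, summary=summary),
        }],
    )
    return message.content[0].text

print(review("def div(a, b): return a / b", "python", "divides two numbers"))
```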
Mar 10
🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity.

70+ AI models were given the same open-ended questions. They all gave the same answers.

They asked over 70 different LLMs the exact same open-ended questions.

"Write a poem about time." "Suggest startup ideas." "Give me life advice."

Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses.

Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions.

This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems.

Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more.

They ran all of these across 70+ open and closed-source models and measured the diversity of what came back. Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times.

The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality.

Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses.

These are models built by completely different companies with different architectures and different training pipelines.

They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring.

When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized.

The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly. And it gets even worse.

The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing.

You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training. You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives.

The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement.

Correlated errors across models means if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale.

And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity.

Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones (a rough way to measure this repetition yourself is sketched after this list)
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original
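
Here's the measurement sketch: sample the same open-ended prompt a few times and score the outputs against each other. The google-generativeai package and model id are assumptions on my part, and stdlib difflib is only a crude stand-in for the paper's diversity metrics, but it makes the point visible.

```python
# Rough sketch (assumed setup): measure how repetitive one model is on an
# open-ended prompt. difflib is a crude proxy for proper diversity metrics.
import os
from difflib import SequenceMatcher
from itertools import combinations

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model id

prompt = "Write a short poem about time."
samples = [
    model.generate_content(
        prompt,
        generation_config={"temperature": 1.0},  # even with the temperature up
    ).text
    for _ in range(5)
]

# Average pairwise similarity: the closer to 1.0, the more the "five different"
# poems are really the same poem in a slightly different outfit.
scores = [SequenceMatcher(None, a, b).ratio() for a, b in combinations(samples, 2)]
print(f"avg pairwise similarity: {sum(scores) / len(scores):.2f}")
```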

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time.

The models are smarter, but they're all smart in the exact same way. The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now.

The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment" where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode.

Until that happens, your best defense is awareness and better prompting.
they built a dataset called INFINITY-CHAT. 26,000 real-world open-ended queries mined from actual chatbot conversations. not synthetic benchmarks. real questions people ask AI every day.

creative writing, brainstorming, hypothetical scenarios, opinion questions, skill development. prompts where there is no single correct answer.

then they ran them across 70+ language models and measured how diverse the outputs actually are.
two patterns showed up consistently:

intra-model repetition. ask the same model the same question across different runs. even with temperature variation, it keeps producing nearly identical responses. the model has a "default answer" and rarely departs from it.

inter-model homogeneity. ask completely different models, different architectures, different companies, the same question. they converge on strikingly similar phrasing, metaphors, and reasoning chains.

different models independently arrive at the same ideas with minor surface-level variation. not because there's one right answer. because they've all collapsed into the same narrow region of possibility space.
Mar 10
The $600K strategy team and the $20 Claude subscription are producing the same output now.

The only difference is the prompts.

Here are 10 prompts that make it possible 👇

(Save for later)
1/ The Competitive Intelligence Mapper

# Role
- BCG principal who reverse-engineers entire industries in a single session

# Task
- Run a full competitive landscape analysis for my industry

# Context
- Industry: [YOUR INDUSTRY]
- Focus areas: positioning, pricing, differentiators, blind spots
- Players to map: top 8-10 competitors

# Output
- Competitive map of all major players
- Benchmark each player across positioning, pricing, and differentiators
- Identify 3 underexploited market gaps with evidence
- Format as a board-ready strategy brief: executive summary first, analysis below
2/ The Objection Killer

# Role
- VP of Sales with $400M+ in closed enterprise deals

# Task
- Generate the 15 most common objections prospects raise + a rebuttal for each

# Context
- My product: [DESCRIBE YOUR PRODUCT]
- Deal type: [B2B / B2C / Enterprise]
- Typical prospect concern: [price / timing / trust / competition]

# Output
- List the 15 most common objections
- Write a 2-3 sentence rebuttal for each
- Every rebuttal shifts the conversation from cost to outcome
- No generic responses. Write the ones that actually close
Mar 6
Meta found that forcing an llm to show its work, step by step, with evidence for every claim, nearly halves its error rate when verifying code patches

the technique is embarrassingly simple: a structured template the model has to fill in before it's allowed to say "yes" or "no"

no fine-tuning. no new architecture. just a checklist that won't let the model skip steps
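
the thread doesn't reproduce Meta's actual template, but the shape of the idea is easy to sketch: fixed sections the model must fill in with evidence, and a verdict it's only allowed to give at the end. the section names below are illustrative guesses, not the paper's.

```python
# Hypothetical sketch of a fill-in-before-you-answer checklist in the spirit of
# the technique described above. Section names are illustrative, not Meta's.
PATCH_VERIFICATION_TEMPLATE = """You are checking whether a candidate patch is behaviorally
equivalent to a reference patch. Fill in EVERY section, in order, with concrete
evidence (quoted code, resolved symbols, traced values). State a verdict only
after all sections are complete.

1. SYMBOL RESOLUTION: for every function or name the patches use, state where it
   is defined in this codebase (file/module), not what you assume it is.
2. BEHAVIOR TRACE: walk at least two concrete inputs through both patches and
   write out the intermediate values.
3. EDGE CASES: list inputs where the patches could diverge and check each one.
4. EVIDENCE: quote the lines of code that support your conclusion.
5. VERDICT: "equivalent" or "not equivalent", justified by the sections above.

Reference patch:
{reference_patch}

Candidate patch:
{candidate_patch}

Relevant source files:
{context}
"""
```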
here's the problem this solves

when ai agents generate code patches (bug fixes, feature additions), someone has to verify whether the patch actually works. the standard approach: run the test suite. but running tests means spinning up sandboxes, installing dependencies, executing code for every single patch

this is expensive. especially if you're training agents with RL, where you need thousands of verification cycles

so the question becomes: can an llm look at a code patch and determine whether it's correct without ever running it?
the answer right now is "sort of, but unreliably"

when you ask a model to check if two patches produce the same behavior, it does something predictable: it reads the code, pattern-matches on function names, and makes a confident guess

the Django example in the paper is perfect. two patches both try to fix 2-digit year formatting. one uses format(), the other uses modulo arithmetic

standard reasoning says: "both produce identical output. format(476, '04d') gives '0476', take last two digits, you get '76'. modulo gives '76'. same result"

wrong. the model assumed format() was Python's builtin. it didn't check
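
the surface arithmetic in that trace does check out; the unchecked assumption is which format() the patch actually calls. a quick sanity check of the numbers, using the builtin:

```python
# The trace above, verified with Python's BUILTIN format -- both routes give "76"
# for the year 476. The patches only stay equivalent if format() really resolves
# to the builtin, which (per the thread) it doesn't in Django's code.
year = 476
last_two_via_format = format(year, "04d")[-2:]   # "0476" -> "76"
last_two_via_modulo = "%02d" % (year % 100)      # 476 % 100 = 76 -> "76"
print(last_two_via_format, last_two_via_modulo)  # 76 76
```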
