Ihtesham Ali
Mar 18 · 13 tweets
The only guide to prompt engineering you'll ever need.

I went through every resource Anthropic and OpenAI have published publicly.

Here are 10 techniques that actually work in 2026:
1/ Role + Context stacking

Forget "act as an expert." That's beginner stuff.

The real move: give the model a role AND the situation it's operating in.

Instead of "you're a marketing expert" try:

"You're a direct response copywriter who's written 200+ landing pages for SaaS companies. I'm launching a B2B tool. My buyer is a VP of Engineering who hates being sold to."

The more specific the operating context, the sharper the output.

Generic persona = generic output.
Specific role + situation = output with a real point of view.

Anthropic's prompting docs frame this as giving the model a role plus real context. OpenAI's guide frames it as writing a clear, specific system prompt.

Same principle. Works every time.
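If you build prompts programmatically, the stacking looks like this. A minimal Python sketch: `build_messages` is my own helper name, and the message dicts just follow the generic chat-API shape both labs' SDKs accept.

```python
def build_messages(role: str, context: str, task: str) -> list[dict]:
    """Stack a specific role and its operating context into one system
    prompt, then send the actual task as the user turn."""
    system = f"{role}\n\n{context}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    role="You're a direct response copywriter who's written 200+ "
         "landing pages for SaaS companies.",
    context="I'm launching a B2B tool. My buyer is a VP of Engineering "
            "who hates being sold to.",
    task="Write the hero section for my landing page.",
)
```

The role and the situation live together in the system prompt; the task stays clean in the user turn.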
2/ Chain of thought forcing

Most people ask for the answer. Smart people ask for the reasoning first.

Here's the technique: add "Think through this step by step before giving your final answer" to any complex prompt.

Sounds obvious. Almost nobody does it.

I tested this on the same Claude prompt — with and without it.

Without: a decent answer in 3 sentences.
With: a 10-step breakdown that caught 2 edge cases I hadn't considered.

The model isn't smarter. You just unlocked a reasoning layer it wasn't using.

This is chain-of-thought prompting, and both labs' public guides recommend it for complex tasks. The difference on hard problems is not subtle.
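Because it's just a suffix, you can bake it into every complex prompt you send. A tiny sketch (`with_chain_of_thought` is a hypothetical helper, not a library function):

```python
COT_SUFFIX = "Think through this step by step before giving your final answer."

def with_chain_of_thought(prompt: str) -> str:
    """Append the reasoning trigger so the model shows its work
    before committing to an answer."""
    return f"{prompt.rstrip()}\n\n{COT_SUFFIX}"

prompt = with_chain_of_thought(
    "Should we migrate our billing service to an event-driven architecture?"
)
```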
3/ Constraint injection

LLMs want to please you. Which means they'll pad, hedge, and over-explain.

Constraints fix this.

"Explain this in 3 bullet points. Each bullet must be one sentence. No preamble."

"Give me 5 options. No overlap between them. Each must work for a complete beginner."

The wild part: constraints don't just shorten output. They improve quality.

Forcing the model to work within limits makes it prioritize, not ramble.

Anthropic's prompting guide covers this under controlling the output format. It's the fastest way to go from okay to actually usable.
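Constraints are easy to template. A minimal sketch (helper name is mine; the format is just a prompt with an explicit rules list appended):

```python
def add_constraints(prompt: str, constraints: list[str]) -> str:
    """Append explicit output constraints so the model prioritizes
    instead of padding."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"{prompt}\n\nConstraints:\n{rules}"

prompt = add_constraints(
    "Explain vector databases.",
    ["Exactly 3 bullet points", "Each bullet is one sentence", "No preamble"],
)
```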
4/ Few-shot examples

This is the technique everyone skips because it takes 2 extra minutes.

It's also the one that closes the gap between "pretty good" and "exactly right."

Before your request, show the model 2-3 examples of what you want.

"Here's a tweet that performed well: [example]
Here's another: [example]
Now write 5 in the same style for this topic."

You're not explaining what you want. You're showing it.

Both labs' prompting guides recommend showing examples over describing what you want. The model reverse-engineers your taste from the examples.

Use it for writing, formatting, tone, structure: anything where style matters.
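Assembling the few-shot block is two minutes of code, not two minutes of typing every time. A sketch under my own naming (the example tweets are illustrative placeholders):

```python
def few_shot_prompt(instruction: str, examples: list[str], task: str) -> str:
    """Show the model what 'good' looks like before asking for more of it."""
    shots = "\n\n".join(
        f"Example {i}:\n{ex}" for i, ex in enumerate(examples, start=1)
    )
    return f"{instruction}\n\n{shots}\n\nNow: {task}"

prompt = few_shot_prompt(
    "Write tweets in the style of the examples below.",
    [
        "Shipped the feature. Broke prod. Learned more in 2 hours than 2 months.",
        "Your roadmap is a guess. Your churn data is a fact. Plan from the facts.",
    ],
    "write 5 tweets about code review.",
)
```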
5/ Negative prompting

Most people tell AI what to do.

Almost nobody tells it what NOT to do.

"Write a LinkedIn post about this. No corporate jargon. No bullet points. Don't start with 'I.' Don't use the word 'excited.'"

The output shifts immediately.

Anthropic's docs have a whole section on telling the model what to avoid. The idea is that models trained on massive datasets pick up bad habits from average content.

Negative prompts break those defaults.

Takes 10 seconds to add. Changes the entire feel of the output.
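If you keep a standing list of banned patterns, you can attach it automatically. A minimal sketch (`with_exclusions` is a hypothetical helper):

```python
def with_exclusions(prompt: str, avoid: list[str]) -> str:
    """Spell out what NOT to do; exclusions override the model's
    average-content defaults."""
    donts = "\n".join(f"- {rule}" for rule in avoid)
    return f"{prompt}\n\nDo NOT:\n{donts}"

prompt = with_exclusions(
    "Write a LinkedIn post about shipping our v2.",
    [
        "use corporate jargon",
        "use bullet points",
        "start with 'I'",
        "use the word 'excited'",
    ],
)
```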
6/ Persona + Stakes

This one feels weird. It works anyway.

Assign the model a specific persona AND tell it something is on the line.

"You're a senior engineer at Google who just found a critical bug in production. The site goes down in 30 minutes. Review this code and tell me exactly what's wrong."

The "stakes" frame forces urgency and precision.

I've tested this dozens of times. Adding stakes to a persona prompt consistently produces more thorough, more direct responses.

The likely mechanism: the model pattern-matches to high-stakes scenarios it's seen in training and outputs accordingly.
7/ Iterative refinement loop

Single prompt → single output is a beginner workflow.

The real technique: treat the first output as a draft, then refine in the same conversation with targeted feedback.

Round 1: get the output.
Round 2: "The tone is right but it's too long. Cut it by 40% without losing the core argument."
Round 3: "The opening is weak. Rewrite just the first paragraph to hook faster."

Each round costs you 10 seconds. The quality compounds.

This is plain conversational iteration, and it works because the model keeps the full context of what you're building.

Use it like a back-and-forth with an editor, not a vending machine.
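The loop is mechanical enough to script. A sketch where `send` stands in for any chat call that takes the message history and returns the assistant's reply (the lambda below is a stand-in, not a real API):

```python
def refine(send, task: str, feedback_rounds: list[str]) -> str:
    """Run draft -> feedback -> redraft inside one conversation.
    `send` is any chat function: message history in, reply string out."""
    messages = [{"role": "user", "content": task}]
    reply = send(messages)
    for feedback in feedback_rounds:
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": feedback})
        reply = send(messages)
    return reply

final = refine(
    send=lambda msgs: f"draft after {len(msgs)} turns",  # stand-in for a real API call
    task="Write a product announcement.",
    feedback_rounds=[
        "The tone is right but it's too long. Cut it by 40%.",
        "The opening is weak. Rewrite just the first paragraph.",
    ],
)
```

The point of keeping one `messages` list is that every round of feedback lands on top of the full history, not a fresh prompt.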
8/ Task decomposition

Complex prompts produce messy outputs.

Break the task into stages.

Instead of: "Write a full marketing strategy for my product."

Do this:
Step 1 → "Identify the 3 biggest pain points for [audience]."
Step 2 → "Now write a positioning statement based on those pain points."
Step 3 → "Now write 5 headlines that address each pain point directly."

Same information. Completely different quality.

Anthropic's docs call this "prompt chaining." The model stays focused on one thing at a time instead of trying to juggle everything at once.

Works especially well for anything that has a logical order: code, strategy, writing, research.
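The chain is just a loop that feeds each answer into the next step. A sketch (`ask` stands in for any prompt-in, reply-out call; `{previous}` is my own placeholder convention):

```python
def run_pipeline(ask, steps: list[str]) -> str:
    """Run one prompt per stage, feeding each answer into the next
    step's {previous} slot. `ask` is any prompt -> reply function."""
    answer = ""
    for step in steps:
        answer = ask(step.format(previous=answer))
    return answer

steps = [
    "Identify the 3 biggest pain points for engineering managers.",
    "Based on these pain points, write a positioning statement:\n{previous}",
    "Write 5 headlines based on this positioning:\n{previous}",
]
```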
9/ Self-critique prompting

After you get an output, ask the model to attack it.

"What are the 3 weakest parts of what you just wrote?"
"What would a skeptic say is wrong with this argument?"
"What am I missing that would make this fail in production?"

The model will find real problems.

This works because models are better at evaluating than generating on the first pass. You're using that asymmetry on purpose.

Researchers use variants of this (often called self-refinement or self-critique) to stress-test model outputs.

It's the closest thing to a free editor that actually has opinions.
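In code, self-critique is just one more user turn appended to the conversation. A sketch (helper and draft text are illustrative):

```python
CRITIQUE_PROMPTS = [
    "What are the 3 weakest parts of what you just wrote?",
    "What would a skeptic say is wrong with this argument?",
    "What am I missing that would make this fail in production?",
]

def critique_turn(history: list[dict], critique: str) -> list[dict]:
    """Append a critique request to an existing conversation so the
    model attacks its own previous answer."""
    return history + [{"role": "user", "content": critique}]

history = [
    {"role": "user", "content": "Write a migration plan."},
    {"role": "assistant", "content": "(first draft of the plan)"},
]
history = critique_turn(history, CRITIQUE_PROMPTS[0])
```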
10/ Context front-loading

Most people bury the context at the end.

"Write me a cold email. By the way, my product does X, my audience is Y, and the goal is Z."

Flip it.

Lead with full context before the request:

"My product is [X]. My audience is [Y]. Their biggest objection is [Z]. They respond best to [tone/style]. NOW: write me a cold email."

Anthropic's prompting docs are explicit about ordering: put your context and data first and the request last. The ask should land after the model already has everything it needs.

Front-load context. End with the task. The difference in output quality is immediately obvious.
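Front-loading is easy to enforce with a template. A minimal sketch (`front_load` and the field labels are my own; the values below are placeholders):

```python
def front_load(context: dict[str, str], task: str) -> str:
    """Put every fact before the ask, so the request lands after the
    model already has what it needs."""
    facts = "\n".join(f"{label}: {value}" for label, value in context.items())
    return f"{facts}\n\nNOW: {task}"

prompt = front_load(
    {
        "Product": "an uptime monitoring tool",
        "Audience": "solo founders",
        "Biggest objection": "they already use a free alternative",
        "Tone": "direct, no hype",
    },
    "write me a cold email.",
)
```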
None of this is magic.

It's just understanding how these models actually process information and working with that instead of against it.

Most people treat prompting like a search bar.

The ones getting 10x results treat it like briefing a very capable, very literal collaborator.

Learn the model. Then give it what it needs.

That's the whole game in 2026.
If this helped, follow me @ihtesham2005 for more AI breakdowns that actually make sense.
