Louis Gleeson
Founder of Sentient (25+ million follower network)

Aug 14, 17 tweets

The most in-demand skill right now isn’t coding.

It’s prompting.

One line of text can now get you:

• code
• designs
• strategies
• full apps

Here’s how to write prompts that actually work:

You’re going to learn:

• What great prompts look like
• How to structure them for better output
• 10+ expert techniques that boost accuracy, logic & creativity

Whether you're a beginner or a pro, this will level you up.

1. Beginner: Zero-Shot Prompting

Give the model a clear, specific instruction.

✅ "Summarize this article in 3 bullet points."
❌ "What do you think about this?"

Clarity > Creativity at this stage.
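A zero-shot prompt is just one specific instruction plus the input. A minimal sketch (the function name is illustrative, not a real API):

```python
def zero_shot_prompt(instruction: str, text: str) -> str:
    """Pair one clear, specific instruction with the input text."""
    return f"{instruction}\n\nText:\n{text}"

prompt = zero_shot_prompt(
    "Summarize this article in 3 bullet points.",
    "LLMs follow clear instructions better than vague ones...",
)
print(prompt)
```

The instruction always comes first, so the model knows the task before it sees the input.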

2. Beginner: Few-Shot Prompting

Show it examples. Like teaching by demonstration.

Prompt:
Q: What’s 5+5?
A: 10
Q: What’s 9+3?
A: 12
Q: What’s 7+2?
A: ?

This works because LLMs are pattern matchers.
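The demos above can be assembled mechanically. A sketch (pure string-building, no model call):

```python
def few_shot_prompt(examples: list[tuple[str, str]], question: str) -> str:
    """Lay out Q/A demos, then leave the final answer blank for the model."""
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\nQ: {question}\nA:"

prompt = few_shot_prompt(
    [("What's 5+5?", "10"), ("What's 9+3?", "12")],
    "What's 7+2?",
)
print(prompt)
```

The trailing `A:` is the point: the examples establish the pattern, and the model continues it.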

3. Intermediate: Chain-of-Thought (CoT)

Make the model "think step-by-step."

This boosts reasoning dramatically.

Instead of:

"What's 13 * 17?"

Try:

"Let’s solve this step by step."

It will explain its thinking before answering.
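The cue phrase can be appended mechanically to any question. A tiny sketch:

```python
COT_CUE = "Let's solve this step by step."

def with_cot(question: str) -> str:
    """Append the chain-of-thought cue to a question."""
    return f"{question}\n{COT_CUE}"

print(with_cot("What's 13 * 17?"))
```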

4. Intermediate: Auto-CoT

Don't want to write examples yourself?

Auto-CoT does it for you.

Prompt the model to build its own demos:

Run "Let's think step by step." on a few sample questions, then reuse those worked answers as few-shot examples.

Now you’ve got scalable reasoning with less effort.

5. Intermediate: Self-Consistency

Ask the model the same question multiple times.

Then pick the most common answer.

Why?

Because LLM outputs vary from run to run, and the most repeated answer is often the most reliable.

Ensemble thinking, but faster.
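The vote itself is simple. A sketch where `answers` stands in for several runs of the same prompt at temperature > 0:

```python
from collections import Counter

def self_consistent(samples: list[str]) -> str:
    """Return the most common answer across sampled runs."""
    return Counter(samples).most_common(1)[0][0]

# Five sampled runs of "What's 13 * 17?"; the majority answer wins.
answers = ["221", "221", "217", "221", "231"]
print(self_consistent(answers))  # → 221
```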

6. Advanced: Tree-of-Thoughts (ToT)

Don’t stop at one reasoning path.

Explore many, like a decision tree.

The model proposes, tests, and chooses from its ideas.

It's how ToT-prompted models work through riddles, puzzles, and strategy games.
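A toy version of the search loop: expand several partial "thoughts" per step, score them, keep the best few, repeat. The `expand` and `score` functions are stand-ins for model calls:

```python
def tree_of_thoughts(root, expand, score, depth=2, beam=2):
    """Beam-search over thoughts: expand, score, keep the top few."""
    frontier = [root]
    for _ in range(depth):
        candidates = [t for node in frontier for t in expand(node)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

# Toy problem: build the largest number by appending digits.
best = tree_of_thoughts(
    "",
    expand=lambda s: [s + d for d in "123"],
    score=lambda s: int(s or 0),
)
print(best)  # → 33
```

Real ToT prompting does the same thing in natural language: the model proposes candidate thoughts, rates them, and only the promising branches get expanded.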

7. Advanced: Graph-of-Thoughts (GoT)

Human thought isn’t linear.

So why force your prompts to be?

GoT lets LLMs combine, backtrack, and remix ideas.
Think of it like brainstorming with memory.

Great for creativity, planning, design.

8. Advanced: Self-Refine

Prompt → Output → Self-Critique → Improved Output

Let the model fix itself.

Prompt:

"Write a tweet. Now critique it. Now rewrite it based on your feedback."

This loop improves clarity, tone, and logic.
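The loop is the same whatever the task. A sketch where the critique and rewrite functions are toy stand-ins for the model calls that would normally do each step:

```python
def self_refine(draft, critique, rewrite, rounds=2):
    """Generate → critique → rewrite, until the critic has nothing to say."""
    for _ in range(rounds):
        feedback = critique(draft)
        if not feedback:  # nothing left to fix
            break
        draft = rewrite(draft, feedback)
    return draft

# Toy example: the "critic" flags ALL-CAPS shouting, the "rewriter" fixes it.
result = self_refine(
    "BUY NOW!!! this changes everything",
    critique=lambda t: "too shouty" if t != t.lower() else "",
    rewrite=lambda t, fb: t.lower(),
)
print(result)  # → buy now!!! this changes everything
```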

9. Expert: Chain-of-Code (CoC)

Want precision? Ask the model to reason in pseudocode or actual code.

Why?

Code forces structure and logic.
It reduces fluff, boosts accuracy.

Example:

"Write code to solve this step by step..."

10. Expert: Logic-of-Thought (LoT)

Inject formal logic.

Prompt the model to identify, verify, and reason using rules like:

If A implies B, and A is true, then B must be true.

Perfect for law, ethics, science, structured thinking.
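The rule above (modus ponens) is mechanical enough to write in code. A real LoT prompt would ask the model to extract rules like these from text and then apply them; this sketch just applies one:

```python
def modus_ponens(implications, facts):
    """If A implies B and A holds, conclude B; repeat until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in facts and b not in facts:
                facts.add(b)
                changed = True
    return facts

rules = [("it rains", "ground is wet"), ("ground is wet", "shoes get muddy")]
print(modus_ponens(rules, {"it rains"}))
# → {'it rains', 'ground is wet', 'shoes get muddy'}
```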

Bonus: Reduce Hallucination

Hallucinations happen when models make stuff up.

Fix it with:

• Retrieval-Augmented Generation (RAG)
• ReAct (reason + act)
• Chain-of-Verification

Don’t just ask questions.

Ask it to check its own answers.
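Chain-of-Verification in miniature: draft claims, then check each one independently and drop what fails. The fact table and check function here are toy stand-ins for the verification model calls:

```python
def verify(claims, check):
    """Keep only the claims that pass an independent check."""
    return [c for c in claims if check(c)]

# Toy "ground truth" standing in for an independent verification step.
facts = {
    "Paris is the capital of France": True,
    "The Seine flows through Berlin": False,
}

draft = list(facts)  # the hallucination-prone first draft
checked = verify(draft, lambda c: facts.get(c, False))
print(checked)  # only the supported claim survives
```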

Bonus: Emotional Intelligence

Add tone.
Frame prompts to reflect intent.

"Give a calm explanation…"
"Explain like you're talking to a 10-year-old."
"Use a confident tone."

Prompt tone = output tone.

Prompt Writing is UX

You’re not talking to a robot.
You’re designing how the AI thinks.

Prompting is a language.

Master it, and you control the conversation.

The AI prompt library your competitors don't want you to find

→ Unlimited prompts: $15/month
→ Starter pack: $3.99/month
→ Pro bundle: $9.99/month

Grab it before it's gone ↓
godofprompt.ai/pricing

That's a wrap:

I hope you've found this thread helpful.

Follow me @godofprompt for more.

Like/Repost the quote below if you can:
