🚨BREAKING: I finally understand how LLMs actually work.
And it’s why most prompts fail.
Here are 10 techniques that turned my results upside down 👇
You'll find out:
- What makes a prompt great
- How to organize them for better results
- 10 techniques (plus extras) to improve accuracy, reasoning, and creativity
Whether you're just starting or already skilled, this will help you improve.
1. Beginner: Zero-Shot Prompting
Give the model a clear and specific instruction.
- "Summarize this article in 3 bullet points."
- Avoid vague questions like "What do you think about this?"
Being clear is more important than being creative at this stage.
2. Beginner: Few-Shot Prompting
Give examples, like showing how something is done.
Example:
Question: What’s 5+5?
Answer: 10
Question: What’s 9+3?
Answer: 12
Question: What’s 7+2?
Answer: ?
This method works because large language models recognize patterns.
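Under the hood, the pattern above is just string assembly. A minimal sketch (the helper name and example pairs are placeholders, not a real library):

```python
# Build a few-shot prompt from (question, answer) example pairs.
def build_few_shot_prompt(examples, question):
    lines = []
    for q, a in examples:
        lines.append(f"Question: {q}")
        lines.append(f"Answer: {a}")
    lines.append(f"Question: {question}")
    lines.append("Answer:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("What's 5+5?", "10"), ("What's 9+3?", "12")],
    "What's 7+2?",
)
print(prompt)
```

You send the whole string as one prompt; the trailing "Answer:" invites the model to continue the pattern.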
3. Intermediate: Chain-of-Thought (CoT)
Make the model think through each step.
This greatly improves reasoning.
Instead of just asking:
"What's 13 * 17?"
Try saying:
"Let’s solve this step by step."
The model will explain its thinking process before giving an answer.
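In code, the whole trick is appending the cue to your question. The exact wording below is a common convention, not an official API:

```python
# Wrap any question with a chain-of-thought cue.
def with_cot(question):
    return f"{question}\nLet's solve this step by step."

print(with_cot("What's 13 * 17?"))
```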
4. Intermediate: Auto-CoT
Don't feel like writing examples on your own?
Auto-CoT can handle it for you.
Just prompt the model to make its own demos:
"Here are a few examples. Let's think step by step."
Now you can have scalable reasoning with less work.
5. Intermediate: Self-Consistency
Ask the model the same question several times.
Then choose the answer that comes up the most.
Why?
Because model outputs vary from run to run, and the answer that appears most often is usually the most reliable.
It's like team thinking, but quicker.
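The voting step is a few lines of Python. Here a stubbed model stands in for a real (non-deterministic) API call; both names are hypothetical:

```python
from collections import Counter

def self_consistent_answer(sample_fn, prompt, n=5):
    """Call the model n times and return the majority answer."""
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for a real LLM call that sometimes slips up:
_samples = iter(["221", "211", "221", "221", "217"])
def fake_model(prompt):
    return next(_samples)

answer = self_consistent_answer(fake_model, "What's 13 * 17?")
print(answer)  # "221" wins 3 of 5 votes
```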
6. Advanced: Tree-of-Thoughts (ToT)
Don't just stick to one idea.
Explore different options, like a decision tree.
The model suggests, tests, and picks the best ideas.
This is how models like GPT-4 can tackle riddles, puzzles, and strategy games.
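A toy beam-search sketch of the idea. The problem here (build the largest number digit by digit) is made up; `expand` and `score` stand in for the model's "suggest" and "test" steps:

```python
# Tree-of-Thoughts as beam search: expand candidates, score them,
# keep the best few branches at each depth.
def tree_of_thoughts(root, expand, score, depth=2, beam=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [child for node in frontier for child in expand(node)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]
    return max(frontier, key=score)

expand = lambda s: [s + d for d in "123"]          # propose next thoughts
score = lambda s: int(s) if s else 0               # evaluate a thought
best = tree_of_thoughts("", expand, score)
print(best)  # "33"
```

In a real setup, both `expand` and `score` would themselves be LLM calls.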
7. Advanced: Graph-of-Thoughts (GoT)
Human thought doesn't always follow a straight path.
So why should your prompts?
GoT allows language models to mix, revisit, and combine ideas, similar to brainstorming with memory.
It's useful for creativity, planning, and design.
8. Advanced: Self-Refine
Start with a prompt, get an output, then critique it, and end with an improved output.
Let the model make its own corrections.
Prompt:
"Write a tweet. Now critique it. Now rewrite it based on your feedback."
This process helps make things clearer, improves tone, and enhances logic.
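The draft → critique → rewrite loop is simple to sketch. A stubbed model below makes it run standalone; a real version would call an LLM at each step:

```python
def self_refine(model, task, rounds=1):
    draft = model(f"Write: {task}")
    for _ in range(rounds):
        critique = model(f"Critique:\n{draft}")
        draft = model(f"Rewrite using this critique:\n{critique}\nText:\n{draft}")
    return draft

# Stand-in for a real LLM call; tags each stage for demonstration.
def fake_model(prompt):
    if prompt.startswith("Write:"):
        return "draft tweet"
    if prompt.startswith("Critique:"):
        return "too vague"
    return "refined tweet"

result = self_refine(fake_model, "a tweet about prompting")
print(result)  # refined tweet
```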
9. Expert: Chain-of-Code (CoC)
Need accuracy? Have the model think in pseudocode or real code.
Why?
Code requires clear structure and logic.
It cuts out unnecessary details and improves accuracy.
Example:
"Write code to solve this one step at a time..."
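For the 13 * 17 example from tip 3, the "code" the model emits might look like this (the decomposition shown is just one possible breakdown):

```python
# Solve 13 * 17 one step at a time, chain-of-code style.
a, b = 13, 17
step1 = a * 10   # 13 * 10 = 130
step2 = a * 7    # 13 * 7  = 91
answer = step1 + step2
print(answer)  # 221
```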
10. Expert: Logic-of-Thought (LoT)
Use formal logic.
Ask the model to find, check, and think through rules like:
If A leads to B, and A is true, then B must also be true.
Great for subjects like law, ethics, science, and organized thinking.
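The modus ponens rule above can be encoded directly. This sketch forward-chains over hypothetical (premise, conclusion) rules until no new facts appear:

```python
# Apply modus ponens repeatedly: if the premise is a known fact,
# add the conclusion as a new fact.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [("A", "B"), ("B", "C")]   # A implies B, B implies C
derived = forward_chain({"A"}, rules)
print(sorted(derived))  # ['A', 'B', 'C']
```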
Extra Tip: Stop Making Things Up
Sometimes models create information that's not real.
You can fix this with:
- Retrieval-Augmented Generation (RAG)
- ReAct (reason, then act)
- Chain-of-Verification
Instead of just asking questions, make sure it checks its own answers.
Extra Tip: Set the Tone
Think about the way you say things.
Make questions match your goal.
"Calmly explain…"
"Talk like you're speaking to a 10-year-old."
"Sound confident."
How you ask affects how it sounds.
I hope you've found this thread helpful.
Follow me @heygurisingh for more.