Ruben Hassid · Oct 8
I bet 99% of people who use ChatGPT don't know how to set it up to make it 10x more useful.

They obsess over prompts, but prompts are only 20% of the equation.

Setup is 80%.

In this thread, I'll show you how to (actually) set up your ChatGPT:
→ First, take a look at how much ChatGPT knows about you.

Then, delete everything.

Go to Settings > Personalization > Manage Memories > Delete All Memories.

Most users have 6+ months of random, contradictory memories that actively hurt performance.
→ The 13-Question Framework.

Use this prompt with GPT-5: "Design my ChatGPT digital twin. Ask 13 items one at a time:

Identity, Role, Audience, Outputs, Personality, Voice, Formatting, Values, Projects, Goals, Preferences, Do-Not List, Privacy.

Target <1300 words total."
Why 13 questions?

Each captures a critical dimension: who you are (identity), what you do (role), who you serve (audience), what you create (outputs), how you think (personality).

Also how you communicate (voice), what matters (values), what you're building (goals).
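If you'd rather run this interview outside the ChatGPT app, here's a minimal sketch using the OpenAI Python SDK. The model name "gpt-5", the turn-by-turn loop, and the terminal input are my assumptions; the thread itself works entirely inside the ChatGPT interface.

# Minimal sketch: run the 13-question digital-twin interview via the
# OpenAI Python SDK. Model name "gpt-5" and the loop shape are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DIMENSIONS = [
    "Identity", "Role", "Audience", "Outputs", "Personality",
    "Voice", "Formatting", "Values", "Projects", "Goals",
    "Preferences", "Do-Not List", "Privacy",
]

history = [{"role": "system", "content": (
    "Design my ChatGPT digital twin. Ask 13 items one at a time: "
    + ", ".join(DIMENSIONS) + ". Target <1300 words total."
)}]

for dimension in DIMENSIONS:
    history.append({"role": "user", "content": f"Next item: {dimension}."})
    reply = client.chat.completions.create(model="gpt-5", messages=history)
    question = reply.choices[0].message.content
    history.append({"role": "assistant", "content": question})
    # answer each question in the terminal; the answer joins the context
    history.append({"role": "user", "content": input(f"{question}\n> ")})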
Remember, ChatGPT memory snippets max out at 350 characters each.

→ After answering the 13 questions, use this prompt:

"Create 13 snippets I can copy/paste to update memory. Each is 350 characters max. Cover everything without redundancy."
→ Then: "Maximize each snippet to the character limit."

This forces ChatGPT to compress your identity efficiently.

Every character counts toward building your digital twin.
→ Installation Method

Open GPT-5, then copy/paste this 13 times, once per snippet:

"UPDATE YOUR MEMORY WITH THE FOLLOWING SNIPPET, EXACTLY THESE WORDS: [snippet]".

This builds your digital twin systematically instead of randomly.
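If you kept the snippets in the same hypothetical snippets.txt, a short sketch can wrap each one in the thread's exact installation phrase, leaving you 13 paste-ready messages:

# Wrap each snippet in the installation phrase from the thread.
from pathlib import Path

TEMPLATE = "UPDATE YOUR MEMORY WITH THE FOLLOWING SNIPPET, EXACTLY THESE WORDS: {}"

snippets = Path("snippets.txt").read_text().splitlines()
assert len(snippets) == 13, f"expected 13 snippets, found {len(snippets)}"

for snippet in snippets:
    print(TEMPLATE.format(snippet))
    print("-" * 40)  # separator: paste one message per chat turn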
Once ChatGPT knows you well, it becomes biased toward your patterns.

Temporary chats = raw model without customization.

Use them when you need an unbiased perspective, want to test a new approach, or want ChatGPT to challenge rather than affirm your thinking.
→ One-Shot Prompting Architecture

This is the 3-block structure that eliminates back-and-forth and cuts conversation time by 60% while improving quality:
Real power users spend 90% of their time on setup, 10% on prompts.

Back-and-forth conversations are a sign of poor setup, not sophisticated use.

Your goal is to provide Role + Example + Request in ONE message.

If you're iterating multiple times, your memory setup is incomplete.
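One way to make the single-message habit stick is a reusable template. The labels below are my own names for the thread's Role + Example + Request blocks, and the sample values are invented:

# Illustrative one-shot template: Role + Example + Request in one message.
ONE_SHOT = """\
ROLE: You are {role}.

EXAMPLE of the output I want:
{example}

REQUEST: {request}
Match the example's format. Reply in a single message."""

print(ONE_SHOT.format(
    role="a B2B newsletter editor who writes tight, punchy intros",
    example="Subject: The 5-minute audit that doubled our reply rate",
    request="Write a subject line and a 3-sentence intro on cold-email deliverability.",
))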
Use a temporary chat when you don't want ChatGPT to remember anything, so context doesn't bleed into future responses.

Perfect for one-off questions where past context would muddy the answer.
→ When Does Few-Shot Beat One-Shot?

Rare cases: varied formats, nuanced categories, high-risk tasks.

Rule of thumb: Zero-shot = speed. One-shot = clarity + single format (80% of cases). Few-shot = complexity + variety (20% of cases).

Most people overuse few-shot and waste time.
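For those rarer few-shot cases, the pattern is simply several labeled examples in one message, so the model can infer a category scheme that a single example couldn't pin down. A hypothetical ticket-triage prompt:

# Few-shot sketch: the tickets and labels are invented for illustration.
FEW_SHOT = """\
Classify each support ticket as BUG, BILLING, or FEATURE-REQUEST.

Ticket: "The export button crashes the app." -> BUG
Ticket: "I was charged twice this month." -> BILLING
Ticket: "Please add dark mode." -> FEATURE-REQUEST

Ticket: "My invoice shows the old plan price." ->"""

print(FEW_SHOT)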
Your ChatGPT setup determines 80% of output quality. Your individual prompts determine 20%.

Everyone teaches prompts.

Almost nobody teaches setup.

This is why most people get mediocre results despite "knowing AI": they're optimizing the wrong 20%.
After proper setup, lawyers use ChatGPT to draft contracts in their exact style.

Writers use it to capture trends and write briefs.

Researchers use it for deep search with context.

The difference? ChatGPT knows WHO they are before starting WHAT they want.
You deserve an assistant who understands your goals, voice, and preferences.

Setup first. Prompts second. Always.

Follow me @RubenHssd for more content like this in the future.

