God of Prompt · Oct 19 · 8 tweets · 7 min read
everyone's arguing about whether ChatGPT or Claude is "smarter."

nobody noticed Anthropic just dropped something that makes the model debate irrelevant.

it's called Skills. and it's the first AI feature that actually solves the problem everyone complains about:

"why do I have to explain the same thing to AI every single time?"

here's what's different:

- you know how you've explained your brand guidelines to ChatGPT 47 times?
- or how you keep telling it "structure reports like this" over and over?
- or how every new chat means re-uploading context and re-explaining your process?

Skills ends that cycle.

you teach Claude your workflow once.

it applies it automatically. everywhere. forever.

but the real story isn't memory. it's how this changes what's possible with AI at work.
here's the technical unlock that makes this actually work:

Skills use "progressive disclosure" instead of dumping everything into context.

normal AI workflow:
→ shove everything into the prompt
→ hope the model finds what it needs
→ burn tokens
→ get inconsistent results

Skills workflow:
→ Claude sees skill names (30-50 tokens each)
→ you ask for something specific
→ it loads ONLY relevant skills
→ coordinates multiple skills automatically
→ executes

example: you ask for a quarterly investor deck

Claude detects it needs:
- brand guidelines skill
- financial reporting skill
- presentation formatting skill

loads all three. coordinates them. outputs a deck that's on-brand, accurate, and properly formatted.

you didn't specify which skills to use.
you didn't explain how they work together.
Claude figured it out.

this is why it scales where prompting doesn't.
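here's a toy sketch of that two-level flow in Python. everything in it (skill names, keywords, the matching rule) is made up for illustration; in the real feature Claude itself decides which skills to load:

```python
# toy sketch of progressive disclosure (illustrative only, not Anthropic's code):
# level 1 exposes cheap per-skill metadata; level 2 loads a skill's full
# instructions only when the request actually needs it.

SKILLS = {
    "brand_guidelines": {
        "description": "apply brand colors, fonts, and logo rules",
        "keywords": ["brand", "deck", "logo"],
        "body_path": "Brand_Guidelines/SKILL.md",   # full body, loaded lazily
    },
    "financial_reporting": {
        "description": "format quarterly financials and metrics",
        "keywords": ["quarterly", "financial", "investor"],
        "body_path": "Financial_Reporting/SKILL.md",
    },
}

def metadata_view(skills):
    # what the model sees up front: roughly 30-50 tokens per skill
    return {name: s["description"] for name, s in skills.items()}

def select_skills(skills, request):
    # crude keyword match standing in for the model's own judgment
    req = request.lower()
    return [name for name, s in skills.items()
            if any(k in req for k in s["keywords"])]

print(select_skills(SKILLS, "create a quarterly investor deck"))
# → ['brand_guidelines', 'financial_reporting']
```

the point of the sketch: the full skill bodies are never touched until a request matches, which is why a library of dozens of skills costs almost nothing per conversation.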
let me show you what this looks like in real workflows.

Scenario 1: Brand-Consistent Content (Marketing Team)

❌ old way:
- designer makes deck
- brand team reviews: "wrong fonts, logo placement off, colors don't match"
- designer fixes
- brand team reviews again: "footer format is wrong"
- 3 rounds, 4 hours wasted

✅ Skills way:
create "Brand_Guidelines" skill with:

• color codes (#FF6B35 coral, #004E89 navy)
• font rules (Montserrat headers, Open Sans body)
• logo placement rules (0.5" minimum spacing)
• template files

prompt: "create 10-slide deck for Q4 product launch"

- Claude auto-applies brand skill
- output matches guidelines first try
- 30 seconds instead of 4 hours

Rakuten (Japanese e-commerce giant) is already doing this.

finance workflows that took a full day? now 1 hour.
Scenario 2: Sales Workflow Automation (Revenue Team)

the repetitive nightmare:
- new lead comes in
- manually create CRM contact
- fill in 12 fields following "the naming convention"
- update opportunity stage
- log activity notes in specific format
- set follow-up reminder
- 8 minutes per lead × 30 leads/week = 4 hours gone

Skills implementation:
create "CRM_Automation" skill that knows:
- your naming conventions (FirstName_LastName_Company format)
- required fields and validation rules
- opportunity stages and when to use them
- note formatting structure
- follow-up timing rules

now: paste lead info → Claude structures everything correctly → done

time per lead: 30 seconds
weekly savings: 3.75 hours
monthly savings: 15 hours (almost 2 full workdays)

at $50/hour, that's $750/month saved per sales rep.
team of 10 reps? $90k/year in recovered time.

youtu.be/kS1MJFZWMq4
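the conventions such a skill encodes are mechanical enough to sketch in Python (hypothetical field names and note format; no real CRM API is called here):

```python
# sketch of the rules a "CRM_Automation" skill would encode
# (hypothetical fields and formats, for illustration only)

REQUIRED_FIELDS = ["first_name", "last_name", "company", "email"]

def validate(lead: dict) -> dict:
    # required-field check before anything touches the CRM
    missing = [f for f in REQUIRED_FIELDS if not lead.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return lead

def contact_name(lead: dict) -> str:
    # the naming convention from above: FirstName_LastName_Company
    return f"{lead['first_name']}_{lead['last_name']}_{lead['company']}"

def activity_note(lead: dict, summary: str) -> str:
    # one fixed note format for every log entry
    return f"[LEAD] {contact_name(lead)} | {summary} | follow-up: 3 days"

lead = {"first_name": "Ada", "last_name": "Lovelace",
        "company": "Acme", "email": "ada@acme.com"}
print(activity_note(validate(lead), "inbound demo request"))
# → [LEAD] Ada_Lovelace_Acme | inbound demo request | follow-up: 3 days
```

the skill's job is to hold rules like these so nobody re-explains them per lead.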
Scenario 3: Legal Contract Review (In-House Counsel)

the manual process:
- receive vendor contract
- review against standard terms checklist (24 items)
- identify deviations and risks
- draft redline suggestions
- write internal memo
- 45-60 minutes per contract

Skills setup:
create "Contract_Review" skill containing:
- your standard terms library
- risk classification framework
- approved clause variations
- redline language templates
- memo format structure

execution:
upload contract PDF
prompt: "review this against our standard terms"

Claude outputs:
• flagged risky clauses with severity ratings
• suggested protective language
• formatted redline document
• internal memo for stakeholders

time: 8 minutes instead of 60 minutes

for teams reviewing 50+ contracts/month:
→ saves 43 hours monthly
→ $8,600/month at $200/hour legal rates
→ $103k annually

that's a junior attorney's salary in recovered partner time.
ok but how do you actually BUILD one?

for one, you can simply prompt Claude: "create a [name of skill] skill, ask me all the necessary questions for context."

you can also create it manually. here's the exact structure of a SKILL.md file:

1/ YAML Frontmatter (metadata):
---
name: Brand Guidelines
description: Apply Acme Corp brand guidelines to presentations and documents
version: 1.0.0
---

this is what Claude reads first to decide IF it should load your skill.

keep description specific (200 char max) or Claude won't know when to use it.
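that first-pass read is cheap because only the frontmatter between the `---` markers gets parsed. a minimal sketch (hand-rolled parser for illustration; real code should use a proper YAML library):

```python
# minimal illustration of the two-level read: parse only the YAML
# frontmatter, never the body, when deciding whether to load a skill

def read_frontmatter(skill_md: str) -> dict:
    lines = skill_md.splitlines()
    assert lines[0] == "---", "SKILL.md must start with YAML frontmatter"
    meta = {}
    for line in lines[1:]:
        if line == "---":          # end of frontmatter; stop before the body
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

doc = """---
name: Brand Guidelines
description: Apply Acme Corp brand guidelines to presentations and documents
version: 1.0.0
---
## Overview
...the body, which is only read if the skill is actually loaded...
"""
print(read_frontmatter(doc)["name"])  # → Brand Guidelines
```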

2/ Markdown Body (the actual instructions)

---
## Overview
This Skill provides Acme Corp's official brand guidelines.
Apply these standards to ensure all outputs match our visual identity.

## Brand Colors
- Primary: #FF6B35 (Coral)
- Secondary: #004E89 (Navy Blue)
- Accent: #F7B801 (Gold)

## Typography
Headers: Montserrat Bold
Body text: Open Sans Regular
Size guidelines:
- H1: 32pt
- H2: 24pt
- Body: 11pt

## Logo Usage
Always use full-color logo on light backgrounds.
White logo on dark backgrounds.
Minimum spacing: 0.5 inches around logo.

## When to Apply
Apply these guidelines when creating:
- PowerPoint presentations
- Word documents for external sharing
- Marketing materials

## Resources
See resources/ folder for logo files and fonts.
---

the markdown body is where Claude gets the DETAILS.
it only reads this after deciding the skill is relevant.

this two-level system (metadata → full content) is why Skills scale without burning tokens.
now package it correctly (this trips everyone up):

Step 1: Create folder structure

---
Brand_Guidelines/
├── SKILL.md (contains the YAML + markdown body above)
└── resources/
    ├── logo.png
    └── fonts/
---

Step 2: ZIP it properly
✅ CORRECT structure:
---
Brand_Guidelines.zip
└── Brand_Guidelines/
    ├── SKILL.md
    └── resources/
---

❌ WRONG structure:
---
Brand_Guidelines.zip
├── SKILL.md (loose in root)
└── resources/
---

the FOLDER must be inside the zip, not files directly.

Mac: right-click folder → "Compress"
Windows: right-click folder → "Send to" → "Compressed folder"
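the zipping step can also be scripted. a stdlib-only Python sketch (hypothetical skill name) that puts the folder, not loose files, at the zip root:

```python
# package a skill folder so the FOLDER itself is the zip root
# (uses only the Python standard library)
import shutil
import tempfile
import zipfile
from pathlib import Path

def package_skill(skill_dir: Path) -> Path:
    # root_dir/base_dir make the archive paths start with the folder name,
    # which is the "correct" structure described above
    archive = shutil.make_archive(
        str(skill_dir), "zip",
        root_dir=skill_dir.parent, base_dir=skill_dir.name,
    )
    return Path(archive)

# demo under a temp dir with a hypothetical skill
with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "Brand_Guidelines"
    (skill / "resources").mkdir(parents=True)
    (skill / "SKILL.md").write_text("---\nname: Brand Guidelines\n---\n")
    zip_path = package_skill(skill)
    with zipfile.ZipFile(zip_path) as z:
        names = z.namelist()

print(names)  # every entry starts with "Brand_Guidelines/"
```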

Step 3: Upload to Claude
Settings → Capabilities → enable "Code execution"
upload your .zip under Skills

test with: "create a presentation following brand guidelines"

pro tip: use the "skill-creator" skill. just say "help me create a brand guidelines skill" and Claude interviews you, generates the folder structure, and formats everything automatically.

the companies dominating with AI aren't using better prompts.

they're building systems that codify how they work.

explore real examples you can clone: github.com/anthropics/ski…
Claude made simple: grab my free guide

→ Learn fast with mini-course
→ 10+ prompts included
→ Practical use cases

Start here ↓
godofprompt.ai/claude-mastery…

