This might be the most important AI paper of the year.
DeepMind showed LLMs can actually reason with explicit rules.
No prompt hacks. No fine-tuning tricks.
Just real, general reasoning.
Let’s break it down:
For years, the line was:
“LLMs can’t really follow rules. They just mimic patterns.”
Turns out… that’s wrong.
This study shows LLMs can actually internalize rules and apply them in totally new situations, much like humans do.
Think of it like teaching someone a card game.
You explain the rules, play a few rounds…
Then hand them a completely different deck.
If they still win, they're not memorizing. They're understanding.
That’s exactly what the researchers tested.
- Gave LLMs made-up rules they’d never seen
- Tested them with brand-new examples
- Measured how well they applied the rules
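Here's roughly what that setup looks like in code. A minimal sketch, where the rule, vocabulary, and arrow format are my own illustrative choices, not the paper's exact design:

```python
import random

def made_up_rule(tokens):
    """A rule no model has seen in training: reverse the words, append 'Z'."""
    return list(reversed(tokens)) + ["Z"]

VOCAB = ["red", "blue", "cat", "dog", "run", "jump", "cold", "warm"]
rng = random.Random(0)

def make_example(length=3):
    tokens = rng.sample(VOCAB, length)
    return tokens, made_up_rule(tokens)

demos = [make_example() for _ in range(3)]       # shown to the model
held_out = [make_example() for _ in range(20)]   # brand-new, never shown
```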
The result? Big models crushed it with 10–30% better accuracy.
And the best part?
The rules didn’t just stick in that one task.
They could be transferred to other problems or even different models.
That’s like teaching one student and suddenly the whole school gets smarter.
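Here's a toy sketch of what cross-model transfer could look like: one model writes the rule down, another gets only that sentence. `model_a` and `model_b` are hypothetical prompt-in, text-out callables; the prompts are illustrative, not the paper's:

```python
def transfer_rule(model_a, model_b, demos, test_input):
    # demos: list of (input_text, output_text) string pairs
    demo_text = "\n".join(f"{x} -> {y}" for x, y in demos)
    rule_text = model_a(
        "State, in one sentence, the rule behind these examples:\n" + demo_text
    )
    # Model B never sees the demos -- only A's written rule.
    # If B now solves fresh inputs, the rule itself transferred.
    return model_b(f"Rule: {rule_text}\nApply it to: {test_input}")
```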
Why this matters:
Rule-following is critical in…
• Law
• Science
• Finance
• Safety systems
If AI can follow explicit rules, it's more than just "creative." It's reliable.
How they did it (simplified):
1. Create brand-new rules
2. Show the model correct examples
3. Throw brand-new, tricky problems at it
4. See if it applies the rules, not just patterns
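Stitching those four steps together with the made-up rule from the earlier sketch (reusing `make_example`), it might look like this. `model` stands for any prompt-in, text-out callable, and the twist of testing on longer inputs than the demos is my illustration of "tricky," not necessarily the paper's:

```python
def run_protocol(model):
    demos = [make_example(length=3) for _ in range(3)]    # step 2: correct examples
    tests = [make_example(length=5) for _ in range(20)]   # step 3: longer than any demo
    demo_text = "\n".join(f"{' '.join(x)} -> {' '.join(y)}" for x, y in demos)
    hits = 0
    for x, y in tests:
        prompt = f"Apply the same rule.\n{demo_text}\n{' '.join(x)} ->"
        hits += model(prompt).strip() == " ".join(y)      # step 4: exact match
    return hits / len(tests)                              # accuracy on unseen cases
```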
One thing was clear: Prompt clarity is king.
When the rule is explained cleanly in the prompt, performance jumps.
Vague, messy instructions? Accuracy tanks.
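To make that concrete, here are two ways of stating the same made-up rule from the sketches above. The exact wording is mine; the thread's claim is that the first style is the one that lifts accuracy:

```python
CLEAR_RULE = (
    "Rule: reverse the order of the input words, then append the word 'Z'. "
    "Output only the transformed words, separated by single spaces."
)
VAGUE_RULE = "Kind of flip things around and then finish it off properly."
```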
And sometimes?
Models nailed it on the first try: no training, just from reading the rules.
That’s like reading chess rules and winning your first game.
But there’s a catch:
✅ Simple, short rules → learned fast
⚠️ Long, tangled rules → harder to master
Complexity still matters.
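For a concrete contrast, here's a "tangled" cousin of the earlier one-step rule, chaining three transformations. Purely illustrative; the point is that rules like this take more examples to pin down:

```python
def tangled_rule(tokens):
    """Three chained steps instead of one: reverse, uppercase, rotate left."""
    reversed_tokens = list(reversed(tokens))
    upper = [t.upper() for t in reversed_tokens]
    return upper[1:] + upper[:1]  # rotate left by one position
```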
Limitations? Yeah, there are a few.
• Mostly clean, synthetic data
• Real-world rules are messy
• Edge cases can trip models up
Still, it’s a big leap for symbolic reasoning in LLMs.
The takeaway:
LLMs aren’t just parrots.
With the right setup, they can:
• Learn explicit rules
• Apply them to new cases
• Share that knowledge
That’s getting close to true reasoning.
Imagine the possibilities:
• AI contract review that actually follows legal clauses
• AI tutors that enforce grammar/math rules
• Game AIs adapting instantly to brand-new mechanics
For builders:
• Clearer prompts = better rules
• Fine-tuning locks them in
• Smaller, focused rule sets work best
Think precision over complexity.
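On the "fine-tuning locks them in" point, here's a hedged sketch of turning rule demos into chat-style training records. The JSON shape mirrors common fine-tuning formats but is illustrative, not any vendor's exact schema:

```python
import json

def to_training_records(rule_text, demos):
    # demos: list of (input_text, output_text) string pairs
    records = []
    for x, y in demos:
        records.append({"messages": [
            {"role": "system", "content": rule_text},
            {"role": "user", "content": x},
            {"role": "assistant", "content": y},
        ]})
    return "\n".join(json.dumps(r) for r in records)  # JSONL, one record per line
```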
If LLMs can learn & transfer rules, the “fancy autocomplete” era is over.
We’re in the age of AI that can be taught like a student.