This is one of the most insane things Nano Banana Pro 🍌 can do.
It can reproduce figures with mind-blowing precision.
No competition in this regard!
Prompt: "Please reproduce this chart in high quality and fidelity and offer annotated labels to better understand it."
When I tried this for the first time, I didn't expect it to be possible.
What's most remarkable is the level of understanding this requires.
The level of personalization this unlocks is also impressive.
"Can you convert it into a cartoonish version?"
Just look at this 🤯
"Can you create a delightful cartoonish version of this table. And please put cute colors and icons along with interesting annotations to make it more readable."
"Bring this figure to life by creating a detailed graphic that helps understand its inner workings. Use Leonardo Davinci sketch style."
More creative applications.
"These equations are scary. Can you please create a detailed infographic breaking it down and explaining in layman's terms what's happening and most importantly what it solves or does?"
Great for creating posters.
I see a lot of potential in the Skills feature that Anthropic just dropped!
Just tested it with Claude Code. It leads to sharper, more precise outputs.
It's structured context engineering: it powers CC with specialized capabilities by leveraging the filesystem.
I think it might be one of the best ways to really tap into the full potential of Claude Code.
Tune instructions, output formats, use of scripts, tools (MCP or otherwise), and more.
For specialized tasks, CC outputs dumb stuff at times; the idea here is to scope CC on demand.
An easy way to try Skills in Claude Code is by asking it to help you build one. I am surprised by how aware it is of Skills and how to build comprehensive ones.
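For reference, a Skill is just a folder containing a SKILL.md file: YAML frontmatter that tells Claude Code when to load it, followed by instructions. Here's a minimal sketch of one that would live at .claude/skills/sql-review/SKILL.md; the skill itself (sql-review) is a made-up example, not an official one:

```markdown
---
name: sql-review
description: Review SQL migrations for safety and performance. Use when the user asks to check or review a migration file.
---

# SQL Review

When reviewing a migration:
1. Flag missing indexes on new foreign keys.
2. Warn about locking operations (e.g., ALTER COLUMN TYPE) on large tables.
3. End with a safe rollout order and a rollback snippet.
```

Claude Code reads only the frontmatter up front and pulls in the full instructions (and any bundled scripts) when a task matches, which is what keeps the context lean.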
This paper is one of the best deep dives yet on how reinforcement learning (RL) actually scales for LLMs.
The team ran over 400,000 GPU hours of experiments to find a predictable scaling pattern and a stable recipe (ScaleRL) that consistently works as you scale up compute.
Think of it as a practical guide for anyone trying to train reasoning or alignment models with RL.
More on why this is a big deal:
1. The big insight: RL progress follows a predictable curve.
When you plot model performance vs compute, the growth isn’t random; it follows a sigmoid (S-shaped) curve.
The curve has three simple knobs:
A = the best performance you’ll ever reach,
B = how efficiently you reach it,
C_mid = how much compute it takes to hit the halfway point.
The amazing part: you can fit this curve using just small runs and accurately predict how a 100k GPU-hour run will behave.
So you no longer need to guess; you can forecast where your RL setup will top out before burning compute.
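To make that concrete, here's a minimal sketch of the fit-and-forecast idea in Python. The exact functional form is my simplification of the paper's sigmoid (it ignores the base model's starting reward, for one), and the data points are made-up placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def rl_scaling_curve(C, A, B, C_mid):
    """Sigmoidal compute-vs-performance curve: A is the asymptote,
    B the efficiency exponent, C_mid the compute at the halfway point."""
    return A / (1.0 + (C_mid / C) ** B)

# Made-up pass rates from small runs (illustrative only)
compute = np.array([100, 300, 1_000, 3_000, 10_000])  # GPU-hours
reward = np.array([0.22, 0.35, 0.48, 0.57, 0.62])     # eval pass rate

(A, B, C_mid), _ = curve_fit(rl_scaling_curve, compute, reward, p0=[0.7, 1.0, 1_000.0])

# Forecast where a 100k GPU-hour run would land before spending the compute
print(f"A={A:.2f}, B={B:.2f}, C_mid={C_mid:.0f}")
print(f"predicted reward at 100k GPU-hours: {rl_scaling_curve(1e5, A, B, C_mid):.2f}")
```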
2. The ScaleRL recipe that just works.
The authors tested dozens of RL variations and found one that scales cleanly to 100k GPU hours without blowing up:
- PipelineRL (8 pipelines) with CISPO loss (a stabilized REINFORCE variant; rough sketch after this list).
- Prompt-level averaging and batch-level normalization to reduce variance.
- FP32 logits for better stability and higher final accuracy.
- No-Positive-Resampling curriculum to avoid reward hacking.
- Forced interruptions (stopping long thoughts) instead of punishing long completions.
This combo, called ScaleRL, hit the best trade-off between stability, sample efficiency, and asymptotic performance.
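For a feel of the CISPO piece, here's my rough PyTorch reading of it; the clip threshold and the exact normalization are assumptions on my part, not the paper's code:

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, clip_max=4.0):
    """Sketch of a CISPO-style loss: clip and detach the importance-sampling
    ratio so it acts as a fixed weight on a REINFORCE term, instead of
    zeroing out gradients for clipped tokens as PPO-style losses do."""
    ratio = torch.exp(logp_new - logp_old)            # current vs. behavior policy
    weight = torch.clamp(ratio, max=clip_max).detach()
    return -(weight * advantages * logp_new).mean()   # maximize weighted return
```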
I just asked Claude Code (with Claude Sonnet 4.5) to develop an MCP Server (end-to-end) that allows me to programmatically create n8n workflows from within Claude Code itself.
Took about 10 mins!
You can now create n8n workflows with pure natural language from Claude Code.
This is one of the top requests in our academy: how to automate the creation of n8n workflows.
It turns out that this is a great use case for MCP.
I've already created a huge repository of n8n agentic workflows, which I can now feed directly to Claude Code to scale up workflow creation.
It can even create/optimize prompts and all that good stuff. Automating context engineering is next, which Claude Code is really good at, too.
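I won't paste the generated server here, but the core shape is simple. Here's a minimal sketch using the official Python MCP SDK's FastMCP and n8n's public REST API; the env var names and tool signature are my own, not what Claude Code generated:

```python
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("n8n-builder")

N8N_URL = os.environ["N8N_URL"]      # e.g., your n8n instance base URL
N8N_KEY = os.environ["N8N_API_KEY"]  # API key from n8n settings

@mcp.tool()
def create_workflow(name: str, nodes: list[dict], connections: dict) -> dict:
    """Create an n8n workflow from a node graph the model assembled
    out of a natural-language description."""
    resp = httpx.post(
        f"{N8N_URL}/api/v1/workflows",
        headers={"X-N8N-API-KEY": N8N_KEY},
        json={"name": name, "nodes": nodes, "connections": connections, "settings": {}},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # stdio transport, so Claude Code can launch it directly
```

Register it with `claude mcp add`, and Claude Code can take a plain-English description, translate it into n8n's node JSON, and push it live in one step.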