Ming "Tommy" Tang
Apr 8 · 4 tweets · 1 min read
After Claude Code writes my code, I make it review its own work.

/simplify spawns three AI reviewers in parallel:

one hunts dead code,
one checks naming and structure,
one profiles for performance.

All three run at the same time.
It reads your diff, launches three specialized agents simultaneously, then merges their findings.

It catches unused imports, redundant variables, and overly complex conditionals, and spots where shared logic should be extracted.

Not a linter. An actual code review.
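Claude Code supports custom slash commands defined as markdown prompt files. A sketch of what such a /simplify command file could look like; the path, wording, and reviewer prompts below are my assumptions, not the author's actual file:

```markdown
<!-- Hypothetical .claude/commands/simplify.md -->
Review the current git diff for simplification opportunities.

Launch three subagents in parallel via the Task tool, one per focus:

1. Dead code: unused imports, unreachable branches, redundant variables.
2. Naming and structure: unclear names, overly complex conditionals,
   shared logic that should be extracted into helpers.
3. Performance: wasteful allocations, repeated work, obvious hot spots.

Wait for all three, then merge their findings into one prioritized
list of concrete edits. Report the changes; do not apply them.
```

Saving a file like this makes /simplify available as a slash command in that project; the three numbered prompts are what turn one review pass into three parallel ones.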
AI-generated code works, but it is often verbose.

A single review pass misses things.

Three parallel reviewers miss far less.

I run /simplify after every feature I build with Claude Code.

The code that ships is tighter than what either of us wrote alone.
I hope you've found this post helpful.

Follow me for more.

Subscribe to my FREE newsletter, chatomics, to learn AI and bioinformatics: divingintogeneticsandgenomics.ck.page/profile x.com/433559451/stat…


More from @tangming2005

Apr 7
I built a personal AI assistant on a Mac Mini. Within 48 hours, cheap models had poisoned its memory with fabricated colleagues, fictional file shares, and an imaginary costume party. Here is what I learned.
The setup: OpenClaw as the agent framework, ClawRouter for model routing (Gemini Flash for simple tasks, Claude Sonnet for complex ones), and OpenViking for persistent memory.

All running locally on a $600 Mac Mini. Monthly API cost after optimization: $15-35.
First week: 335 requests over 4 days cost $8.09.

Same requests through Claude Opus would have been $152.83.

That is a 95% savings from routing simple tasks to cheap models. Sounds perfect, right?
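As a sanity check, the savings figure follows directly from the two costs quoted above:

```python
# Cost figures quoted in the thread
cheap_cost = 8.09    # 335 requests over 4 days via routed cheap models
opus_cost = 152.83   # the same requests priced through Claude Opus

savings = 1 - cheap_cost / opus_cost
print(f"{savings:.0%}")  # → 95%
```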
Apr 7
Two developers and 10 AI agents rewrote Claude Code from scratch in Rust. In one night.

The repo hit 50K GitHub stars in 2 hours. It now has 172K. It is called claw-code.
After Anthropic accidentally leaked 512K lines of Claude Code source through a bad npm package, Sigrid Jin built a clean-room rewrite.

Not a copy: new language, new codebase.

48,600 lines of Rust, 40 tool specs, 9 crates. DMCA cannot touch it.
They used AI agents to rebuild an AI coding tool.

Humans gave direction in Discord.

10 AI "claws" coordinated, built, tested, and pushed code autonomously.

Two people wrote almost none of the 48K Rust lines by hand.
Apr 6
1/ Biological data isn’t just messy.
Humans generate it.
And humans make mistakes.
As a bioinformatician, this will be your reality 🧵
2/
Wet lab scientists are not spreadsheets.
They pipette, label, freeze, and extract.
Sometimes in a rush.
Sometimes while tired.
3/
Mislabeling happens.
Cells get mixed.
Samples are swapped.
And your PCA plot?
Suddenly it makes no sense.
Apr 1
Anthropic leaked 512,000 lines of Claude Code source through a misconfigured npm package.

They built a system called Undercover Mode to hide that their engineers use AI on open-source repos.

You cannot script this level of irony.
A security researcher found a source map in the npm package pointing to the full TypeScript source.

Posted it on X at 4AM. 28+ million views.

DMCA takedowns hit 8,100+ GitHub repos.

By then the code was already mirrored on platforms Anthropic cannot touch.
Best part: Sigrid Jin, who the WSJ says burned through 25 billion Claude Code tokens last year, rewrote the core architecture from scratch overnight.

Clean-room Python implementation called claw-code.

Now at 97K+ GitHub stars. DMCA-proof by design. github.com/instructkr/cla…
Mar 29
Claude Code kept editing my .env file. I told it not to in the prompt. It did it anyway two sessions later.

So I set up a PreToolUse hook. Now it physically can't write to .env or config files. Blocked before it even tries.
Hooks live in .claude/settings.json. PreToolUse runs before a tool executes and can deny it.

PostToolUse runs after and can do cleanup.

I have another one that auto-formats code after every file edit. Set once, enforced every session. No discipline required.
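In .claude/settings.json, that kind of guard could look roughly like the fragment below. The matcher pattern and script path are illustrative assumptions, not the author's actual config, and the exact schema may differ across Claude Code versions:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "python3 ~/.claude/hooks/block_env.py"
          }
        ]
      }
    ]
  }
}
```

The hypothetical block_env.py would read the tool call as JSON on stdin, check whether the target path is .env or another protected config file, and exit with a blocking status (exit code 2 in Claude Code's hook convention) to deny the write before it happens.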
Type /hooks to see what's configured in your project. Most Claude Code users don't know this exists.

Anthropic course here: anthropic.skilljar.com/claude-code-in…
Mar 26
I kept running /compact every 20 minutes wondering why my Claude Code sessions filled up so fast.

/context showed me exactly what was eating my tokens. Skills, auto-memory, MCP tools, CLAUDE.md. All mapped out.
Most people hit context limits and just run /compact. That's a band-aid. You free up space but the same stuff fills it right back up.

/context tells you the root cause so you can actually fix it.
Run /context before your next /compact. You might find tools or memory files eating tokens you didn't know about.
