Nate Berkopec · Jul 15
I'm sold. Agentic coding is the future of web application development. There is no going back. Close the editor. Open Claude.

Your job is now to manage, review, corral and improve a relentless junior dev who is working on 6+ PRs in parallel.
If your mental model of LLM-mechanized coding is just tab-autocomplete in Cursor, you should try closing your editor for a week and working only out of Claude Code with a parallel workflow using git worktrees (many tools now do this). Work on improving the agent if the output is bad.
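As a concrete illustration, a minimal Ruby sketch of that worktree setup might look like this. The branch names and the "myapp" directory are invented for the example.

```ruby
#!/usr/bin/env ruby
# Hypothetical helper: one git worktree per task, so several Claude Code
# sessions can run in parallel without clobbering each other's checkouts.
# Branch names and the "myapp" directory are invented for illustration.
%w[fix-checkout-bug add-audit-log speed-up-reports].each do |task|
  system("git", "worktree", "add", "../myapp-#{task}", "-b", task) ||
    abort("worktree failed: #{task}")
end
# Then run `claude` from each ../myapp-<task> directory in its own terminal.
```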
I'm less sold on agentic coding in the following contexts:

1. Libraries
2. Extremely large open-source codebases with long histories (e.g., the setting of the METR study)
3. Concurrent/distributed systems
4. Probably more I can't think of. It's magical autocomplete, not a panacea.
I can see really high-skill, high-taste shops like 37signals struggling to adopt this, because their very good taste means they are repelled by the slop. Bigger companies and small startups/indies will have an easier time of it, out of necessity.
In an agentic coding workflow, the LLM gets you 80% of the way there, and then has to work side-by-side with you to turn that "80%, mostly slop" output into "95%, ~senior level/sloppy staff level" output. You can tune the feedback loops so that the initial 80% pass gets better.
Don't just "add one more rule, bro"; look to establish deterministic pass/fail loops and let the LLM churn. You don't like the output because the methods are too complicated? Fail the test suite if the ABC score is over 25. Write the tests yourself, disallow edit/write on them, and let it implement.
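A deterministic gate like that might look like the Minitest sketch below, which shells out to RuboCop's Metrics/AbcSize cop (RuboCop does compute ABC scores; the threshold, paths, and deny-rule note are assumptions, not the author's actual setup).

```ruby
# test/complexity_gate_test.rb: a deterministic pass/fail loop sketch.
# Assumes the rubocop gem and a .rubocop.yml containing:
#   Metrics/AbcSize:
#     Max: 25
# To keep the agent from rewriting the gate itself, something like Claude
# Code's permission deny rules (e.g. denying Edit/Write on test/**) can
# lock these files; check the current docs for the exact syntax.
require "minitest/autorun"
require "open3"

class ComplexityGateTest < Minitest::Test
  def test_no_method_exceeds_abc_threshold
    out, status = Open3.capture2("rubocop", "--only", "Metrics/AbcSize", "app/")
    assert status.success?, "ABC score over threshold:\n#{out}"
  end
end
```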
I think a lot of us are going to struggle to adopt this because the job is going to shift from "I can spend all day learning vi keybinds and yapping about OOP design principles" to "I am now a PM for an LLM". A lot of people are going to dislike this.
My advice to you if you feel that way is to find the things LLMs are consistently bad at (see up-thread) and move to specializing in those things. Because they are not bad at gluing together CRUD web apps, and 90% of us are paid to do that.
If you're a builder first and a coder second, like me, you're probably very excited about this! You should be! You have now been given an indefatigable junior developer for $1000/month who will never complain! Go build things!
My final caveat is that all the old productivity metrics are now dead. PRs, commits, and lines of code written all mean nothing. You cannot compare them to what you were doing before. Magical autocomplete can ship 10x the PRs/commits/LOC you did before, and it can be SHIT.
The measure of value is what it always has been: making stuff people like. Measure your productivity and the value of LLMs by how quickly and how often you are doing that.


More from @nateberkopec

Feb 14, 2022
ActiveRecord performance puzzle: "SELECT <long-list-of-columns>" sometimes takes ~10-40x longer than "SELECT *", randomly. Issue is not at the database, where query is consistently <1ms. Issue reproduces in Rails console.
Basically run it 10x in the console and you'll see 1ms, 1ms, 1ms, 40ms, 1ms, 1ms, 40ms, 1ms, 1ms.
More data: it isn't GC, it doesn't appear to be logging, and it does not occur in production
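A rough console reproduction of that pattern (with User standing in for the client's actual model) might look like:

```ruby
# Rails console sketch of the repro: SELECT with an explicit column
# list, timed over ten runs. User is a stand-in for the real model.
10.times do
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  User.select(*User.column_names).load
  puts "#{((Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0) * 1000).round(1)}ms"
end
```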
May 7, 2021
In a production environment, the optimal Puma configuration for 80% of apps is probably 4 workers, 5 threads on a 4vCPU machine with ~8GB of memory.
And yes, in k8s this would be per pod, with a horizontal pod autoscaler running on top of that, using request queue time as the metric.
8GB for 4 workers is probably more memory overhead than you're ever gonna need, but that 1 vCPU : 2GB ratio is the lowest that most cloud vendors sell, so lower memory usage doesn't get you much.
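In puma.rb terms, that baseline is roughly the following sketch (tune to your own app, of course):

```ruby
# config/puma.rb: minimal sketch of the suggested baseline,
# 4 workers x 5 threads on a 4 vCPU / ~8GB machine.
workers 4
threads 5, 5
preload_app!
```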
Mar 12, 2021
I put in a big PR to speed up my client's most important background job by 10-30%. This was really important because it generates 90%+ of their load. Let me walk you through it, commit-by-commit. (1/6)
First, we write some simple scripts to benchmark and profile the code in question. I used Benchmark from stdlib here because running this code once takes about 40 seconds. Note the warmup and use of transactions. github.com/WikiEducationF… (2/6)
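The harness he describes might look roughly like this; the job class and setup are stand-ins, since the real script lives in the linked PR.

```ruby
# Benchmark sketch with a warmup run and a rolled-back transaction, so
# repeated runs see the same data. UpdateCourseStats is a hypothetical
# stand-in for the client's actual job class.
require "benchmark"

course = Course.find(course_id) # the record under test (hypothetical)
job = -> { UpdateCourseStats.new(course).call }
job.call # warmup
ActiveRecord::Base.transaction do
  puts Benchmark.measure(&job)
  raise ActiveRecord::Rollback
end
```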
Next, we fix an N+1. This is an example of what I call an "Enumerable" refactoring for an N+1. Rather than use the DB to do things like sum a column, we do it in Ruby. This is faster here because we were already going to load all the data anyway. github.com/WikiEducationF… (3/6)
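In schematic form (model and column names invented for illustration), that refactoring trades one query per record for a single load plus Ruby-side aggregation:

```ruby
# Before: one SUM query per course (the N+1).
totals = courses.each_with_object({}) do |course, h|
  h[course.id] = course.revisions.sum(:characters)
end

# After: load once, aggregate in Ruby. Faster here because the rows
# were going to be loaded anyway.
revisions = Revision.where(course_id: courses.map(&:id)).to_a
totals = revisions.group_by(&:course_id)
                  .transform_values { |rs| rs.sum(&:characters) }
```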
Apr 12, 2020
None of these qualities is ineffable. We can (and should) quantify them: LOC, dependency LOC, hours spent fixing bugs and dealing with incidents, and server costs. Your customer has requirements for these. You don't need to magically guess them on their behalf: you have to ask.
Craftsperson attitude is about delivering the highest possible software quality regardless of customer requirement. This is wrong. You should deliver what the customer requires: no more and no less.
And of course, you are the expert: so it's your role to help shape these requirements. Customers can be misguided about what scalability they require (usually they overestimate), or what maintenance they'll need to do (usually they underestimate).
