And before you show me the "tech jobs going down" graph that goes viral every week: know that most sectors have seen a similar "decline in jobs" from their pandemic peak: blog.pragmaticengineer.com/software-engin…
And this is not about denying the impact of GenAI on tech jobs. We will see smaller teams do more (they already are). More demand for "top" software engineers, and most likely less for entry-level and "average" talent.
We don’t know (yet) if we will see an explosion of smaller teams/companies, and if we’ll see a demand surge to take over/maintain “vibe coded” businesses as they start to scale.
Tech people seem far better positioned to start software businesses (and startups) using GenAI tools more heavily. Either to provide GenAI services, or to use it to accelerate building MVPs (eg more PMs starting startups, more devs as well). We’ll likely see more data on all this in the future as well: this is what I mean by a potential explosion of smaller teams/businesses!
Here’s one reason Apple fought tooth and nail to disallow web payments for apps:
Because Apple’s IAP is bad in many ways, and *so many* apps will now move to web-based payments: not mainly because of the 30% Apple fee, but because of how bad IAP is.
Let me give you examples:
1. Refunds
With Apple IAP it’s just not possible for the merchant to do. They cannot issue a full or partial refund. Talk about poor customer support!
2. Group subscriptions. Nonexistent with IAP.
3. Paying using a non-credit-card option. IAP does not allow it.
4. CUSTOMER SUPPORT
In general, with Apple’s IAP this is a nightmare. (After you pay 30% more, mind you!)
You cannot do stuff like “we’re sorry for your trouble, would you like 3 months free or a full refund?”
5. Asking ppl why they cancel. NOPE! Not even after they cancel
Every now and then there's this prediction of when we will see the first one billion dollar company run by one person...
... and I think back to how in 2016 there was this one product inside Uber that had crossed a $1B annual run rate, with a total of one dev allocated to it.
And half a data scientist (part-time).
It was cash.
Funny how headcount games can work inside fast-growing companies, especially when the product is something a founder stated they did NOT want to support (but it turns out to be essential!)
I only have second-hand details here but the story was along the lines of not being able to get official headcount (because when Uber was founded, no cash and no tipping were table stakes).
It only got funding after crossing the $1B milestone.
"We just fired an engineer after ~15 days on the job who lacked basics skills on the job but aced the interview - clearly, using cheat tools.
He admitted to how he did it: he used iAsk, ChatGPT and Interview Coder throughout"
(I personally talked with this person and know them well)
This company hired fully remote without issue for years: this is the first proper shocker they've had.
They are changing their process, of course. In-person interviews will likely be unavoidable, at least in part.
As a first change, they have started to be a lot more vigilant during remote interviews, laying some "traps" that those using AI assistants will fall into.
Just by doing that, they think about 10% of candidates are very visibly using these tools (they simply stop the interview process with them).
I used Windsurf, but this would work just as well with Cursor (and maybe VS Code as well now). Under the hood it's all the same!
When setting up, it took an hour to get it to work, thanks to my local npm + npx being out of date. I updated them and then it worked fine.
The Windsurf MCP interface: I just set up the Postgres one. But again, behind the scenes it's "just" an npm package that you can invoke from the command line as well! Which is the beauty of it.
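For anyone curious what that setup roughly looks like: here's a minimal sketch, assuming the common `mcpServers` JSON config format that Windsurf/Cursor-style editors use and the `@modelcontextprotocol/server-postgres` npm package. The connection string is a placeholder, and the exact config file location varies by editor.

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```

And because it really is just an npm package, you can also invoke it straight from a terminal, e.g. `npx -y @modelcontextprotocol/server-postgres postgresql://localhost:5432/mydb`, no editor needed.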
I'm starting to understand why there are company eng blogs not worth reading.
When doing a deepdive on an interesting company in @Pragmatic_Eng, we do research, talk with engineers, then share the draft back for any minor corrections. Usually it's a "LGTM." But sometimes:
Sometimes the Comms or Brand team gets actively involved, mistakenly assumes they are the editors, and attempts to rewrite the whole thing the way they would usually publish it on eg their own blog.
Every time, it's a disaster to see, but also amusing. Because a good article becomes SO bad. Interesting details removed, branding elements added etc.
(We never allow edits - and if they insist, we simply publish nothing, throwing out our research. This has not happened yet, but one day it might.)
Btw, here are some of the deepdives we did. In most cases, it was a "LGTM."
In other cases, we rejected edit attempts... because it's not their engineering blog!
(The bigger the company the more sterile those edits can become, in general, btw.)
One thing that really bugs me about VCs and others projecting that AI will make many devs redundant because smaller teams can do more with less: it ignores the past.
Some of the most impactful / successful software was built by tiny teams in the 80s, 90s, 2000s. Like:
Microsoft’s first product in 1975: 2 devs
Quake in 1996: 9 devs
Google’s first search engine in 1998: 4 devs
We could go on.
Small teams with outstanding people doing great things happened before GenAI, and will happen after it as well (and without it as well!)
What happened in all these cases: the product got traction, and there was more stuff to do that needed more outstanding people. So they hired more standout folks.
The same will happen with GenAI: companies taking off thanks to AI tools will hire more devs who can help them get more stuff done *using the right tools*. Some of those tools will be GenAI - but some will not!