The 2021 Talent.io salary report is out. These reports work with the data they have, and it's clear that high-paying tech companies don't use "Europe's largest tech recruitment platform" at all, so the numbers are well off from reality.
A thread on why these reports are off:
1. Access to data. Looking at the Amsterdam data distribution, Adyen, Booking, Uber, etc. are all missing. They all pay €90K+ for seniors in *base salary* - and we'll get to the rest of the package. Uber and Booking pay €110K and above:
2. Total compensation vs salary. These reports focus on salary, but the highest-paying companies often pay a lot more than just salary. E.g. at Uber, I had years when my stock vesting for that year was above my €100K+ salary. My bonus target was €22K as a senior engineer.
3. This report confirms what I have been saying: there are ranges invisible to most recruitment companies and employees - the Tier 2, and especially the Tier 3 ranges:
4. So where do you get better data? Ask people you know. Go on Blind (the app). Check out levels.fyi. And I'm building techpays.eu, which already has over 500 Netherlands/Amsterdam data points.
5. My next newsletter issue will be about how to find your next opportunity as a software engineer/engineering manager, including a list of companies known (within inner circles) to pay towards the top of the market.
These reports are good at showcasing #1 (Tier 1) compensation. They don't tell you *anything* about Tier 2 and Tier 3. Those companies use in-house recruiters and don't recruit through these platforms (or don't share their data at least).
Clearly they put a lot of effort into writing the survey: but be very, very, very wary of basing compensation on it. You won't be competitive even in Tier 1 if you do. Even the Tier 1 market has moved up in the past few months.
"We just fired an engineer after ~15 days on the job who lacked basic skills on the job but aced the interview - clearly, using cheat tools.
He admitted to how he did it: he used iAsk, ChatGPT and Interview Coder throughout"
(I personally talked with this person and know them well)
This company hired full remote without issue for years: this is the first proper shocker they've had.
They are changing their process, of course. In-person interviews will likely be unavoidable, at least in part.
As a first change, they have started being a lot more vigilant during remote interviews, laying some "traps" that candidates using AI assistants will fall into.
Just by doing that, they estimate about 10% of candidates are very visibly using these tools (they simply stop the interview process with them).
I used Windsurf, but this would work just as well with Cursor (and maybe VS Code as well now). Under the hood, it's all the same!
Setting up took an hour to get working, thanks to my local npm + npx being out of date. Once I updated them, it worked fine.
The Windsurf MCP interface: I just set up the Postgres one. But again, behind the scenes it's "just" an npm package that you can also invoke from the command line! Which is the beauty of it.
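For reference, an MCP server entry in an editor's MCP config file looks roughly like this (a sketch: the exact config file location varies by editor, and the `@modelcontextprotocol/server-postgres` package name and connection string here are illustrative - check your editor's MCP docs):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/mydb"
      ]
    }
  }
}
```

Because the server is just an npm package, you can also run the same command directly in a terminal (`npx -y @modelcontextprotocol/server-postgres <connection-string>`) to sanity-check it outside the editor.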
I'm starting to understand why there are company eng blogs not worth reading.
When doing a deepdive on an interesting company in @Pragmatic_Eng, we do research, talk with engineers, then share the draft back for any minor corrections. Usually it's a "LGTM." But sometimes:
Sometimes the Comms or Brand team gets actively involved, mistakenly assumes they are the editors, and attempts to rewrite the whole thing the way they would usually publish it on, e.g., their own blog.
Every time, it's a disaster to see, but also amusing, because a good article becomes SO bad. Interesting details removed, branding elements added, etc.
(We never allow edits - and if they insist, we simply publish nothing, throwing out our research. This has not happened yet, but there might be a first time.)
Btw here are some of the deepdives we did. In most cases, it was a "LGTM"
In other cases, we rejected edit attempts... because it's not their engineering blog!
(The bigger the company, the more sterile those edits tend to become, btw.)
One thing that really bugs me about VCs and others claiming that AI will make many devs redundant because smaller teams can do more with less: they ignore the past.
Some of the most impactful / successful software was built by tiny teams in the 80s, 90s, 2000s. Like:
Microsoft’s first product in 1975: 2 devs
Quake in 1996: 9 devs
Google’s first search engine in 1998: 4 devs
We could go on.
Small teams with outstanding people doing great things happened before GenAI and will happen after it as well (and without it, too!)
What happened in all those cases was that the product got traction, and there was more to do that needed more outstanding people! So they hired more standout folks.
The same will happen with GenAI: companies taking off thanks to using AI tools will hire more devs who can help them get more stuff done *using the right tools*. Some of those tools will be GenAI - but some of it not!
A good reminder of why you can pick up GenAI - and probably should. Real story:
Small company, 5 devs. The last time they hired was 12 years ago. AI comes out: the company wants to add an AI feature. But they don't have the expertise. So they hire an AI agency.
The agency spends 3 months planning:
After 3 months, they present a very complex architecture to build: several services, multiple databases, SageMaker models, etc., using a language the company is not using (Python - this is a Java shop)
It will take 6-9 months to build
Operational costs would be higher for this one feature than all of the company's SaaS operational costs combined!
Lead dev who is close to retiring (and has been at the company for 25 years) thinks "this cannot be right, surely."
So he says "screw it." He reads up on GenAI, builds a few prototypes, and tells the company to drop the agency: they will build it themselves in ~3-4 months, much faster and cheaper.