The 2021 Talent.io salary report is out. These reports work with the data they have, and it's clear that high-paying tech companies don't use "Europe's largest tech recruitment platform" at all, so the numbers understate what the top of the market actually pays.
A thread on why these reports are off:
1. Access to data. Looking at the Amsterdam data distribution, Adyen, Booking, Uber etc. all don't have their data here. They all pay €90K+ for seniors in *base salary* alone - we'll get to the rest. Uber and Booking pay €110K and above:
2. Total compensation vs salary. These reports focus on salary, but the highest paying companies often pay a lot more than just salary. E.g. at Uber I had years when my stock vesting that year was above my €100K+ salary. My bonus target was €22K as a senior engineer.
3. This report confirms what I have been saying: there are compensation ranges invisible to most recruitment companies and most employees, namely the Tier 2 ranges, and especially the Tier 3 ranges:
4. So where do you get better data? You ask around people you know. Go on Blind (the app). Check out levels.fyi. And I'm building techpays.eu that already has over 500 Netherlands/Amsterdam data points.
5. My next newsletter issue will be about how to find your next opportunity as a software engineer/engineering manager, including a list of (within inner circles) known companies that pay towards the top of the market.
These reports are good at showcasing #1 (Tier 1) compensation. They don't tell you *anything* about Tier 2 and Tier 3. Those companies use in-house recruiters and don't recruit through these platforms (or don't share their data at least).
Clearly a lot of effort went into the survey: but be very, very wary of basing compensation on it. You won't be competitive even in Tier 1 if you do. Even the Tier 1 market has moved up over the past months.
The brilliance: copyright does not protect derived works. Rewriting TypeScript code in Python means copyright no longer applies.
The scary thing: it can be done in a trivial amount of time, with AI agents. This one was done with Codex.
This can be done not just for this specific codebase, but any codebase. So what happens with copyright? Will it evolve with AI, or be stuck pre-AI?
You can imagine Anthropic being in a pickle:
1. Do they just leave this alone and look the other way, ignoring that it's not exactly fair to transform their code and leave it up there?
2. Do they claim copyright applies? But this could hurt their own business in much bigger ways: e.g. imagine regulation coming into play that bans this. Claude Code and other tools would have to refuse this kind of generation, lawsuits against AI labs could spike, etc.
So my bet is that #1 happens. It's not in the interest of an AI lab to expand copyright protections to derived work created by an LLM...
Eh. I just don’t buy this because I actually understand specific examples all too well:
1. It paints a picture of DoorDash being disrupted by vibe-coded alternatives. Dude, DoorDash's / Uber's moat is NOT software!! It's real-world physical logistics. AI cannot disrupt DD…
2. The example of AI agents disrupting travel agents because AI agents can find cheaper travel deals than what travel agents offer. Also BS!!
I worked at Skyscanner (a massive airline + hotel + car rental aggregator). Travel agents already have the moat of offering the cheapest tickets / packages!! Thanks to their deep integrations and special deals.
In a world where AI agents find the cheapest deals, travel agents win and airlines get slightly less direct business!! AI agents go to Skyscanner, find the cheapest deal from a travel agent, and buy it!!
Then if you made a mistake you have no option to change it lol
So the examples from the two industries I know pretty well, having worked there / been involved in them (travel agents + ridesharing/food delivery), read well but are just BS at the fundamental level… The other parts, which I don't know well, also read well… but what are the chances they're BS at their core too?
Casey’s interaction with the “whistleblower” where he gradually realizes all “evidence” is AI-generated, designed to fool even journalists… then he confronts the faker. Worth the read
We’re entering a time when it’s harder to trust anything online: and surely more people will try to fool journalists with AI-generated “evidence.” In some cases, they will succeed, especially at publications chasing headlines and not doing proper investigation / reporting!
For the last ~20 years, I did most of my coding inside an IDE - the last ~15 with increasingly good autocomplete.
Which is why it's so weird that I've barely opened an IDE in the last two weeks, even as I pushed lots of code. I use the CLI, the web, and my phone (!!) to prompt code.
When I was just starting out as a developer, I remember being so, so, so full of ideas that I was coding in my head, wishing I could program while commuting / on the bus - with, e.g., a phone. But it was impossible, ofc.
Now it’s possible!! A massive change
I feel we’re in the middle of the biggest dev tooling change happening across the industry - and it’s happening over a few short months. And rapidly spreading everywhere.
Amusing: Google does not allow its devs to use its newly launched IDE, Antigravity, for development.
They can only use an internal version called Jetski: also built by the Antigravity team, with Google-specific features (e.g. monorepo support, docs search, etc.)
Using Antigravity is specifically disallowed, and devs cannot sign up for it with a @google.com work address.
The reason for this "ban" is, of course, Google's "tech island" tech stack: Antigravity is simply not compatible with its monorepo, and not integrated with Google's custom tooling.
Jetski has all of this - but it's a different product. A bit like Borg vs GCP (most of Google doesn't use GCP!)
Covered a lot more on Google’s unique culture (and how they have probably the most custom tech stack across Big Tech) in this deepdive: newsletter.pragmaticengineer.com/p/google