How do you bring up the topic of promotions with your manager?
My 7 pieces of advice (thread) 1. Understand how promotions work at your company. 2. Talk with your manager: get them on your side. If you don't bring it up, don't expect it to happen.
2. (Cont'd) It's in all managers' interest to have people promoted who are already performing at the next level. Makes the manager look good! You're on the same team. 3. Be realistic about what it takes to be promoted above the senior levels. These are usually far more difficult.
4. Set goals to "close the gap" between you and the next level. Act as you would if you already had the title. Keep a work log. 5. Find a mentor within the company. Ask for regular feedback.
6. Don't "blindly chase" the promotion, alienating others. Stay grounded, but put in the work. 7. Don't have promotion be your only goal. Aim for professional growth, over chasing titles.
Writing is one of the best things you can invest in, as a software engineer. The more experienced people become, the more they tend to realize this.
Here's a thread on the 6 best writing resources I've found - both to "convince" you to write more and to help you "level up":
1. My extended thoughts on why writing is an undervalued software engineering skill, and the tools (Grammarly, Hemingway) and books (Writing Well, Sense of Style) that helped me improve my writing.
Writing becomes *so* important at larger companies.
A list of tech companies and their experimentation platforms. If you're an engineer and use (or want to use) experimentation / AB tests/feature flags, this is worth a read. A thread.
2. Doordash. Reading most of this was "deja vu: this is *so* similar to what we are doing at Uber!" It's a good writeup: doordash.engineering/2020/09/09/exp…
Considering how fast Doordash went from <10% market share to market leader in the US, they definitely know how to experiment well.
One of the challenges at Uber was building monitoring and alerting that worked reliably.
The problem: Uber was (and is) city-based, and global alerting would not catch regional (city- or country-level) issues.
Two stories on why this is difficult:
1. A PayPal employee on a Japan business trip alerted us in 2016 that PayPal was not working there. He was right: it hadn’t been working for 2+ months, across the country. How did we miss it?
There were only 20 PayPal trip attempts/day in Japan, and Japan was one of 60+ countries.
At global scale, this accounted for a tiny fraction of traffic.
So we did what makes sense: added country-level alerting.
First, this became a data cardinality problem: 1,000 cities x 15 payment methods... not trivial to track them all. We settled on countries & top cities.
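The cardinality trade-off above can be made concrete with some quick arithmetic. The city and payment-method counts come from the thread; the "top cities per country" figure is an illustrative assumption, not a real Uber number:

```python
# Back-of-the-envelope metric cardinality, using the thread's numbers.
CITIES = 1_000
COUNTRIES = 60            # "one of 60+ countries"
TOP_CITIES_PER_COUNTRY = 3  # assumption: only a few big cities tracked per country
PAYMENT_METHODS = 15

# Naive scheme: one time series per (city, payment method) pair.
naive_series = CITIES * PAYMENT_METHODS

# Settled scheme: per-country series, plus a handful of top cities each.
settled_series = (COUNTRIES + COUNTRIES * TOP_CITIES_PER_COUNTRY) * PAYMENT_METHODS

print(naive_series)    # 15000 series per metric
print(settled_series)  # 3600 series per metric
```

Even with generous assumptions, the country-plus-top-cities scheme cuts the series count by roughly 4x per metric, and every additional dimension (app version, OS, etc.) multiplies both numbers further.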
“Why does {company/app} have more than X engineers?” where X is typically 20/50/100 or more.
Here’s how and why this makes sense for *the company* from a business-point of view. A thread.
1. What you see as “one app” is, in fact, a lot of small parts that all contribute to the company making money.
Take the Twitter app. Almost all functionality (timeline, lists, profile, moments, etc.) is there to drive engagement. Then there’s ads and ad tools (I’m simplifying, of course).
2. A company never asks “how many engineers do we need overall?” They look at business cases.
“If we hire 4 engineers, we can build Lists. We expect to reduce churn by 4% annually which results in $15M/yr revenue. The cost of this team is A LOT less than this.”
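The arithmetic behind that hypothetical business case is worth spelling out. The $15M/yr and 4-engineer figures are from the quote above; the per-engineer cost is my own illustrative assumption:

```python
# Hypothetical business case from the thread. All numbers are
# illustrative; the fully loaded cost is an assumption, not a real figure.
engineers = 4
fully_loaded_cost_per_engineer = 300_000   # assumption: salary + overhead, $/yr
revenue_from_feature = 15_000_000          # $15M/yr from a 4% churn reduction

team_cost = engineers * fully_loaded_cost_per_engineer
roi_multiple = revenue_from_feature / team_cost

print(f"team cost: ${team_cost:,}/yr")  # team cost: $1,200,000/yr
print(f"ROI: {roi_multiple:.1f}x")      # ROI: 12.5x
```

Under these assumptions the team pays for itself more than twelve times over, which is why the question is framed per business case, never as "how many engineers overall?"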
Thread on 7 things I now have more appreciation for, having experienced them first hand.
1. Marketing. "Build it and they will come" is not how products (or books) are bought. You need a marketing plan.
I put one together late, and delayed the launch to get some of the marketing ideas going. It was worth it, in the end.
2. Media exposure. My own "marketing network" was far smaller compared to exposure on a large publication (like HN). You can't really plan for or rely on this as marketing, but these are bigger waves than one can expect. @philip_kiely has a similar story.
I'm going to attempt to summarize the AWS outage on 25 Nov that impacted a good part of the internet in 6 drawings (from the 2,000+ word detailed postmortem by @awscloud at aws.amazon.com/message/11201/). A thread.
1. Meet AWS Kinesis, the realtime processing backbone of AWS:
2. Incoming requests hit the FE fleet. Each FE machine maintains a shard map to the BE clusters. Machines in those clusters do the realtime processing.
Classic setup. Except for the scale, which we can assume is massive. That "frontend fleet" is likely large. The BE fleet? Gigantic.
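The FE-to-BE routing described above can be sketched as a shard-map lookup. The postmortem doesn't spell out the exact scheme, so the hash-based routing, cluster names, and `route` function below are all my assumptions, just to show the shape of the design:

```python
# Hypothetical sketch of FE -> BE shard-map routing. The hashing scheme
# and cluster names are illustrative assumptions, not AWS's actual design.
import hashlib

BACKEND_CLUSTERS = ["be-cluster-0", "be-cluster-1", "be-cluster-2"]

def route(stream_key: str) -> str:
    """Pick the BE cluster responsible for this stream's shard,
    the way an FE machine would consult its shard map."""
    digest = int(hashlib.sha256(stream_key.encode()).hexdigest(), 16)
    return BACKEND_CLUSTERS[digest % len(BACKEND_CLUSTERS)]

# The same key must always land on the same cluster, which is why every
# FE machine needs a consistent, up-to-date copy of the shard map.
assert route("orders-stream") == route("orders-stream")
```

The key property is determinism: any FE machine, given the same shard map, routes a given stream to the same backend cluster.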
3. New FE machines were added to the FE fleet as per usual. A few hours later, things start behaving oddly. The team investigates and, another few hours later, pins down the root cause. It's to do with how the FE fleet works.
Each FE machine keeps threads open to the other machines in the fleet, to stay in sync:
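Per the AWS postmortem, each frontend server maintained an OS thread per peer in the fleet, so the thread count per machine grows linearly with fleet size. The exact fleet sizes and thread limit below are illustrative assumptions, but they show why simply adding capacity can push every machine past an OS limit at once:

```python
# Why thread-per-peer fleet sync breaks at scale. The OS limit and
# fleet sizes are illustrative assumptions, not AWS's real numbers.
OS_THREAD_LIMIT = 10_000  # assumption: per-process thread cap from OS config

def sync_threads_per_machine(fleet_size: int) -> int:
    # One sync thread to each *other* machine in the FE fleet.
    return fleet_size - 1

# Below the limit, all is well; grow the fleet past it and every
# machine exceeds the cap at roughly the same time.
assert sync_threads_per_machine(9_000) < OS_THREAD_LIMIT
assert sync_threads_per_machine(10_500) > OS_THREAD_LIMIT
```

This is the classic O(N) per-node cost of all-to-all membership: the bigger the fleet, the more each individual machine pays just to stay in sync.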