A list of tech companies and their experimentation platforms. If you're an engineer who uses (or wants to use) experimentation / A/B tests / feature flags, this is worth a read. A thread.
2. Doordash. Reading most of this was "deja vu: this is *so* similar to what we are doing at Uber!" It's a good writeup: doordash.engineering/2020/09/09/exp…
Considering how fast Doordash went from <10% market share to market leader in the US, they definitely know how to experiment well.
4. LinkedIn shipped an improved experimentation engine a year ago (engineering.linkedin.com/blog/2020/maki…). What's great about this article is they share load numbers: 800K QPS (!!), 35K concurrent experiments (!).
5. Zalando. Listing this as it's "just out" a few days ago (engineering.zalando.com/posts/2021/01/…). Pretty cool overview of the adoption, and a story you can relate to even if you're not working at a massive company.
6. Airbnb (medium.com/airbnb-enginee…). This one is from 3 years ago; it gives a high-level overview and a neat UI visualization. Precomputed dimensional cuts seem like a big win (and an idea to copy).
7. Netflix. They were one of the first to share how they did experimentation, and this article is a classic (and a highly read one): netflixtechblog.com/its-all-a-bout…
I can't recall seeing a follow-up since; I wonder if there have been any major changes.
8. Grab (engineering.grab.com/feature-toggle…). Nice to see an article that shows some actual code and configuration. If you build your own, this is good inspiration. Unfortunately, they don't specify what platforms and use cases this SDK is for: backend? Mobile? Web? All of them?
9. Intuit. I came across this one accidentally: Intuit open sourced their A/B testing platform (github.com/intuit/wasabi) and wrote this article about it (intuit.com/blog/technolog…). Used in 120 apps(!!) across web, mobile and desktop.
Anyone used this project?
10. Pinterest's platform. Another early writeup, from 2016 (medium.com/pinterest-engi…). I love how experiment changes need to go through code reviews!! Not the case in most places I've seen.
11. Of course, let's talk about the thing you're reading this thread on: Twitter. This article is from 2015 (blog.twitter.com/engineering/en…) so I assume the infra has probably changed since (is Manhattan still the DB powering Twitter?). The article is less of an overview, though.
Those are the most interesting ones I know of. Know of other notable mentions, or ones I've missed? Please reply!
All of this & more will be part of my book, Building Mobile Apps at Scale, and you can grab the PDF version for free (special shoutout to @bitrise). mobileatscale.com
One of the challenges at Uber was building monitoring and alerting that worked reliably.
The problem: Uber was (is) city-based, and global alerting would not catch regional (city- or country-level) issues.
Two stories on why this is difficult:
1. A PayPal employee on a Japan business trip alerted us in 2016 that PayPal was not working there. He was right: it hadn't been working for 2+ months, across the country. How did we miss it?
There were only ~20 PayPal trip attempts/day in Japan, and Japan was one of 60+ countries.
At global scale, this accounted for a tiny fraction of traffic.
So we did what made sense: we added country-level alerting.
This quickly became a data cardinality problem. 1,000 cities x 15 payment methods... not trivial to track all of that. We settled on countries & top cities.
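As a rough sketch of the cardinality math (the 1,000 cities and 15 payment methods are from the story; the "top cities" count is an assumption for illustration):

```python
# Back-of-the-envelope cardinality math for the alerting story above.
cities = 1_000
payment_methods = 15
full_granularity = cities * payment_methods
print(full_granularity)   # one alert series per city x payment method

# The compromise: alert at the country level, plus the top cities only.
countries = 60
top_cities = 50           # assumed count of "top cities"
compromise = (countries + top_cities) * payment_methods
print(compromise)         # an order of magnitude fewer series to tune
```

Going from 15,000 to under 2,000 series is what makes per-segment thresholds tractable to maintain, at the cost of missing city-level issues in smaller markets.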
“Why does {company/app} have more than X engineers?” where X is typically 20/50/100 or more.
Here’s how and why this makes sense for *the company* from a business-point of view. A thread.
1. What you see as “one app” is in fact a lot of small parts that all contribute to the company making money.
Take the Twitter app. Almost all functionality (timeline, lists, profile, moments etc) is there to drive engagement. Then there are ads and ad tools (I’m simplifying ofc)
2. A company never asks “how many engineers do we need overall?” They look at business cases.
“If we hire 4 engineers, we can build Lists. We expect to reduce churn by 4% annually which results in $15M/yr revenue. The cost of this team is A LOT less than this.”
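Spelled out, that business case is just arithmetic. A sketch, where the fully loaded cost per engineer is an assumed figure (the $15M/yr revenue number comes from the quote):

```python
# The quoted business case as back-of-the-envelope arithmetic.
engineers = 4
cost_per_engineer = 300_000      # assumed: fully loaded annual cost, USD
team_cost = engineers * cost_per_engineer

expected_revenue = 15_000_000    # $15M/yr from 4% less churn (the quote)
multiple = expected_revenue / team_cost
print(f"${team_cost:,}/yr team cost vs ${expected_revenue:,}/yr revenue "
      f"= {multiple:.1f}x return")
```

Even if the revenue estimate is off by several multiples, the case still clears the bar, which is why headcount decisions are made per business case rather than as a global "how many engineers?" question.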
Thread on 7 things I now have more appreciation for, having experienced them first hand.
1. Marketing. "Build it and they will come" is not how products (or books) are bought. You need a marketing plan.
I put one together late, and delayed the launch to get some of the marketing ideas going. It was worth it in the end.
2. Media exposure. My own "marketing network" was far smaller than the exposure a large publication (like HN) brings. You can't really plan for or rely on this as marketing, but these waves are bigger than anything one can generate alone. @philip_kiely has a similar story.
I'm going to attempt to summarize the AWS outage on 25 Nov that impacted a good part of the internet in 6 drawings (from the 2,000+ word detailed postmortem by @awscloud at aws.amazon.com/message/11201/). A thread.
1. Meet AWS Kinesis, the realtime processing backbone of AWS:
2. Incoming requests hit the FE fleet. Each FE machine maintains a shardmap to BE clusters. Machines in these clusters do the realtime processing.
Classic setup. Except for the scale, which we can assume is massive. That "frontend fleet" is likely large. The BE fleet? Gigantic.
3. New FE machines were added to the FE fleet as per usual. A few hours later, things started behaving oddly. The team investigated, and another few hours later found the root cause. It's to do with how the FE fleet works.
Each FE machine keeps a thread open to every other FE machine, to sync fleet state:
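A minimal sketch of why that thread-per-peer design stops scaling as the fleet grows (the fleet sizes and OS thread limit here are illustrative numbers, not AWS's actual configuration):

```python
# Sketch of the thread-per-peer pattern described in the postmortem.
# Fleet sizes and the thread limit below are assumed, for illustration.

def sync_threads_per_machine(fleet_size: int) -> int:
    """Each FE machine keeps one sync thread open to every other FE machine."""
    return fleet_size - 1

def sync_threads_fleet_wide(fleet_size: int) -> int:
    """Across the whole fleet, that adds up to O(N^2) threads."""
    return fleet_size * (fleet_size - 1)

os_thread_limit = 10_000  # assumed per-process OS thread limit

for n in (100, 1_000, 10_001, 12_000):
    per_machine = sync_threads_per_machine(n)
    status = "OVER the limit!" if per_machine > os_thread_limit else "ok"
    print(f"fleet={n:>6}: {per_machine:>6} sync threads/machine ({status})")
```

The nasty property: adding machines silently pushes *every existing machine* closer to its per-process thread limit, so a routine capacity addition can tip the whole fleet over hours later.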
I've been helping a bootcamp-grad frontend dev friend prepare for interviews - they worked as a jr dev for 2 years after the bootcamp, but had been out of a job for the past 6 months.
They just got an offer as a JS engineer!
Thread on 10 prep resources & job market observations.
1. Interviewing for frontend positions today is HARD. IMO, of backend, mobile, and web, the web has the most in-flux interviewing approaches.
You get a huge variety of interviews. Some places dive into React hooks. Others ask vanilla JS. Others algorithms / DS.
The book @intensivedata has got to be the most information-packed one I've read. It summarizes all major DB storage techniques in just 35 pages. Thread.
1. "Plain old" key-value store in a text file
2. Indexing a key-value store (e.g. a CSV) with hash indexes (1/6)
3. Segmenting files as they grow via compaction
4. SSTables: sorted string tables (segments of key-value pairs, sorted by key)
5. LSM-trees (Log-Structured Merge-Trees)
6. B-trees: the standard storage structure in many relational and non-relational databases (2/6)
6.1 B-tree reliability & optimizations (write-ahead logs, latches, copy-on-write)
6.2 B-trees vs. LSM-trees
7. Other indexing approaches: clustered indexes, covering indexes, fuzzy indexing, in-memory DBs (3/6)
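To make the first two techniques concrete, here's a minimal sketch of an append-only log with an in-memory hash index, in the spirit of the book's hash-index example (the class and file names are made up; values containing commas are not handled):

```python
# Minimal sketch of techniques 1-2: an append-only log file plus an
# in-memory hash index mapping each key to the byte offset of its
# latest record. Illustrative only; not the book's actual code.
import os
import tempfile

class HashIndexedLog:
    def __init__(self, path: str):
        self.path = path
        self.index: dict[str, int] = {}  # key -> offset of latest record
        self.offset = 0                  # next write position in the log

    def set(self, key: str, value: str) -> None:
        record = f"{key},{value}\n".encode()
        with open(self.path, "ab") as f:
            f.write(record)          # always append; never overwrite in place
        self.index[key] = self.offset
        self.offset += len(record)

    def get(self, key: str):
        offset = self.index.get(key)
        if offset is None:
            return None
        with open(self.path, "rb") as f:
            f.seek(offset)           # jump straight to the latest record
            return f.readline().decode().rstrip("\n").split(",", 1)[1]

db = HashIndexedLog(os.path.join(tempfile.mkdtemp(), "db.log"))
db.set("user:1", "Alice")
db.set("user:1", "Bob")   # stale record stays on disk until compaction (3.)
print(db.get("user:1"))   # Bob
```

Overwrites just append a new record, which is why technique 3 (segmenting + compaction) exists: the old "Alice" record is dead weight on disk until a compaction pass discards it.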