Chip Huyen
Aug 22, 2019
To better understand technical hiring pipelines, I analyzed 15,897 interview reviews for 27 major tech companies on Glassdoor. I focused on interviews for software-engineering-related roles, at both junior and senior levels. These are some of the main findings. (1/n)
Each review consists of:

- result (no offer/accept offer/decline offer)
- difficulty (easy/medium/hard)
- experience (positive/neutral/negative)
- review (application/process/questions)

The largest SWE employers are Google, Amazon, Facebook, and Microsoft.
There's a strong correlation between onsite-to-offer rate and offer yield rate (% of candidates who accept their offers). The more selective a company is, the less likely a candidate is to accept its offer. Candidates who pass interviews at FAANG are likely to have other attractive offers.
To read the graph above: 18.83% of onsite candidates at Google get offers, and of those with offers, 70% accept. Due to the biases of online reviews, the actual numbers are much lower. The most selective companies are Yelp, Google, Snap, and Airbnb.
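As a back-of-the-envelope check (my own arithmetic, not from the original thread, and it assumes both rates are measured on the same candidate pool), the two rates compose into an overall onsite-to-hire rate:

```python
# Illustrative arithmetic using the Google numbers quoted above.
onsite_to_offer = 0.1883  # share of onsite candidates who receive an offer
offer_yield = 0.70        # share of offer holders who accept

onsite_to_hire = onsite_to_offer * offer_yield
print(f"{onsite_to_hire:.1%}")  # ~13.2% of onsite candidates end up accepting an offer
```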
Referrals matter, a lot. For junior roles, about 10-20% of candidates who get to onsites are referred, with Uber leading the chart at almost 30%. For senior roles, those numbers are higher: Salesforce, Uber, and Cisco all have ~30% of their senior onsite candidates referred.
For junior roles, the biggest source of onsite candidates is campus recruiting. Microsoft & Oracle have >50% of their interviewees recruited through campus events. Google, Facebook, and Airbnb rely less on campus recruiting, but it still accounts for ~20-30% of their onsites.
This means big tech companies concentrate their recruiting efforts on a handful of popular engineering schools. Students recruited from those schools then refer their classmates, who in turn refer more classmates, turning those major tech companies into a Tech Ivy alumni mixer.
Everyone complains that the interview process is broken. It’s not entirely true, at least from the perspective of the candidates who get interviews. 60% of candidates report a positive interview experience.
Candidates with offers are more likely to have a positive experience (correlation 0.75). Companies that give the best candidate experiences are Salesforce, Intel, and Adobe.
The more negative the experience, the less likely a candidate is to accept the offer. A candidate who receives an offer after a positive experience accepts it with probability 87%; after a negative interview experience, the yield rate drops to 1/3.
Senior candidates are harder to please than junior candidates. This might explain the abysmal Netflix interview experience: while all other companies keep their share of senior interviews under one third, Netflix hires exclusively for senior positions.
Companies with the hardest interviews (as perceived by candidates) are Google, Airbnb, and Amazon.
The full write-up with details on data, more results, and biases can be found here: huyenchip.com/2019/08/21/gla…

More from @chipro

May 6
Really enjoyed LinkedIn's report on what worked and what didn't when deploying LLM applications. 4 takeaways.

1. Structured outputs
They chose YAML over JSON as the output format because YAML uses fewer tokens. Initially, only 90% of the outputs were correctly formatted YAML. They used re-prompting (asking the model to fix its YAML responses), which increased the number of API calls significantly.

They then analyzed the common formatting errors, added hints about them to the original prompt, and wrote an error-fixing script. This reduced their errors to 0.01%.
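A minimal sketch of that pattern (not LinkedIn's code; `call_llm` and the specific fix-up are placeholders I made up): parse the model's YAML, try a cheap deterministic fix first, and only re-prompt as a last resort.

```python
import yaml  # PyYAML


def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API is in use."""
    raise NotImplementedError


def strip_code_fences(text: str) -> str:
    # One common formatting error: the model wraps its YAML in ``` fences.
    lines = [l for l in text.splitlines() if not l.strip().startswith("```")]
    return "\n".join(lines)


def parse_yaml_output(raw: str, original_prompt: str, max_retries: int = 1):
    """Return the parsed YAML object, fixing or re-prompting on failure."""
    candidate = raw
    for attempt in range(max_retries + 1):
        try:
            return yaml.safe_load(strip_code_fences(candidate))
        except yaml.YAMLError as err:
            if attempt == max_retries:
                raise
            # Last resort: ask the model to repair its own output (an extra API call).
            candidate = call_llm(
                f"{original_prompt}\n\nYour previous YAML was invalid ({err}). "
                "Return only corrected YAML."
            )
```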
2. Sacrificing throughput for latency
Originally, they focused on TTFT (Time To First Token), but realized that TBT (Time Between Tokens) hurt them more, especially with Chain-of-Thought queries where users don’t see the intermediate outputs.

They found that TTFT and TBT inversely correlate with TPS (Tokens per Second). To achieve good TTFT and TBT, they had to sacrifice TPS.
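A rough illustration of why TBT dominates for long, hidden Chain-of-Thought outputs (my own back-of-the-envelope model, with made-up numbers, not LinkedIn's): the time until the user sees a complete answer is roughly TTFT plus one TBT per remaining token.

```python
def time_to_full_response(ttft_s: float, tbt_s: float, num_tokens: int) -> float:
    """Rough end-to-end latency: first token, then one TBT per additional token."""
    return ttft_s + tbt_s * max(num_tokens - 1, 0)


# Hypothetical numbers: with 800 generated tokens (mostly hidden chain-of-thought),
# shaving 10 ms off TBT saves ~8 s, far more than shaving 10 ms off TTFT.
print(time_to_full_response(ttft_s=0.5, tbt_s=0.05, num_tokens=800))  # ~40.5 s
print(time_to_full_response(ttft_s=0.5, tbt_s=0.04, num_tokens=800))  # ~32.5 s
```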
3. Automatic evaluation is hard
One core challenge of evaluation is coming up with a guideline for what a good response is. For example, for skill-fit assessment, the response “You’re not a good fit” is correct, but not helpful.

Originally, evaluation was ad hoc: everyone could chime in. That didn’t work. They then had linguists build tooling and processes to standardize annotation, evaluating up to 500 conversations a day, and these manual annotations guide their iteration.

Their next goal is automatic evaluation, but it’s not easy.
Oct 7, 2020
Some asked me about concept drift, so here you go.

A predictive ML model learns theta to output P(Y|X; theta).

Data drift is when P(X) changes: different data distributions, different feature space.

Ex: service launched in a new country, expected features becoming NaNs.

1/5
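A minimal sketch of how you might monitor for this kind of shift in P(X) (my own illustration, assuming scipy is available, not part of the original thread): compare a feature's distribution in recent production traffic against a reference window with a two-sample test.

```python
import numpy as np
from scipy.stats import ks_2samp


def feature_drifted(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift in P(X) for one numeric feature via a two-sample KS test."""
    stat, p_value = ks_2samp(reference, recent)
    return p_value < alpha


rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=5_000)    # training-time distribution
prod = rng.normal(0.5, 1.0, size=5_000)   # shifted production distribution
print(feature_drifted(ref, prod))         # True: P(X) has changed
```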
Label schema change is when Y changes: new classes, outdated classes, finer-grained classes. Especially common with high-cardinality tasks.

Ex: there’s a new disease to categorize.

2/5
Model drift is when P(Y|X) changes: same inputs expecting different outputs.

Ex: when users searched for Wuhan pre-covid, they expected very different things from what they do now.

Model drift can be cyclic, e.g. ride-share demands weekday vs. weekend.

3/5
Sep 29, 2020
When talking to people who haven’t deployed ML models, I keep hearing a lot of misperceptions about ML models in production. Here are a few of them.

(1/6)
1. Deploying ML models is hard

Deploying a model for friends to play with is easy. Export trained model, create an endpoint, build a simple app. 30 mins.

Deploying it reliably is hard. Serving 1000s of requests with ms latency is hard. Keeping it up all the time is hard.

(2/6)
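For the "easy" version, here is a minimal sketch of that 30-minute deployment (illustrative only; the model file and feature shape are made-up assumptions), serving a pickled scikit-learn model behind a FastAPI endpoint:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical exported scikit-learn model


class Features(BaseModel):
    values: list[float]  # one flat feature vector, kept simple on purpose


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn main:app  (assuming this file is saved as main.py)
```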
2. You only have a few ML models in production

Booking and eBay have 100s of models in prod. Google has 10,000s. An app has multiple features, and each might have one or multiple models for different data slices.

You can also serve combos of several models’ outputs, like an ensemble (see the sketch below).

(3/6)
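A tiny sketch of that last point (illustrative, not any particular company's setup): an ensemble that averages the outputs of several models behind a single predict interface.

```python
import numpy as np


class AveragingEnsemble:
    """Serve the averaged prediction of several models behind one interface."""

    def __init__(self, models):
        self.models = models  # each model must expose .predict(features)

    def predict(self, features):
        predictions = [m.predict(features) for m in self.models]
        return np.mean(predictions, axis=0)
```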
Mar 7, 2020
I've been talking to a lot of people looking to join or having joined startups, and I'm flabbergasted by how often people think joining a startup is a get-rich-quick scheme. Here's the math on why it doesn't work and what to look for when joining startups. (1/n)
Equity: anywhere from 0.001% to 10%. A friend recently joined a 15-person seed startup that offered 4% over 4 years + a lot of $. He'd be the ML engineer. They need him to raise their Series A. It looks good on paper, but do you want a company where you're clearly the best at what you want to learn? (2/n)
For startups with product-market fit, star founders, and top VCs (think Asana, Zoom), if you're the ~15th engineer, expect equity << 0.1% over 4 years. After subsequent rounds, it's diluted to < 0.05%. If the startup is sold for $1B, which is rare, you'd make < $0.5M over 4 years. (3/n)
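Spelling out that arithmetic (a back-of-the-envelope illustration using the numbers above; it ignores taxes, strike price, and liquidation preferences):

```python
equity_after_dilution = 0.0005   # 0.05% after later funding rounds
exit_value = 1_000_000_000       # a rare $1B sale
vesting_years = 4

payout = equity_after_dilution * exit_value
print(payout)                    # $500,000 total
print(payout / vesting_years)    # $125,000 per vesting year, before taxes
```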
Oct 28, 2019
To learn how to design machine learning systems, I find it really helpful to read case studies to see how great teams deal with different deployment requirements and constraints. Here are some of my favorite case studies.
Topics covered: lifetime value, ML project workflow, feature engineering, model selection, prototyping, moving prototypes to production. It's complete with lessons learned and a look ahead!

medium.com/airbnb-enginee…
Netflix streams to over 117M members worldwide, half of them living outside the US. The company uses machine learning to predict network quality, detect device anomalies, and handle predictive caching.
medium.com/netflix-techbl…
Aug 3, 2019
This thread is a collection of 10 free online courses on machine learning that I find the most helpful. They should be taken in order.
1. Probability and Statistics by Stanford Online
This self-paced course covers basic concepts in probability and statistics, spanning four fundamental aspects of machine learning: exploratory data analysis, producing data, probability, and inference.
online.stanford.edu/courses/gse-yp…
2. Linear Algebra by MIT

Hands down the best linear algebra course I’ve seen, taught by the legendary professor Gilbert Strang.
ocw.mit.edu/courses/mathem…
