1/ The Fairness for High-Skilled Immigrants Act of 2019, or #HR1044/#S386, which would've removed per-country caps on employment-based green cards in the US, chiefly benefiting Indian and Chinese nationals, cutting the projected wait time for Indians from ~150yrs to ~10...
2/ ... was blocked in the Senate by Sen. David Perdue after bipartisan support in the House. If you were Indian and moved to the US for an undergraduate degree in 2001, you'd be 36, have spent half your life in the country, and still not have a green card.
3/ You might be married with kids, but if you lose your job, you might have to leave your family after paying for a college degree and 14 years' worth of usually fairly high taxes. Isn't that absurd?
4/ Despite being Indian and a beneficiary of this bill myself, I see problems with it. One, most Indians in the backlog are not high-skilled tech workers but cheap outsourced labour from IT consultancies like Wipro and Infosys.
5/ Two, without a smoother cap-removal transition plan, this would essentially flood the green card quota with Indians for the next ~10yrs, crowding out competent candidates of other nationalities.
6/ If those two issues are fully addressed, I think this bill will be unanimously favored, @sendavidperdue will let it pass, and hopefully Trump will sign it!
Microsoft just leaked their official compensation bands for engineers. We often forget that you can be a stable, high-performing engineer with great work-life balance, be a BigTech lifer, and comfortably retire with a net worth of ~$15M!
The best open-source AI model just dropped a detailed report on how it was trained, a rare resource for students given no frontier lab is publishing!
Kimi K2's estimated total training cost is ~$20-30M, roughly in line with its API pricing: $0.60 per 1M input tokens and $2.50 per 1M output tokens.
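To make those rates concrete, a quick per-request cost calculator at the quoted pricing (the example token counts are made up):

```python
# Kimi K2 list pricing from the thread: $0.60 per 1M input tokens, $2.50 per 1M output tokens
PRICE_IN_PER_M, PRICE_OUT_PER_M = 0.60, 2.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at per-million-token rates."""
    return (input_tokens / 1e6) * PRICE_IN_PER_M + (output_tokens / 1e6) * PRICE_OUT_PER_M

# e.g. a 20k-token prompt with a 2k-token answer
print(f"${request_cost(20_000, 2_000):.4f}")  # $0.0170
```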
10 highlights:
1. Generating tokens by rewriting high-quality tokens with LLMs in pre-training
2. Mining 3,000+ MCPs and using LLM-generated personas to improve agentic tool calling
3. 10,000 parallel Kubernetes sandboxes to solve GitHub issues
4. New scaling laws for sparsity in MoE models
5. RL with verifiable rewards (RLVR) for math, coding, and safety, with a self-critique model and a long-reasoning penalty that encourages direct, decisive answers
6. Training recipe of 4k sequences, then 32k, then 128k with YaRN
7. High temperature during initial RL training to promote exploration
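The high-temperature trick in highlight 7 is a standard knob. A hedged illustration (not K2's actual sampler) of why raising temperature promotes exploration: it flattens the softmax, shifting probability mass away from the top logit toward alternatives.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Softmax sampling: divide logits by temperature, then sample.
    Higher temperature flattens the distribution, making
    low-probability actions more likely (more exploration)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = random.choices(range(len(logits)), weights=probs)[0]
    return idx, probs

logits = [2.0, 1.0, 0.1]
_, cool = sample_with_temperature(logits, 0.5)  # peaked: ~86% on the top logit
_, hot = sample_with_temperature(logits, 2.0)   # flatter: ~50% on the top logit
```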
Wanted to use Gemini 2.5 Pro too, but on AI Studio it did not search the web. I’ve kicked off a Deep Research run and will report back under this thread.
Prompt:
“Check on the odds on Polymarket and tell me the most mispriced assets I should bet on from first principles reasoning. You have $1000.
Please do deep research and present precise odds on each bet. Use advanced math for trading. Draw research from authoritative sources like research and unbiased pundits. Size my bets properly and use everything you know about portfolio theory. Calculate your implied odds from first principles and make sure you get an exact number.
Your goal is to maximize expected value. Make minimum 5 bets. Write it in a table.”
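The "size my bets properly" and "maximize expected value" instructions in the prompt usually boil down to the Kelly criterion. A minimal sketch for one binary Polymarket contract, with the probabilities made up purely for illustration:

```python
def kelly_fraction(p_true: float, market_price: float) -> float:
    """Kelly-optimal fraction of bankroll for a binary market share
    costing `market_price` dollars that pays $1 if the event happens.
    Net odds b = payout/stake - 1 = (1 - price) / price."""
    b = (1 - market_price) / market_price
    # f* = (b*p - q) / b, floored at 0 (never bet a negative edge)
    return max(0.0, (b * p_true - (1 - p_true)) / b)

bankroll = 1000
# Hypothetical mispricing: market implies 30%, you believe the true odds are 40%
f = kelly_fraction(0.40, 0.30)
print(f"bet ${bankroll * f:.2f}")  # bet $142.86
```

In practice bettors use fractional Kelly (e.g. half the computed stake) because the estimate of `p_true` is itself noisy, which is exactly the weakness of asking an LLM for "exact" implied odds.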