Deedy
Sep 24, 2019 · 6 tweets
1/ The Fairness for High-Skilled Immigrants Act, 2019, or #HR1044/#S386, which would have removed per-country caps on employment-based green cards in the US, primarily benefiting Indian and Chinese nationals by cutting the estimated wait time for Indians from ~150 years to ~10...
2/ ... was blocked in the Senate by Sen. David Perdue after receiving bipartisan support in the House. If you were Indian and moved to the US for an undergraduate degree in 2001, you'd be 36 today, have spent half your life in the country, and still not have a green card.
3/ You might be married with kids, but if you lose your job you might have to leave your family, after paying for a college degree and 14 years' worth of usually fairly high taxes. Isn't that absurd?
4/ Despite being Indian and a beneficiary of this bill, I see problems with it. One, most Indians in the backlog are not high-skilled tech workers but cheap outsourced labour from IT consultancies like Wipro and Infosys.
5/ Two, without a smoother cap-removal transition plan, this would essentially flood the green card quota with Indians for the next ~10 years, throttling competent candidates of other nationalities.
6/ If those two issues are fully addressed, I think this bill will be unanimously favored, @sendavidperdue will let it pass, and hopefully Trump will sign it!

More from @deedydas

Jul 22
The IMO is the hardest high school math competition. A lesser-known sibling, the IOL (International Linguistics Olympiad), starts tomorrow!

Students are asked to translate lesser-known languages purely using logic. 5 problems, 6hrs.

The problems seem absolutely impossible.
And if that weren't hard enough, they also have a 3-4hr team challenge!

This entire idea of the self-sufficient linguistics problem originated in the 1960s with Russian linguists at Moscow State University.
2023 individual paper: ioling.org/booklets/iol-2…
Full problem catalog: ioling.org/problems/by_ye…
Jul 13
HUGE Immigration News for International Entrepreneurs!!

If you own 10%+ of a US startup entity founded <5 years ago that has raised $264k+ in qualified funding, up to 3 co-founders, plus their spouses, can come work in the US for 2.5 years, with renewal to 5 years.

Startups globally can now come build in SF!

1/5
A "qualified investor" has to be a US citizen or PR who has made $600-650k in prior investments with 2+ startups creating 5+ jobs or generating $530k revenue growing 20% YoY.

If you don't meet the funding requirement, don't lose hope. You CAN provide alternate evidence.

2/5
For the renewal to 5yrs, you need to maintain 5% ownership, create 5+ jobs and reach $530k+ revenue growing 20% YoY or $530k+ in investment, although alternative criteria can be used.

3/5
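
Since the thresholds above are easy to mix up, here's a minimal Python sketch that encodes them as a checklist. The dollar amounts and percentages come from the thread; the field names, function names, and the exact and/or structure are my own illustrative reading, not the official rule.

```python
# Hypothetical checklist for the criteria described in this thread.
# Thresholds come from the tweets above; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class Startup:
    years_since_formation: float
    qualified_funding_usd: float   # funding from qualified US investors
    founder_ownership_pct: float   # applicant's ownership stake

def initial_parole_eligible(s: Startup) -> bool:
    """Rough check of the initial 2.5-year criteria from the thread."""
    return (
        s.years_since_formation < 5
        and s.qualified_funding_usd >= 264_000
        and s.founder_ownership_pct >= 10
    )

def renewal_eligible(ownership_pct: float, jobs_created: int,
                     revenue_usd: float, revenue_growth_pct: float,
                     new_investment_usd: float) -> bool:
    """Rough check of the 5-year renewal criteria; alternative evidence exists."""
    return (
        ownership_pct >= 5
        and jobs_created >= 5
        and ((revenue_usd >= 530_000 and revenue_growth_pct >= 20)
             or new_investment_usd >= 530_000)
    )

print(initial_parole_eligible(Startup(2, 300_000, 15)))  # True
```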
Jun 28
A theory of why Claude 3.5 Sonnet is insane at coding: mechanistic interpretability.

Anthropic showed that there are clever ways to understand what the weights of LLMs do and "steer" them to behave differently.

Doing this on Sonnet may be why it crushes it at code:

🧵

1/12
When you try to understand each neuron of a model on its own, it doesn't make sense.

This is superposition: each neuron represents many features, but combinations of them may represent a single "feature" that we can make sense of.

2/12
In order to make sense of it, you can train a sparse autoencoder (SAE) on the model's activations.

The basic idea is to encode the activations into a sparse set of feature values, try to reconstruct the original activations from them, and keep adjusting the encoder and decoder to minimize the reconstruction error.

3/12
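
For intuition, here's a minimal sparse autoencoder sketch in PyTorch. The dimensions, the ReLU encoder, and the L1 sparsity penalty are illustrative choices, not Anthropic's actual training setup.

```python
# Minimal SAE: reconstruct activations from a wider, sparse feature layer.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activations -> features
        self.decoder = nn.Linear(d_features, d_model)  # features -> activations

    def forward(self, x):
        features = torch.relu(self.encoder(x))         # non-negative, mostly-zero features
        return self.decoder(features), features

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(64, 512)                            # stand-in for real model activations

for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruct the activations well while keeping few features active (L1 penalty).
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Each learned feature can then be inspected, and its activation dialed up or down to "steer" the model's behavior.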
Jun 25
No one talks about the Yahoo mafia but they’re so cracked

Jeff Weiner—CEO LinkedIn
Gideon Yu—CFO Facebook, YouTube
Stewart Butterfield—Founder Slack
Tim Tully—CTO Splunk, Partner Menlo
Amjad Masad—Founder Replit
Jess Lee—Partner Sequoia
Brian Acton, Jan Koum—Founder WhatsApp

1/3
Amr Awadallah—Founder Cloudera, Vectara
Dave Goldberg—CEO SurveyMonkey [RIP]
Dan Rosensweig—CEO Chegg
Eddie Wu—CEO Alibaba
Brad Garlinghouse—CEO Ripple
Blake Irving—CEO GoDaddy
Chad Dickerson—CEO Etsy
Rich Riley—CEO Shazam
Greg Coleman—President Buzzfeed

2/3
Alex Stamos—CISO Facebook
Dawn Airey—CEO Getty Images
Jennifer Dulski—President Change.org
Arjun Sethi—Founder Tribe Capital
Farhad Massoudi—Founder Tubi
Marissa Mayer—CEO Sunshine
Qi Lu—COO Baidu
Andrew Braccia—Partner Accel

3/3
Jun 17
How to understand GitHub code repos with LLMs in 2 mins.

Even after 10 years of engineering, dissecting a large codebase is daunting.
1. Dump the code into one big file
2. Feed it to Gemini 1.5 Pro (2M context)
3. Ask it anything.

Here, I dissect DeepFaceLab, a deepfake repo:

1/11
Clone the repo and ask Claude to generate some bash to dump the raw contents into one file.

The final dump has the filenames and their contents.

2/11
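
As a stand-in for that Claude-generated bash, here's a small Python sketch that walks a cloned repo and writes filenames plus contents into one file. The extension list and output filename are arbitrary choices.

```python
# Dump a repo's source files into a single text file, one header per file.
from pathlib import Path

EXTENSIONS = {".py", ".md", ".txt", ".yaml", ".json"}

def dump_repo(repo_dir: str, out_file: str = "repo_dump.txt") -> None:
    with open(out_file, "w", encoding="utf-8") as out:
        for path in sorted(Path(repo_dir).rglob("*")):
            if path.is_file() and path.suffix in EXTENSIONS:
                out.write(f"\n===== {path} =====\n")  # filename header
                out.write(path.read_text(encoding="utf-8", errors="ignore"))

dump_repo("DeepFaceLab")  # assumes the repo has already been cloned locally
```
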
Ask Claude to generate a list of interesting ways to understand a code repository.
Feed the contents and the questions to Gemini 1.5 Pro, with a huge 2M token context window.

3/11
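
A rough sketch of that step, assuming the google-generativeai Python package; the prompt and file name are illustrative, and you'd need your own API key.

```python
# Send the whole repo dump plus a question to Gemini 1.5 Pro.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

code_dump = open("repo_dump.txt", encoding="utf-8").read()
question = "Walk me through the overall architecture of this repository."

response = model.generate_content(f"{question}\n\n{code_dump}")
print(response.text)
```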
Jun 11
ChatGPT in Siri was cool. Apple's own models are cooler.

~3B param Apple On-Device and a larger Apple Server model
—Time to first token: 0.6 ms per prompt token
—Generation speed: 30 tok/s, without token speculation
—10-100MB LoRA adapters loaded on the fly for each use case

The benchmarks speak for themselves.

1/5
Benchmarks

Granted, a lot of the benchmarks Apple showcased seem cherry-picked. They used an arbitrary writing-ability benchmark with no real citation. Using human raters is great, but I would've loved to see MMLU and other industry-standard benchmarks, not just IFEval.

2/5


Technical details I

— Inference optimization: low-bit palletization using LoRA adapters, with a mix of 2-bit and 4-bit configs and dynamic bit-rate selection
— Shared input and output embedding tables to reduce memory requirements (49K vocab for on-device, 100K for server)

3/5
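
For intuition on the shared embedding tables, here's a tiny PyTorch sketch of tying the output projection to the input embedding, so one table is stored instead of two. The sizes are illustrative, not Apple's actual configuration.

```python
# Weight tying: input embedding and output projection share one parameter table.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int = 49_000, d_model: int = 1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
        self.lm_head.weight = self.embed.weight  # one shared table for input and output

    def forward(self, token_ids):
        h = self.embed(token_ids)    # (batch, seq, d_model); a real model would
        return self.lm_head(h)       # run transformer blocks before the head

model = TinyLM()
logits = model(torch.randint(0, 49_000, (1, 8)))
print(logits.shape)  # torch.Size([1, 8, 49000])
```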