Emil Wallner
Founder @palettefm. Former resident at @googlearts.
Jul 1, 2020 11 tweets 4 min read
🧵Income strategies to support your ML research:

1. Github Sponsors / Patreon

@calebporzio is making $100k/yr from a free open source project: users who sponsor him get access to his video tutorials.

calebporzio.com/i-just-hit-dol…

2. Production Maintenance Fees

Spend a few months getting really good at turning Colab notebooks into production APIs on Kubernetes. Spend a year putting models in production, then charge a monthly fee to maintain the clusters.
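As a minimal sketch of the notebook-to-API step, here is one way to wrap a trained model in an HTTP endpoint. This is an illustration, not a prescribed setup: the model file model.joblib and the feature schema are hypothetical, and the Kubernetes half (Dockerfile, Deployment, Service) is left out.

```python
# pip install fastapi uvicorn joblib scikit-learn
# Minimal model-serving API: load a pickled estimator and expose /predict.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical: any trained sklearn estimator

class Features(BaseModel):
    values: list[float]  # flat feature vector; length must match training data

@app.post("/predict")
def predict(features: Features):
    # predict() expects a 2D array: one row per sample
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

# Run locally: uvicorn main:app --host 0.0.0.0 --port 8000
# For production, containerize this and put it behind a Kubernetes Service.
```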
Jun 24, 2020 9 tweets 2 min read
The coming education backbone for tech talent:

1. Peer-to-peer grading

It works. There are already 50 schools operating without teachers. At this growth rate, more than a million students will graduate from these schools in less than a decade.

2. Interviewing-as-a-Service (IaaS)

In a remote-first market, interviewing will be centralized. Interview with a few IaaS providers, and have your evaluation sent to all the world’s companies looking for your skillset.
Nov 11, 2019 13 tweets 5 min read
François Chollet’s core point: We can't measure an AI system's adaptability and flexibility by measuring a specific skill.

With unlimited data, models can simply memorize their way to a specific skill. To advance AGI we need to quantify and measure ***skill-acquisition efficiency***.

Let’s dig in👇

In the 1970s, many thought chess captured the entire scope of rational human thought, and that solving chess with computers would lead to major leaps in cognitive understanding. But after IBM's Deep Blue, researchers realized they had gained no better understanding of human thinking.
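Chollet formalizes this with algorithmic information theory; as a rough, illustrative stand-in (not his formal definition), you can think of efficiency as skill gained per unit of experience. Everything in this sketch is hypothetical:

```python
# Illustrative only: a crude proxy for skill-acquisition efficiency,
# i.e. how much held-out skill a system gains per training example.
def skill_acquisition_efficiency(score_before: float,
                                 score_after: float,
                                 training_examples: int) -> float:
    """Skill gained per example; higher means generalizing from less data."""
    return (score_after - score_before) / training_examples

# Two systems reaching the same skill, one from 1k examples, one from 1M:
print(skill_acquisition_efficiency(0.10, 0.80, 1_000))      # 0.0007
print(skill_acquisition_efficiency(0.10, 0.80, 1_000_000))  # 0.0000007
```

Under this lens, unlimited-data memorization scores poorly even when the final skill is high, which is the point of the argument.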
Oct 17, 2019 12 tweets 5 min read
Machine learning portfolio tips

1. Good ideas come from ML sources that are a bit quirky.

- NeurIPS from 1987 - 1997
- Stanford’s CS224n & CS231n projects
- Twitter likes from ML outliers
- ML Reddit’s WAYR
- Kaggle Kernels
- Top 15-40% papers on Arxiv Sanity

2. Time is your unfair advantage.

Untangle student papers, e.g. back-propagation was introduced in a Finnish master's student's thesis. Revisit old ideas that might work now (ANNs), or tinker with the StyleGAN-BigGAN hybrid that was just released in a Kaggle kernel.
Aug 16, 2019 7 tweets 3 min read
Tips for AI writers:

1. Spend 30% of your effort skimming all the student ML papers from the past 3 years (e.g. Stanford's CS224n NLP projects) and prototyping your favorites

The idea is everything. Pick an area you are interested in, ideally something with a visual aspect to it.

Most of my top-of-mind ideas were bad in retrospect. Skimming 100s of student papers will give you an overview of what's interesting.

Student papers are overlooked, easy to understand, and built under realistic compute constraints.