1/ My notes from #MLOpsWorld2021. It covers:
- Data-Centric Pipeline
- Scalable ML Platforms
- ML Org Failure Modes
- Model Monitoring/Debugging
- Data Logging
- Programmatic Labeling
My favs:
- Under-promise and over-deliver
- Work where there are no words for what you do
- The reward for good work is more work
- Aim to have others respect you
- Being wise means having more questions than answers
- Compliment people behind their back 👇 kk.org/thetechnium/99…
- You are only as young as the last time you changed your mind
- Be strict with yourself and forgiving of others
- Calm is contagious
- Always cut away from yourself
- Measure twice, cut once
- Work for something much larger than yourself 👇
Apr 17, 2021 • 21 tweets • 3 min read
Recently finished reading @AdamMGrant, so I decided to jot down these practical takeaways to improve my rethinking skills. They might be relevant to you as well! 👇
1 - Think Like A Scientist:
When forming an opinion, resist the temptation to preach, prosecute, or politick. Treat your emerging view as a hunch or a hypothesis and test it with data.
Apr 14, 2021 • 13 tweets • 10 min read
1/ My notes from @scale_AI Transform ✍️ It covers:
- Building good data
- Future of ML frameworks
- Challenges for scalable deployment
- How to assess ML maturity
Thanks @alexandr_wang and team for organizing the best AI conference of 2021 thus far. Enjoy!
jameskle.com/writes/scale-t…
2/ @AndrewYNg: "The single most important thing that an MLOps team needs to do is to ensure consistently high-quality data throughout all stages of the ML project life cycle."
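To make "consistently high-quality data" concrete, here is a minimal sketch of the kind of validation gate an MLOps pipeline might run on every batch before training or serving. The schema, label set, and thresholds are my own illustrative assumptions, not anything specified in the talk.

```python
import pandas as pd

# Hypothetical quality gate: the columns, labels, and thresholds below
# are illustrative assumptions, not from the talk.
EXPECTED_COLUMNS = {"text", "label"}
ALLOWED_LABELS = {"positive", "negative", "neutral"}
MAX_NULL_FRACTION = 0.01
MAX_DUPLICATE_FRACTION = 0.05


def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in this batch."""
    problems = []

    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
        return problems  # later checks depend on these columns

    null_frac = df[list(EXPECTED_COLUMNS)].isna().any(axis=1).mean()
    if null_frac > MAX_NULL_FRACTION:
        problems.append(f"{null_frac:.1%} of rows have nulls")

    bad_labels = set(df["label"].dropna().unique()) - ALLOWED_LABELS
    if bad_labels:
        problems.append(f"unexpected labels: {sorted(bad_labels)}")

    dup_frac = df.duplicated(subset="text").mean()
    if dup_frac > MAX_DUPLICATE_FRACTION:
        problems.append(f"{dup_frac:.1%} duplicate texts")

    return problems


if __name__ == "__main__":
    batch = pd.DataFrame(
        {"text": ["great", "great", "meh"],
         "label": ["positive", "positive", "unknown"]}
    )
    for issue in validate_batch(batch):
        print("DATA QUALITY:", issue)
```

Running the same gate at ingestion, labeling, training, and serving time is one simple way to keep data quality consistent across the whole life cycle rather than checking it once up front.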
Aug 25, 2020 • 7 tweets • 8 min read
~ New Post ~
During this quarantine time, I binge-watched the @Stanford #CS330 lectures taught by the brilliant @chelseabfinn. This blog post summarizes the key takeaways on #Bayesian Meta-Learning. #AtHomeWithAI
(1/7) 👇
Bayesian meta-learning generates hypotheses about the underlying function, samples from the data distribution, and reasons about model uncertainty. It is suitable for problems in safety-critical domains, exploration strategies for meta-RL, and active learning.
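As a toy illustration of that idea (my own sketch, not code from the CS330 lectures), here is the simplest Bayesian flavor of meta-learning: a linear model whose prior over task weights is estimated from many related tasks, so that on a new task we can sample hypotheses about the underlying function from the posterior and quantify predictive uncertainty after seeing only a few examples. All names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_task(n, noise=0.1):
    """Sample a toy task: y = w0 + w1*x with task-specific weights."""
    w = rng.normal([0.0, 1.0], [0.3, 0.3])   # true weights for this task
    x = rng.uniform(-2, 2, size=n)
    X = np.stack([np.ones(n), x], axis=1)     # design matrix [1, x]
    y = X @ w + noise * rng.normal(size=n)
    return X, y, w


# 1) "Meta-train": estimate a prior over weights from many related tasks
#    (empirical Bayes over the task distribution).
train_ws = np.stack([make_task(50)[2] for _ in range(200)])
prior_mean = train_ws.mean(axis=0)
prior_cov = np.cov(train_ws.T)

# 2) A new task with only a handful of examples (the support set).
X_s, y_s, true_w = make_task(5)
noise_var = 0.1 ** 2

# 3) Posterior over weights: standard Bayesian linear regression update.
prior_prec = np.linalg.inv(prior_cov)
post_cov = np.linalg.inv(prior_prec + X_s.T @ X_s / noise_var)
post_mean = post_cov @ (prior_prec @ prior_mean + X_s.T @ y_s / noise_var)

# 4) Sample hypotheses about the underlying function and report
#    predictive uncertainty at a query point.
x_q = np.array([1.0, 1.5])                    # query point [1, x]
hypotheses = rng.multivariate_normal(post_mean, post_cov, size=5)
pred_var = x_q @ post_cov @ x_q + noise_var

print("true weights:      ", true_w)
print("posterior mean:    ", post_mean)
print("sampled hypotheses:", hypotheses @ x_q)
print("predictive std:    ", np.sqrt(pred_var))
```

The sampled hypotheses correspond to plausible underlying functions, and the predictive standard deviation is the uncertainty estimate that makes this framing useful for safety-critical decisions, exploration in meta-RL, and active learning.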