Krishna Gade
Founder & CEO at @fiddlerlabs, Building Trust into AI. Prior: @facebook, @pinterest, @twitter, @microsoft
Jan 20, 2022 16 tweets 7 min read
I was a first-time founder when we started @fiddlerlabs, and we founded it with a big mission to build trust in AI.

This is a thread on the journey we went through over the last 3 years running the company and the lessons we learned. 🧵

First up, the startup journey is not for the faint-hearted. The highs are super high and the lows are super low. I am lucky to have a great co-founder in @amitpaka, who has been there throughout this journey.

So, if you’re thinking of getting a co-founder - you should! /2
Dec 2, 2021 15 tweets 4 min read
Zillow recently shut down its AI-enabled iBuyer program, which overpaid for thousands of houses in summer 2021, and laid off 25% of its staff.

Are teams managing these AI risks well?

THREAD: How to build a robust Model Risk Management (MRM) process in your company?

1/ Zillow invested a ton of money into its AI-enabled home-flipping program called Zillow Offers.

They bought up thousands of houses per month, whereupon the homes would be renovated and sold for a profit.

Unfortunately, things didn’t go to plan.

bloomberg.com/news/articles/…
Oct 7, 2021 7 tweets 2 min read
I was an engineer on Facebook's News Feed and this is NOT how recommender systems work.

While users can set some explicit preferences, implicit user activity on the app is the bulk of the signal that gets fed into the AI systems which control & rank the feed. /thread

So, if you're engaging with a certain type of content from a certain set of friends, the stories from those sources get ranked higher than others. This is true for Facebook, YouTube, or any other recommender system. /2
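Below is a minimal sketch of that idea, assuming hypothetical signal names and weights; it is not Facebook's actual model, only an illustration of how predicted engagement and viewer-author affinity can dominate the ranking score.

# Hedged sketch: engagement-driven feed ranking. All field names, weights,
# and the scoring formula are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Candidate:
    story_id: str
    author_id: str
    p_like: float      # predicted probability the viewer likes the story
    p_comment: float   # predicted probability the viewer comments
    p_click: float     # predicted probability the viewer clicks through
    affinity: float    # historical viewer-author engagement, scaled to 0..1

def score(c: Candidate) -> float:
    # Implicit signals (predicted engagement, past affinity) drive the score;
    # explicit user preferences typically only filter or nudge the candidates.
    return c.affinity * (4.0 * c.p_comment + 2.0 * c.p_like + 1.0 * c.p_click)

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    # Stories from sources you engage with most end up at the top of the feed.
    return sorted(candidates, key=score, reverse=True)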
Feb 11, 2021 11 tweets 4 min read
I was an eng leader on Facebook’s News Feed and my team was responsible for the feed ranking platform.

Every few days an engineer would get paged because a metric, e.g., “likes” or “comments”, was down.

It usually translated to a machine learning model performance issue. /thread

2/ The engineer’s typical workflow to diagnose the alert was to first check our internal monitoring system Unidash to see if the alert was real, and then dive into Scuba to investigate further.
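As a generic illustration of that first step (confirming the drop is real before slicing deeper), here is a small sketch; the thresholds, data, and function names are assumptions and have nothing to do with the internal Unidash or Scuba tooling.

# Hedged sketch: sanity-check whether a daily metric (e.g., "likes") is truly
# down relative to a trailing baseline before diving into segment-level data.
import statistics

def is_metric_down(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today < mean
    z = (today - mean) / stdev          # how many standard deviations below normal
    return z < -z_threshold

# Example: the past week's daily "likes" counts vs. today's count.
past_week = [1_020_000, 1_015_000, 998_000, 1_030_000, 1_010_000, 1_005_000, 1_025_000]
print(is_metric_down(past_week, today=880_000))   # True -> investigate model performance next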
Nov 25, 2019 11 tweets 2 min read
With last week’s launch of Google Cloud’s Explainable AI, the conversation around #ExplainableAI has accelerated.

But it raises the questions: should Google be explaining its own AI algorithms? Who should be doing the explaining? /thread

2/ What do businesses need in order to trust the predictions?

a) They need explanations so they understand what’s going on behind the scenes.

b) They need to know for a fact that these explanations are accurate and trustworthy and come from a reliable source.
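For a concrete sense of what such an explanation can look like, here is a small sketch using the open-source shap package; the model and data are placeholders, not any vendor's product.

# Hedged sketch: per-prediction feature attributions with the open-source shap
# package. The synthetic data and RandomForestRegressor are placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(200, 4)                              # 200 rows, 4 features
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * np.random.rand(200)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])             # contributions for one prediction
print(attributions)                                     # one signed contribution per feature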
Oct 10, 2019 14 tweets 3 min read
We've been working on #ExplainableAI at @fiddlerlabs for a year now; here is a thread on some of the lessons we learned over this time.

2/ There is no consensus on what "Explainability" means, and people use many different words to mean it.
Sep 19, 2019 11 tweets 3 min read
It is amazing to see so many applications of game theory in modern software applications such as search ranking, internet ad auctions, recommendations, etc. An emerging application is applying Shapley values to explain complex AI models. #ExplainableAI

The Shapley value was named after its inventor, Lloyd S. Shapley. It was devised as a method to distribute the value of a cooperative game among the players in proportion to their contribution to the game's outcome.
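For reference, the standard Shapley value formula: for a cooperative game with player set N and value function v, player i's payout is

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)

In model explanation, the players are the input features and v(S) is taken to be the model's expected output when only the features in S are known, so each feature receives a signed contribution to the prediction.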