It is amazing to see so many applications of game theory in modern software applications such as search ranking, internet ad auctions, recommendations, etc. An emerging application is in applying Shapley values to explain complex AI models. #ExplainableAI
The Shapley value was named after its inventor, Lloyd S. Shapley. It was devised as a method to distribute the value of a cooperative game among the players of the game in proportion to their contributions to the game's outcome.
Suppose 10 people come together to start a company that produces some revenue. How would you distribute the revenue of the company among the 10 people as payoffs, so that the payoffs are fair and proportionate to their contributions?
A naive implementation of this idea would be to pay each member according to how much she increases the value of the group of all other members when she joins it. So the payoff of member A would be v(Group) - v(Group - A), where v() is a value function defined on groups.
However, this approach does not take interactions and network effects into consideration. We see this even in sports teams, where a player's effectiveness or value goes up or down depending on which other players are on the team.
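As a minimal sketch of this naive payoff, here is a toy 3-person company with a hypothetical value function (all numbers are made up for illustration; members 0 and 1 only create extra value together):

```python
# Hypothetical value function: revenue produced by a coalition of members.
# Members 0 and 1 have a synergy bonus that only appears when both join.
def v(coalition):
    s = frozenset(coalition)
    base = {0: 10, 1: 10, 2: 20}
    synergy = 30 if {0, 1} <= s else 0
    return sum(base[m] for m in s) + synergy

members = [0, 1, 2]
group = frozenset(members)

# Naive payoff: what each member adds on top of all the others.
naive = {m: v(group) - v(group - {m}) for m in members}
print(naive)                           # {0: 40, 1: 40, 2: 20}
print(sum(naive.values()), v(group))   # 100 vs 70
```

Note that these naive payoffs need not even sum to the total value of the group: here they add up to 100 while the company only produces 70.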
Another approach could be to fix an ordering of the players and pay each one according to how much she contributes to the group formed by her predecessors. So player 1 receives v(1), player 2 receives v(1,2) - v(1), player 3 receives v(1,2,3) - v(1,2), and so on.
This also suffers from a problem: players who have an identical impact don't receive identical payoffs. Shapley's insight, which led to the definition of the Shapley value, was that this dependence on the ordering can be eliminated by averaging over all possible orderings of the players.
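A sketch of this ordering-based scheme, using a made-up value function in which members 0 and 1 are identical and only create extra value together:

```python
# Toy value function (illustrative numbers): members 0 and 1 are
# interchangeable and get a synergy bonus only when both are present.
def v(coalition):
    s = frozenset(coalition)
    base = {0: 10, 1: 10, 2: 20}
    synergy = 30 if {0, 1} <= s else 0
    return sum(base[m] for m in s) + synergy

def ordering_payoffs(order):
    """Pay each player her marginal contribution to her predecessors."""
    payoffs, seen = {}, frozenset()
    for p in order:
        payoffs[p] = v(seen | {p}) - v(seen)
        seen = seen | {p}
    return payoffs

print(ordering_payoffs((0, 1, 2)))  # {0: 10, 1: 40, 2: 20}
print(ordering_payoffs((1, 0, 2)))  # {1: 10, 0: 40, 2: 20}
```

Players 0 and 1 contribute identically, yet whoever happens to come second in the ordering captures the whole synergy bonus.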
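Averaging over all orderings can be sketched directly with a brute-force enumeration of permutations (fine for toy games; real implementations sample orderings instead). The value function below is the same made-up example with two identical, synergistic players:

```python
from itertools import permutations

# Toy value function (illustrative numbers): members 0 and 1 are identical
# and only create extra value together.
def v(coalition):
    s = frozenset(coalition)
    base = {0: 10, 1: 10, 2: 20}
    synergy = 30 if {0, 1} <= s else 0
    return sum(base[m] for m in s) + synergy

def shapley(players):
    """Average each player's marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for p in order:
            totals[p] += v(seen | {p}) - v(seen)
            seen = seen | {p}
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley([0, 1, 2]))  # {0: 25.0, 1: 25.0, 2: 20.0}
```

The identical players 0 and 1 now receive identical payoffs: each gets half of the 30-unit synergy on top of her base contribution of 10.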
The approach works surprisingly well and provides some nice guarantees! For example, if there is a dummy player who contributes nothing, she receives a payoff of 0. And the Shapley values are efficient: added up across the players, they recover the total value of the group.
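Both guarantees can be checked on a small made-up game (assumption: player "d" is a dummy who adds nothing to any coalition):

```python
from itertools import permutations

# Made-up additive game: each player contributes a fixed amount,
# and "d" is a dummy who contributes 0 to every coalition.
def v(coalition):
    base = {"a": 10, "b": 30, "d": 0}
    return sum(base[m] for m in frozenset(coalition))

def shapley(players):
    """Average each player's marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for p in order:
            totals[p] += v(seen | {p}) - v(seen)
            seen = seen | {p}
    return {p: t / len(orders) for p, t in totals.items()}

phi = shapley(["a", "b", "d"])
print(phi)                                       # {'a': 10.0, 'b': 30.0, 'd': 0.0}
print(sum(phi.values()) == v({"a", "b", "d"}))   # True: payoffs sum to the total
```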
Now, applying this concept to machine learning, one can model the ML algorithm as a game and the constituent features as the players. Since for complex AI models it is difficult to know why the model came up with a prediction, Shapley values can help us take a look inside.
In this setting, the Shapley value of a feature is its expected marginal contribution to the model prediction. Say a complex credit-lending model denies someone a loan; we can look at the Shapley values of the features like FICO score, salary, $ amount requested, etc.
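The lending example can be sketched end to end. Everything here is an assumption for illustration: the scoring function stands in for a complex model, the applicant and baseline values are invented, and absent features are filled in from the baseline (one common but not unique way to define the coalition value function):

```python
from itertools import permutations

# Hypothetical credit model (a stand-in for a complex ML model).
def model(fico, salary, amount):
    return 0.4 * (fico / 850) + 0.4 * (salary / 200_000) - 0.2 * (amount / 50_000)

applicant = {"fico": 600, "salary": 50_000, "amount": 40_000}  # made-up applicant
baseline  = {"fico": 700, "salary": 80_000, "amount": 20_000}  # made-up reference

# Value of a feature coalition S: prediction with features in S taken from
# the applicant and the rest filled in from the baseline.
def v(S):
    x = {f: (applicant[f] if f in S else baseline[f]) for f in applicant}
    return model(**x)

features = list(applicant)
shap_vals = {f: 0.0 for f in features}
orders = list(permutations(features))
for order in orders:
    seen = frozenset()
    for f in order:
        shap_vals[f] += v(seen | {f}) - v(seen)
        seen = seen | {f}
shap_vals = {f: s / len(orders) for f, s in shap_vals.items()}
print(shap_vals)  # all negative: each feature pushed the score below the baseline
```

The three values sum exactly to the gap between the applicant's score and the baseline score, so they attribute the denial across the features.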
Shapley values are the basis of many ML explanation algorithms that people are using today in practice. At @fiddlerlabs we're working on some cool research in this space; feel free to ping me or @ankurtaly if you want to learn more!
Thread by Krishna Gade