I used to think that making explainable recommendations required an interpretable model. I was wrong, and in order to understand why, you need to understand a few things about the structure of industrial recommender systems. 🧵
The structure I used to picture involved using interaction data and a model to generate vectors for users and items (matrix factorization, word embeddings, etc), and then making recs by finding items similar to each user vector with approximate nearest neighbor search.
That picture is the sort of thing you'll often see in introductory texts about recommender systems, and while it's sufficient to generate a list of recs, it doesn't provide an easy way to explain why any particular item was selected.
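To make that two-step picture concrete, here's a minimal sketch. The factorization method, library choices, and all the numbers are placeholders I picked for illustration, not anything from the thread:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Toy implicit-feedback matrix: rows are users, columns are items.
rng = np.random.default_rng(0)
interactions = csr_matrix(rng.binomial(1, 0.1, size=(100, 500)).astype(float))

# Factorize into user and item vectors (truncated SVD standing in for ALS, word2vec, etc).
u, s, vt = svds(interactions, k=16)
user_vectors = u * s        # shape (n_users, 16)
item_vectors = vt.T         # shape (n_items, 16)

def recommend(user_id, n=10):
    # Brute-force similarity here; at scale you'd use an ANN index (Faiss, Annoy, ScaNN, ...).
    scores = item_vectors @ user_vectors[user_id]
    return np.argsort(-scores)[:n]

print(recommend(user_id=0))
```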
So I figured that explainable recs required a model that was somehow introspectable and that used the sorts of metadata that show up in explanations. And there are indeed contextual recommender models capable of incorporating side information (e.g. factorization machines).
However, that's a somewhat simplified/condensed picture of how industrial recommender systems work. In practice, they tend to have 2-4 distinct phases (a rough skeleton in code follows the list), including:
• Candidate selection
• Filtering
• Scoring
• Ordering
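None of the names below come from the thread; it's just a minimal skeleton, with stand-in stubs, of how those four phases might chain together:

```python
# Hypothetical end-to-end skeleton of the four phases; every helper here is a stand-in.
def select_candidates(user):      # phase 1: several sources, each tagged with a reason
    return [("item_a", "similar to your history"), ("item_b", "your friend liked this")]

def passes_filters(user, item):   # phase 2: drop candidates we shouldn't show
    return item not in user["seen"]

def score(user, item):            # phase 3: relevance from a (possibly black-box) model
    return hash((user["id"], item)) % 100 / 100

def order(scored):                # phase 4: often more than a plain descending sort
    return sorted(scored, key=lambda pair: -pair[0])

def recommend(user, n=10):
    candidates = [(item, why) for item, why in select_candidates(user)
                  if passes_filters(user, item)]
    scored = [(score(user, item), item, why) for item, why in candidates]
    return order(scored)[:n]

print(recommend({"id": 1, "seen": {"item_b"}}))
```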
Candidate selection generates a set of potential recs, possibly from several different sources or methods. A vector lookup for user -> item similarity might be one of them, but candidates could also come from seed items ("More like...") or graph traversals ("Your friend liked...")
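For example (a sketch with invented sources and explanation templates), each candidate source can attach the reason it retrieved an item:

```python
import numpy as np

# Hypothetical candidate sources; each tags its output with an explanation template.
def from_user_vector(user_vec, item_vecs, n=50):
    scores = item_vecs @ user_vec
    return [(int(i), "Recommended for you") for i in np.argsort(-scores)[:n]]

def from_seed_item(seed_id, item_vecs, n=50):
    scores = item_vecs @ item_vecs[seed_id]
    return [(int(i), f"More like item {seed_id}")
            for i in np.argsort(-scores)[:n] if i != seed_id]

def from_social_graph(friend_likes):
    return [(item, f"Your friend {name} liked this") for name, item in friend_likes]

item_vecs = np.random.randn(500, 16)
user_vec = np.random.randn(16)
candidates = (from_user_vector(user_vec, item_vecs)
              + from_seed_item(42, item_vecs)
              + from_social_graph([("Ana", 7), ("Bo", 99)]))
```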
Filtering removes any candidates that wouldn't be appropriate to recommend, for whatever reason. They might be items that the user has already interacted with, items in a language the user doesn't know, items with explicit content, and so on.
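A toy version of that filtering step, with made-up user and item metadata:

```python
# Hypothetical candidate list: (item_id, explanation) pairs from the selection phase.
candidates = [(7, "More like item 42"), (99, "Your friend Ana liked this"),
              (3, "Recommended for you"), (250, "Recommended for you")]

already_seen = {7, 42}
user_languages = {"en", "pt"}
item_language = {3: "en", 7: "en", 99: "fr", 250: "en"}
explicit_items = {250}

def keep(item_id):
    return (item_id not in already_seen
            and item_language.get(item_id) in user_languages
            and item_id not in explicit_items)

filtered = [(item, why) for item, why in candidates if keep(item)]
print(filtered)   # only (3, "Recommended for you") survives
```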
After filtering, each of the remaining items can be given a score to indicate its relevance for this particular user. A scoring model might take into account not only the user and item history and attributes, but also the candidate source and corresponding explanation.
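Here's a rough sketch of what "the candidate source as a feature" could look like; the features and weights are invented, and a real scoring model would be learned (and is often a black box):

```python
import numpy as np

# Hypothetical feature builder: user history stats, item attributes,
# plus which candidate source (and therefore which explanation) produced the pair.
SOURCES = ["user_vector", "seed_item", "social_graph"]

def features(user, item, source):
    source_onehot = [1.0 if source == s else 0.0 for s in SOURCES]
    return np.array([user["n_plays"], item["popularity"], item["age_days"]] + source_onehot)

# Stand-in for learned weights; in practice these come from training, not hand-tuning.
weights = np.array([0.01, 0.5, -0.02, 0.3, 0.2, 0.4])

def relevance(user, item, source):
    return float(features(user, item, source) @ weights)

user = {"n_plays": 120}
item = {"popularity": 0.8, "age_days": 30}
print(relevance(user, item, "social_graph"))
```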
Finally, items are ordered based on their relevance, which is often more involved than sorting descending by score. The ordering might apply some exploration, trade off relevance for a diverse list of recs, or calibrate item attribute distributions to match the user's history.
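As one illustration of "more involved than sorting by score", here's an MMR-style greedy re-ranker that trades relevance against similarity to items already picked (my example, not a claim about any particular system):

```python
import numpy as np

def rerank(scored, item_vecs, lam=0.7, n=10):
    """scored: list of (score, item_id); item_vecs: item_id -> vector."""
    remaining, picked = list(scored), []
    while remaining and len(picked) < n:
        def value(pair):
            score, item = pair
            if not picked:
                return score
            # Penalize items similar to what's already in the list.
            sims = [float(item_vecs[item] @ item_vecs[p]) for _, p in picked]
            return lam * score - (1 - lam) * max(sims)
        best = max(remaining, key=value)
        picked.append(best)
        remaining.remove(best)
    return picked

vecs = {i: np.random.randn(8) for i in range(20)}
scored = [(np.random.rand(), i) for i in range(20)]
print(rerank(scored, vecs, n=5))
```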
Okay, so why doesn't providing explanations require an interpretable model? Well, as alluded to above, explanations can be generated in the candidate selection phase, based on the heuristics, relationships, or seed items used to select potential recommendations.
While candidate selection might involve a model (e.g. to produce vector embeddings for similarity search), the explanations actually come from outside the model, and are based on our human knowledge of how to find potentially relevant items.
Since we're still early in the process of generating recs, the explanations don't even need to be incredibly relevant to the user. We can use the later process of relevance scoring to select the best explanations for this user and weed out explanations that don't make sense.
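Concretely (toy scores, invented explanation strings): if the same item arrives from several sources, the scored candidates also tell you which explanation to show:

```python
# The same item can come from multiple sources, each with its own explanation.
# If the scoring model sees the source as a feature, the top-scoring entry per item
# doubles as the explanation picker.
scored = [
    (0.91, "item_7", "Your friend Ana liked this"),
    (0.64, "item_7", "Recommended for you"),
    (0.55, "item_3", "More like item 42"),
]

best = {}
for score, item, why in scored:
    if item not in best or score > best[item][0]:
        best[item] = (score, why)

for item, (score, why) in sorted(best.items(), key=lambda kv: -kv[1][0]):
    print(item, round(score, 2), "--", why)
```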
On the relevance side, people care about knowing "why this item?", but much less about knowing "why this order?" An interpretable relevance model is helpful for understanding and iterating on relevance scores, but it doesn't help much with providing user-facing explanations.
So that's how I learned to stop worrying and love the model (even if it isn't readily interpretable.) It turns out explanations are a lot easier than I thought they were, because I assumed they were fundamentally an ML problem, when they're actually a product design problem.