Christoph Molnar 🦋 christophmolnar.bsky.social
Author of Interpretable Machine Learning https://t.co/gJKlTA2deP | Newsletter: https://t.co/6fQuMr8yI8
Oct 2, 2023 10 tweets 3 min read
Which one is the better machine learning interpretation method?

LIME or SHAP?

Despite the gold rush in interpretability research, both methods are still OGs when it comes to explaining predictions.

Let's compare the giants.

Both LIME and SHAP aim to explain a prediction by attributing it to the individual features, meaning each feature gets a value.

Both are model-agnostic and work for tabular, image, and text data.

However, the philosophies of how these attributions are made differ.
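To make the comparison concrete, here's a minimal sketch of applying both methods to the same tabular model, assuming the `shap` and `lime` Python packages are installed (data and model are illustrative):

```python
# Sketch: LIME and SHAP explaining the same prediction (assumes shap + lime).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: attributions sum to prediction minus expected prediction
shap_explainer = shap.Explainer(model.predict_proba, X[:100])  # background sample
shap_values = shap_explainer(X[:1])
print(shap_values.values[0, :, 1])  # per-feature attributions for class 1

# LIME: fit an interpretable local surrogate model around the instance
lime_explainer = LimeTabularExplainer(X, mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top feature contributions of the local surrogate
```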
Sep 26, 2023 9 tweets 3 min read
Machine learning interpretability from first principles:

• A model is just a mathematical function
• The function can be broken down into simpler parts
• Interpretation methods address the behavior of these parts

Let's dive in.

A machine learning model is a mathematical function. It takes a feature vector and produces a prediction.

But writing down the function isn't practical, especially for complex models like neural networks or random forests. Even if you could, the formula wouldn't be interpretable.
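A small sketch of this first principle, assuming scikit-learn (the data and model are illustrative): the trained model is the function f, and "writing it down" would mean transcribing every split rule of every tree.

```python
# Sketch (assumes scikit-learn): a trained model is a function f(x) -> prediction.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The model, viewed as a mathematical function f: features -> prediction
f = model.predict
print(f(X[:1]))

# "Writing f down" would mean transcribing every split rule of every tree.
n_nodes = sum(tree.tree_.node_count for tree in model.estimators_)
print(f"The formula would have roughly {n_nodes} if-else branches.")
```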
Jul 25, 2023 7 tweets 3 min read
My favorite analogy to explain SHAP from explainable AI.

We start with a one-dimensional universe. Objects can move up or down. For better display, we move them left (=down) or right (=up).

There are only two objects in this simplified universe:

• A center of gravity
• A planet

The center of gravity is the expected prediction for our data, E(f(X)). It's the center of gravity in the sense that it's a "default" prediction: if we know nothing about a data point, this is where we expect the planet (= the prediction for a data point) to be.
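In the `shap` package, this center of gravity corresponds to the base value of an explanation. A minimal sketch, assuming `shap` and scikit-learn (data and model are illustrative):

```python
# Sketch (assumes shap + scikit-learn): the "center of gravity" is SHAP's
# base value, the expected prediction E(f(X)) over the background data.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)
sv = explainer(X[:1])

print(sv.base_values[0])        # the center of gravity: E(f(X))
print(model.predict(X[:1])[0])  # where the "planet" (the prediction) sits
# The SHAP values are the "forces" moving the planet away from the center:
print(sv.base_values[0] + sv.values[0].sum())  # equals the prediction
```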
May 9, 2023 12 tweets 3 min read
Bayesian modeling from first principles and memes.

Let's go.

The principle from which you can understand a lot of the basics in Bayesian modeling:

In Bayesian statistics, model parameters are random variables.
May 2, 2023 9 tweets 2 min read
It took me a long time to understand Bayesian statistics.

There are so many angles from which to approach it: Bayes' theorem, probability as a degree of belief, Bayesian updating, priors and posteriors, ...

But my favorite angle is the following first principle:

> In Bayesian statistics, model parameters are random variables.

The "model" here can be a simple distribution.

The mean of a distribution, the coefficient in logistic regression, the correlation coefficient – all of these parameters are random variables with their own distributions.
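A minimal sketch of that principle with a conjugate Beta-Binomial model, using only scipy (the coin example and the Beta(2, 2) prior are illustrative choices):

```python
# Sketch: the parameter (a coin's probability of heads) as a random variable.
# Conjugate Beta-Binomial updating, using only scipy.
from scipy import stats

# Prior: theta ~ Beta(2, 2), a vague belief centered at 0.5
a, b = 2, 2

# Data: 7 heads in 10 flips
heads, n = 7, 10

# The posterior is again a Beta distribution: Beta(a + heads, b + tails)
posterior = stats.beta(a + heads, b + n - heads)

print(posterior.mean())          # updated belief about theta
print(posterior.interval(0.95))  # 95% credible interval for theta
```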
May 1, 2023 4 tweets 1 min read
Modeling Mindsets summarized

Statistical Modeling – Reason Under Uncertainty
Frequentism – Infer "True" Parameters
Bayesianism – Update Parameter Distributions
Likelihoodism – Likelihood As Evidence
Causal Inference – Identify And Estimate Causes
Machine Learning – Learn Algorithms From Data
Supervised Learning – Predict New Data
Unsupervised Learning – Find Hidden Patterns
Reinforcement Learning – Learn To Interact
Deep Learning – Learn End-To-End Networks
Apr 24, 2023 12 tweets 3 min read
I make a living writing technical books about machine learning.

Naturally, I constantly ask myself what ChatGPT means for my job and how it can make my life easier.

Today I finally tried out GPT-4 to help me with a book draft.

A non-hype, real-world application of ChatGPT.

Context: I'm writing a book about SHAP, a technique for explainable machine learning.

I have code examples, all the materials and references, and a "bad draft" of the book already exists.

It's already readable end-to-end, but it's a very sloppy draft with lots of errors and clutter.
Mar 24, 2023 9 tweets 1 min read
Machine Learning isn't always easy.

But it can be!

Follow these 7 tips that feel illegal.

1/n
Your machine learning model misclassified some test data? Change the labels in question to conform to the model instead of improving the model. It's faster and guaranteed to improve performance.

2/n
Feb 23, 2023 6 tweets 1 min read
A head-scratcher for machine learning model selection:

Multiple predictive models might have (roughly) equal predictive performance.

So what's the deal? If they all have equal performance, we can just pick one and be done.

If it weren't for the Rashomon effect.

While the models might be equal in performance, they might make different predictions.

I wouldn't expect all predictions to be completely different, but for less certain data points the predictions may vary across models, depending on each model's inductive bias.
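A small sketch of the Rashomon effect, assuming scikit-learn (the data and the two models are illustrative):

```python
# Sketch (assumes scikit-learn): two models with roughly equal accuracy
# that still disagree on individual predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print(rf.score(X_te, y_te), lr.score(X_te, y_te))  # similar accuracy ...
disagree = (rf.predict(X_te) != lr.predict(X_te)).mean()
print(f"... but they disagree on {disagree:.1%} of test points")
```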
Feb 17, 2023 11 tweets 3 min read
Will it rain tomorrow?

Better check your weather forecast for that, powered by physics-based simulations and supercomputers.

But one day, your decision to bring an umbrella may be based on deep learning.

Forecasting rain, or the weather in general, is difficult. The atmosphere is a complex 3D system full of individual molecules. It's fluid. It's chaotic.

That's why it's difficult to forecast the weather reliably many days out.
Feb 16, 2023 4 tweets 1 min read
I once attended a workshop at ECML about reviewing papers. A question from the audience: "How much time to do a review?"

A valid question, because if you review for a conference, you might get 2-5 or even more papers to review.

The audience laughed in disbelief at the answer. The speaker said: "1 hour".

ONE HOUR

For skimming the paper, reading it more carefully, comprehending the formulas, understanding the research, critiquing the research, writing the review, logging into some archaic website, pasting your review, and rating the paper on multiple scales.
Jan 17, 2023 8 tweets 2 min read
One of the biggest misconceptions of explainable AI / interpretable machine learning:

The belief that the "explanation" of a model (output) must be similar to how humans explain their actions.

Here is a more realistic view of the IML/XAI landscape:

1/n
Many approaches that I would bundle under the XAI/IML umbrella are not motivated by "explain-like-a-human-would":

- Model audits
- Sensitivity analysis
- Feature attributions
- Functional decomposition
- Feature effect visualizations
- Feature importance rankings
- ...

2/n
Jan 3, 2023 9 tweets 2 min read
Regression models usually just output a point prediction.

That's a problem.

Because the prediction could be spot on, or it could be a wild guess.

To distinguish these two scenarios, we need uncertainty quantification.

A solution: Conformal Prediction
1/n
Conformal prediction can turn the output of a regression model into prediction intervals.

Good news: these intervals come with a coverage guarantee for the true outcome.

More good news: Conformal prediction can be applied post-hoc to any model.
2/n
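A from-scratch sketch of split conformal prediction for regression, assuming numpy and scikit-learn (the data, model, and alpha = 0.1 are illustrative choices):

```python
# Sketch: split conformal prediction for regression, from scratch.
# alpha = 0.1 targets 90% coverage of the true outcome.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - model.predict(X_cal))
n = len(scores)
alpha = 0.1
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals: point prediction +/- qhat
preds = model.predict(X_test)
lower, upper = preds - qhat, preds + qhat
print(f"Empirical coverage: {np.mean((y_test >= lower) & (y_test <= upper)):.2%}")
```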
Nov 21, 2022 11 tweets 2 min read
I found it unexpectedly difficult to get into causal inference.

(still a beginner, I guess)

Here are a few insights that helped me understand causal inference. 🧵

#1 Start learning about directed acyclic graphs (DAGs) and the implications of blocking the "flow" between variables. High return on investment when getting started with causal inference.
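A small numpy simulation of that blocking idea, using the simplest possible fork X <- Z -> Y (the variables and effect sizes are illustrative):

```python
# Sketch: "blocking the flow" in a fork DAG, X <- Z -> Y.
# X and Y are associated only through the confounder Z; adjusting for Z
# (here: removing its linear effect) blocks that path.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Z = rng.normal(size=n)      # common cause
X = Z + rng.normal(size=n)  # Z -> X
Y = Z + rng.normal(size=n)  # Z -> Y (note: no arrow X -> Y)

print(np.corrcoef(X, Y)[0, 1])  # ~0.5: spurious association via Z

# Block the path by conditioning on Z
X_res = X - Z
Y_res = Y - Z
print(np.corrcoef(X_res, Y_res)[0, 1])  # ~0: the flow is blocked
```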
Nov 4, 2022 10 tweets 3 min read
Math, code, theory - all so serious.

Let's do something more fun and look at machine learning and statistical modeling through stories.

🧵 7 short stories with insights on modeling mindsets.

Machine learning is considered more algorithmic than classical statistical modeling.

But seeing machine learning through the lens of statistics (statistical learning) is immensely insightful, as this machine learner is finding out:

Oct 19, 2022 6 tweets 2 min read
The simplest way to evaluate model performance is to pick an off-the-shelf metric: MSE, F1, R2, ...

But it's worth pausing before reaching for an off-the-shelf metric and considering whether to create one yourself.

Here's why and when to consider a custom metric 👇

Modeling often focuses on feature selection and engineering, model selection, and so on.

But if the performance metric isn't right, you are literally optimizing your model for the wrong things.

The worst: Everything will look fine in training/evaluation.
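A sketch of a custom metric wired into scikit-learn via make_scorer; the 5x penalty for under-prediction is an illustrative business rule, not a recommendation:

```python
# Sketch (assumes scikit-learn): a custom metric where under-prediction
# costs 5x more than over-prediction.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score

def asymmetric_loss(y_true, y_pred):
    residual = y_true - y_pred
    # Positive residual = under-prediction: penalize it 5x
    return np.mean(np.where(residual > 0, 5 * residual, -residual))

# greater_is_better=False: cross_val_score reports negated losses
scorer = make_scorer(asymmetric_loss, greater_is_better=False)

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0)
print(cross_val_score(model, X, y, scoring=scorer, cv=5))
```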
Oct 4, 2022 14 tweets 3 min read
SHAP, LIME, PFI, ... you can interpret ML models with many different methods.

It's all fun and games until two methods disagree.

What if LIME says X1 has a positive contribution, but SHAP says negative?

A thread about the disagreement problem, and how to approach it:

First, some background. The disagreement problem has been named and studied in this paper:

arxiv.org/abs/2202.01602

They studied attribution methods for explaining predictions, but the disagreement problem applies to other methods as well.
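One pragmatic way to approach it is to quantify how much two attribution vectors actually disagree. A small sketch with hypothetical attribution values standing in for LIME and SHAP outputs:

```python
# Sketch: quantifying disagreement between two attribution vectors.
# The attribution values below are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

lime_attr = np.array([0.30, -0.10, 0.05, 0.20, -0.25])
shap_attr = np.array([-0.05, -0.12, 0.08, 0.25, -0.30])

# Sign agreement: do the methods agree on the direction per feature?
sign_agreement = np.mean(np.sign(lime_attr) == np.sign(shap_attr))

# Rank agreement: do they order features by importance the same way?
rank_corr, _ = spearmanr(np.abs(lime_attr), np.abs(shap_attr))

print(f"Sign agreement: {sign_agreement:.0%}, rank correlation: {rank_corr:.2f}")
```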
Sep 21, 2022 6 tweets 2 min read
Machine learning sucks at uncertainty quantification.

But there is a solution that almost sounds too good to be true:

conformal prediction

• works for any black box model
• requires few lines of code
• is fast
• comes with statistical guarantees

A thread 🧵

Conformal prediction is a method for uncertainty quantification of machine learning models.

The method takes a heuristic uncertainty score and turns it into a rigorous one.
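A from-scratch sketch for classification, using the simple "1 minus true-class probability" score (the dataset, model, and alpha are illustrative choices):

```python
# Sketch: turning class probabilities into conformal prediction sets.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

# Heuristic score: 1 - predicted probability of the true class
cal_probs = model.predict_proba(X_cal)
scores = 1 - cal_probs[np.arange(len(y_cal)), y_cal]

alpha = 0.1  # target 90% coverage
n = len(scores)
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set: every class whose score clears the threshold
test_probs = model.predict_proba(X_te)
pred_sets = test_probs >= 1 - qhat  # boolean matrix: (n_test, n_classes)
covered = pred_sets[np.arange(len(y_te)), y_te].mean()
print(f"Average set size: {pred_sets.sum(1).mean():.2f}, coverage: {covered:.2%}")
```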
Sep 20, 2022 9 tweets 3 min read
Interpretable machine learning is a mishmash of different methods.
I use mental models to understand how interpretation methods work.

My favorite mental model is like an x-ray view that reveals the core of any interpretation method: Functional decomposition.

A thread 🧵

A prediction model is a function f that maps from p features to 1 output.

Interpretation often means breaking f down into lower-dimensional parts. Partial dependence plots, for example, reduce f to 1 feature through marginalization.
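A minimal sketch of that marginalization, assuming scikit-learn (data and model are illustrative): average f over the data while pinning feature j to each grid value.

```python
# Sketch: partial dependence of feature j, computed directly as a
# marginalization over the data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence(model, X, j, grid):
    """PD_j(v) = mean over the data of f(x with x_j set to v)."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, j] = v  # pin feature j, keep all other features as observed
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
print(partial_dependence(model, X, j=0, grid=grid))
```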
Sep 19, 2022 8 tweets 2 min read
Supervised learning "only" gives you a prediction function.

But with the right tools, you'll get a lot more:

• Uncertainty quantification
• Causality
• Interpretability
• Analysis of variance
• ...

And the best news: the tools in this thread work for any black box model.

👇

Uncertainty quantification

Conformal prediction turns "weak" uncertainty scores into rigorous prediction intervals.

For example:

• class probabilities -> classification sets
• quantile regression -> conformalized quantile regression

arxiv.org/abs/2107.07511
Sep 15, 2022 14 tweets 3 min read
Bayesians versus Frequentists is an ancient debate.
But have you heard of likelihoodism?

🧵 A thread on likelihoodism, why no one uses it, and how it helps you understand the Bayesian versus Frequentist debate better.

Likelihoodists honor the likelihood function above all else.
• They reject prior probabilities. That's a big middle finger to the Bayesian approach.
• Evidence from the data must only come through the likelihood. That's why they reject frequentist inference.
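A tiny sketch of "likelihood as evidence": the Law of Likelihood applied to a coin-flip example (the data and the two hypotheses are illustrative), using only scipy.

```python
# Sketch: the Law of Likelihood. Which coin bias does the data favor,
# theta = 0.5 or theta = 0.7? Only the likelihood decides.
from scipy import stats

heads, n = 14, 20

L_half = stats.binom.pmf(heads, n, 0.5)   # likelihood under theta = 0.5
L_seven = stats.binom.pmf(heads, n, 0.7)  # likelihood under theta = 0.7

# Likelihood ratio > 1: the data favor theta = 0.7. No priors,
# no long-run error rates -- just the likelihood as evidence.
print(L_seven / L_half)
```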