model-agnostic > model-specific
Why?
tl;dr: model-agnostic methods give you more flexibility in choosing (and later swapping) the underlying ML model.
Model-agnostic: a post-hoc interpretability method that works only with the inputs and outputs of the machine learning model and doesn't need access to model parameters, like weights.
Examples: partial dependence plots, permutation feature importance, LIME
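A minimal sketch of the idea (the function name and toy data below are illustrative, not from any particular library): permutation feature importance treats the model as a black box and only ever calls predict().

```python
# Minimal sketch (illustrative names and toy data): permutation importance
# treats the model as a black box -- it only ever calls predict().
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)  # any model would do

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Increase in error after shuffling one feature -- needs only predict()."""
    rng = np.random.default_rng(seed)
    baseline = mean_squared_error(y, model.predict(X))
    errors = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, feature])  # destroy the feature/target relationship
        errors.append(mean_squared_error(y, model.predict(X_perm)))
    return np.mean(errors) - baseline

print([round(permutation_importance(model, X, y, j), 2) for j in range(X.shape[1])])
```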
Model-specific: a built-in or post-hoc interpretability method that requires knowledge about the machine learning model, like its structure or its weights.
Examples: the xgboost explainer, attention weights in neural networks, the coefficients of a linear regression model
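As a sketch of what "requires knowledge about the model" means in practice (assuming the xgboost package and its scikit-learn wrapper; the toy data is illustrative), the importances below come straight out of the fitted tree structure, so swapping in a different model type breaks this code.

```python
# Sketch of a model-specific readout (assumes xgboost is installed; toy data
# is illustrative): the scores are derived from the fitted trees' internals.
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = XGBRegressor(n_estimators=50).fit(X, y)

# Computed from the trees' split statistics, i.e. from the model's structure --
# a neural net or a kNN model has no such attribute.
print(model.feature_importances_)
```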
Computation of model-specific methods is often faster, because you can read the knowledge straight off the model structure: e.g. the partial dependence of a feature in a linear model carries the same information as its beta weight, but takes more compute to estimate.
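A quick worked check of that claim, sketched with scikit-learn's LinearRegression on illustrative toy data: the brute-force partial dependence of a feature is a straight line whose slope recovers exactly the beta weight, at the price of many predict() calls.

```python
# Sketch (toy data is illustrative): for a linear model, the slope of the
# partial dependence curve of a feature equals that feature's beta weight.
# Reading coef_ is free; estimating the PDP needs predict() over a whole grid.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = LinearRegression().fit(X, y)

# Brute-force partial dependence of feature 0: replace the column with each
# grid value and average the predictions over the whole dataset.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
pdp = []
for v in grid:
    X_mod = X.copy()
    X_mod[:, 0] = v
    pdp.append(model.predict(X_mod).mean())

pdp_slope = np.polyfit(grid, pdp, 1)[0]
print(pdp_slope, model.coef_[0])  # the two numbers agree (up to numerical noise)
```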
One day you discover that xgboost works better.
gbm -> xgboost -> LightGBM -> ...
AlexNet -> VGG -> Inception -> ResNets -> ...
Only model-agnostic interpretability methods are robust in these wild times.
At the same time, you enjoy the freedom to level up your ML algorithm under the hood.
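To make that concrete, here is a sketch (scikit-learn models on illustrative toy data; the explain() helper is hypothetical) where the interpretation code never changes while the model underneath does.

```python
# Sketch (illustrative): the model-agnostic interpretation code stays fixed
# while the underlying model is leveled up, because it only relies on predict().
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=5, random_state=0)

def explain(model):
    """Model-agnostic importance: works for any fitted regressor."""
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    return result.importances_mean

# Level up the algorithm under the hood; the interpretability code is untouched.
for Model in [LinearRegression, RandomForestRegressor, GradientBoostingRegressor]:
    model = Model().fit(X, y)
    print(Model.__name__, explain(model).round(2))
```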