Interpretability model: this classification is based on two distinctions, namely intrinsic versus post-hoc methods and model-specific versus model-agnostic methods. Intrinsic interpretability applies to intrinsically interpretable models, that is, models whose structure is simple enough to be interpreted directly, while post-hoc methods analyze a model after it has been trained. A good example of a post-hoc method is permutation feature importance, sketched below.
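As a minimal illustration (not taken from the cited source; the synthetic dataset, the RandomForestClassifier choice, and all names are our own assumptions), the Python sketch below computes permutation feature importance post hoc by shuffling one feature at a time and measuring the resulting drop in accuracy of an already-trained model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data and a fitted model, chosen only for illustration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Post-hoc importance: mean drop in accuracy when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and y
            scores.append(model.score(X_perm, y))
        importances[j] = baseline - np.mean(scores)
    return importances

print(permutation_importance(model, X, y).round(3))
```

Because the procedure only needs predictions and a score, it can be run on the trained model without access to its internal structure, which is what makes it post hoc.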
Model-specific versus model-agnostic methods form the second sub-domain under the interpretability model classification. Model-specific methods are limited to a particular class of machine learning model, while model-agnostic methods can be applied to any machine learning model after training, as the sketch after this paragraph illustrates.
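To make the model-agnostic property concrete, the sketch below (our own example; the two estimator choices are arbitrary assumptions) applies scikit-learn's permutation_importance utility unchanged to two structurally different models, since the method only queries predictions and never inspects model internals.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The same model-agnostic explainer runs on any fitted estimator.
for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    print(type(model).__name__, result.importances_mean.round(3))
```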
Local and global interpretability depend on the scope of the explanation of the trained model. Global interpretability draws on knowledge of the whole model, the algorithm used, and the data used in prediction, while local interpretability selects a particular instance at prediction time, so the model is interpreted only for that instance. Global explanations thus give a holistic view of the model, while local explanations examine single instances (Molnar, 2019). The sketch below contrasts the two scopes.
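As a rough contrast between the two scopes (a sketch under our own assumptions, not a method from the cited source), the code below takes a global view from a random forest's aggregate feature importances and a local view by perturbing the features of a single instance and observing how its predicted probability shifts.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global scope: importances aggregated over the whole training distribution.
print("global:", model.feature_importances_.round(3))

# Local scope: sensitivity of the prediction for one chosen instance.
x = X[0]
base = model.predict_proba(x.reshape(1, -1))[0, 1]
for j in range(len(x)):
    x_pert = x.copy()
    x_pert[j] += X[:, j].std()  # nudge feature j by one standard deviation
    p = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    print(f"local, feature {j}: change in p = {p - base:+.3f}")
```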