INTRODUCTION
Interpretable machine learning methods can be used to discover new
knowledge, to debug and justify models and their predictions, and to
control and improve those models (Molnar, Casalicchio, & Bischl, 2020).
While much machine learning research has focused on the predictive
performance of models, one aspect that has lagged behind, though not
been entirely neglected, is the interpretability of the models used to
make those predictions (Molnar, Casalicchio, &
Bischl, 2020). The diagram below shows the interpretability versus
accuracy tradeoff for several machine learning models and highlights the
need to improve the interpretability aspects of machine learning. From
the chart, most of the black-box models capture nonlinear relationships
in their structure, whereas the interpretable ones are linear in design
and have well-defined relationships. For example, linear regression
models are highly interpretable, while deep learning models are not;
hence the need to bridge this gap.
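
As a minimal illustration of this contrast, the sketch below (in Python with scikit-learn; the data and feature names are hypothetical and not drawn from this work) fits a linear regression and reads its coefficients directly, a form of explanation that a deep network does not offer out of the box.

    # Minimal sketch (hypothetical data): a linear model's learned weights
    # can be read off directly, which is what makes it interpretable by design.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))  # three hypothetical features
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 + rng.normal(scale=0.1, size=200)

    model = LinearRegression().fit(X, y)

    # Each coefficient is the expected change in y for a one-unit change in
    # that feature, holding the others fixed -- a direct, human-readable
    # explanation of the model's behavior.
    for name, coef in zip(["feature_0", "feature_1", "feature_2"], model.coef_):
        print(f"{name}: {coef:+.3f}")
    print(f"intercept: {model.intercept_:+.3f}")

By contrast, explaining a deep model's prediction requires post-hoc interpretation methods, which motivates the gap discussed above.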