
Explainable AI: The Need for Model Interpretability

Model complexity leads to improved accuracy but reduces model interpretability.

Over the past decade, the focus of data science and machine learning (ML) has shifted radically from theory to real-world applications, as more powerful machines, better learning algorithms, and vast amounts of data have become available. The improved performance of ML algorithms is largely a direct result of increased model complexity. A prime example is the deep learning paradigm, in which complex hierarchical models are built by iteratively stacking layers. This highlights a trade-off between the performance of a machine learning model and its ability to produce explainable and interpretable predictions. On the one hand, we have the so-called black-box models, such as deep neural networks and ensembles, whose complex structure hinders interpretability. On the other hand, there is the class of so-called white-box (or glass-box) models, such as linear and decision-tree-based models, which readily produce explainable results.
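
To make the contrast concrete, here is a minimal sketch (using scikit-learn, which is an assumption on my part; the article does not name a library). It shows how a white-box model exposes its reasoning directly, via coefficients or readable decision rules, while an ensemble black-box model only offers aggregated importance scores rather than a single inspectable rule set.

```python
# Sketch: inspecting white-box vs. black-box models (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X, y, names = iris.data, iris.target, list(iris.feature_names)

# White-box: per-feature coefficients are themselves the explanation.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("Logistic regression coefficients for the first class:")
print(dict(zip(names, linear.coef_[0].round(2))))

# White-box: a shallow decision tree can be printed as human-readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=names))

# Black-box: a forest of hundreds of trees; only aggregate feature
# importances are readily available, not a single traceable rule set.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(dict(zip(names, forest.feature_importances_.round(2))))
```

The point of the sketch is not the specific dataset or models, but that the first two models answer "why this prediction?" directly from their structure, whereas the ensemble requires additional, post-hoc explanation techniques.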