Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
Rudin et al., arXiv 2019
With thanks to Glyn Normington for pointing out this paper to me.
It’s pretty clear from the title alone what Cynthia Rudin would like us to do! The paper is a mix of technical and philosophical arguments and comes with two main takeaways for me: firstly, a sharpening of my understanding of the difference between explainability and interpretability, and why the former may be problematic; and secondly some great pointers to techniques for creating truly interpretable models.
There has been an increasing trend in healthcare and criminal justice to leverage machine learning (ML) for high-stakes prediction applications that deeply impact human lives… The lack of transparency and accountability of predictive models can have (and has already had) severe consequences…
Defining terms
A model can be a black box for one of two reasons: (a) the function that the model computes is far too complicated for any human to comprehend, or (b) the model may in actual fact be simple, but its details are proprietary and not available for inspection.
In explainable ML we make predictions using a complicated black box model, and then build a second (post hoc) model to explain what the black box is doing.
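To make the distinction concrete, here is a minimal sketch of the post hoc explanation pattern: a black box we can only query, and a simple surrogate fitted to its predictions after the fact. Everything here is illustrative, not from the paper — `black_box` stands in for some opaque model, and the surrogate is an ordinary least-squares line fitted over sampled inputs.

```python
# Hypothetical black box: imagine a complex or proprietary model whose
# internals we cannot inspect -- we can only query its predictions.
def black_box(x):
    # A non-linear kink the surrogate will smooth over.
    return 3.0 * x + 1.0 + (0.5 if x > 2 else 0.0)

def fit_surrogate(model, xs):
    """Post hoc explanation: fit a simple linear surrogate (via ordinary
    least squares) to the black box's predictions on sampled inputs.
    The result approximates the black box's behaviour -- it is an
    explanation *of* the model, not the model itself."""
    ys = [model(x) for x in xs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Probe the black box on a grid of inputs and fit the surrogate.
xs = [i / 10 for i in range(50)]
slope, intercept = fit_surrogate(black_box, xs)
```

The surrogate recovers an approximately linear story (slope near 3, intercept near 1), but it silently glosses over the kink at x > 2 — precisely the kind of unfaithfulness to the underlying model that Rudin argues makes post hoc explanations problematic for high-stakes decisions.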