Trusting Machine Learning Models with LIME

Data Skeptic

Episode | Podcast

Date: Fri, 19 Aug 2016 15:00:00 +0000

<p>Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which tend to cover only simple problems.</p>

<p>The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a separate interpretable model trained on neighbors of that example is likely to reveal which features in the local input space drive the model's conclusion.</p>

<p>In this episode, <a href="http://homes.cs.washington.edu/~marcotcr/">Marco Tulio Ribeiro</a> joins us to discuss how <a href="https://github.com/marcotcr/lime">LIME (Local Interpretable Model-agnostic Explanations)</a> can help users trust machine learning models. The accompanying paper is titled <a href="http://arxiv.org/abs/1602.04938">"Why Should I Trust You?": Explaining the Predictions of Any Classifier</a>.</p>
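<p>For listeners who want to see the idea in practice, below is a minimal sketch of explaining one prediction of a black-box classifier with the LIME package linked above. The scikit-learn model, the iris dataset, and the specific <code>LimeTabularExplainer</code> parameters are illustrative assumptions for this sketch, not details taken from the episode.</p>

<pre><code>
# Minimal sketch: explain one prediction of a black-box classifier with LIME.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Train a "black box" model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the chosen instance, queries the black-box model on those
# neighbors, and fits a simple weighted model that is faithful locally.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)

# (feature condition, weight) pairs from the local interpretable model.
print(explanation.as_list())
</code></pre>

<p>The printed weights describe only the neighborhood of the chosen example; a different instance can yield a very different explanation, which is exactly the local-fidelity trade-off discussed in the episode.</p>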