
LIME: How to Interpret Machine Learning Models with Python

Making machine learning models easy to explain

 

Introduction to Interpretable Machine Learning Models

The interpretability of a machine learning model is very important: you should be able to make sense of your model and understand why it behaves the way it does. How and why did it predict a particular outcome, and which features were essential to that prediction?

For example, if you have a fraud detection model, then you should be able to answer questions like why a certain transaction was flagged as fraudulent, or which features contributed most to that prediction. Model interpretability is also important for improving your model: if you can explain its behavior, you can refine it accordingly.

 

Introduction to LIME

Many libraries and tools exist to make machine learning models more interpretable. In this article, we will discuss one such Python library: LIME, which stands for Local Interpretable Model-Agnostic Explanations. As the name suggests, the lime library explains individual predictions of a model; it does not explain the behavior of the model as a whole.

Moreover, it is model-agnostic, meaning that it is not tied to a specific model or family of models and can therefore be used with practically any predictive system.

Let’s go ahead and see how to interpret predictions with LIME in Python. But before that, we need to prepare a dataset and train a model. So, without further ado, let’s get started.

Note: The notebook for this tutorial can be found on my GitHub here.

 

Dataset Preparation and Model Training

First, let’s import the necessary modules.
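A minimal set of imports sufficient for the steps that follow might look like this (the original notebook may differ slightly):

# Data handling
import numpy as np
import pandas as pd

# Model training and evaluation
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score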


For this article, we will use the Breast Cancer Prediction dataset. You can download it from here.

The dataset contains five features, all of them numeric. The target variable is binary: 1 indicates that cancer was detected in the patient, while 0 indicates no cancer.

Let’s load the dataset into a pandas DataFrame and display the first five rows.
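Assuming the downloaded CSV is saved as Breast_cancer_data.csv (the filename is an assumption; adjust it to match your download), loading and previewing it looks like this:

df = pd.read_csv("Breast_cancer_data.csv")  # path/filename assumed
df.head()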


Let’s now see if the dataset contains missing values.
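A quick way to check is to count the null values per column:

df.isnull().sum()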


As you can see, there are no null values.

The next step is to split the dataset into training and testing samples. We reserve 20% of the data for testing.
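A sketch of the split, assuming the target column is named diagnosis (adjust the column name if your copy of the dataset uses a different one):

X = df.drop("diagnosis", axis=1)  # the five numeric features
y = df["diagnosis"]               # 1 = cancer detected, 0 = no cancer

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42  # fixed seed only for reproducibility
)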


If you look at the different attribute values, you will notice that their ranges differ considerably. To address this, we standardize the data using the StandardScaler class imported from sklearn.preprocessing above. It centers and scales each feature separately.
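A typical way to apply it is to fit the scaler on the training data only and reuse the fitted parameters for the test data:

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit and transform the training set
X_test_scaled = scaler.transform(X_test)        # transform the test set with the same parameters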


We will use the SVM (Support Vector Machine) classifier to train the model. Let’s import it from the Scikit-Learn library and fit it on our dataset.
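A minimal training sketch is shown below. The hyperparameters are assumptions, but probability=True matters: it makes predict_proba available, which LIME will need later.

model = SVC(kernel="rbf", probability=True, random_state=42)
model.fit(X_train_scaled, y_train)

# Evaluate on the held-out test set
y_pred = model.predict(X_test_scaled)
print("Accuracy:", accuracy_score(y_test, y_pred))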


As you can see, the model has an accuracy of 94.7%.

Let’s get to the interesting step now, i.e., interpreting the prediction of our model.

 

Model Interpretation

Now that we have trained our model, we are ready to explain its outcome with LIME.

First, let’s install LIME using the pip install lime command.

Now, we import the lime_tabular module from lime. Next, we create a tabular explainer object using lime_tabular.LimeTabularExplainer(). It takes the following arguments (see the example after the list):

 

  • training_data: The training samples in a NumPy array format.
  • mode: It takes two values, i.e., classification or regression.
  • feature_names: It represents a list of strings containing attribute names. It must be in the same order as the training data.
  • class_names: It takes a list of class names. For this dataset, we have two classes, i.e., 0 (no cancer detected) and 1 (cancer detected). The list must be arranged in the same order that the model uses for its classes.
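Putting these arguments together, the explainer might be created as follows (the class-name strings are only descriptive labels chosen here for illustration):

from lime import lime_tabular

explainer = lime_tabular.LimeTabularExplainer(
    training_data=np.array(X_train_scaled),
    mode="classification",
    feature_names=X.columns.tolist(),     # same order as the training data
    class_names=["no cancer", "cancer"],  # index 0 = no cancer, index 1 = cancer
)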


We can use the explain_instance() method of the explainer object to get an interpretation of a prediction. It takes the following arguments:

· data_row: A single sample of the data, as a 1-d NumPy array.

· predict_fn: The prediction function. For classifiers, it should return prediction probabilities (e.g., predict_proba); for regressors, the actual predictions.

Let’s visualize the results by using the show_in_notebook() function of the returned explanation object.

Consider the code below in which we pass the first test sample to the explain_instance() method.
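A sketch of that call, using the first test sample (index 0); predict_proba is available because the SVC was trained with probability=True:

exp = explainer.explain_instance(
    data_row=X_test_scaled[0],       # first sample from the (scaled) test set
    predict_fn=model.predict_proba,  # returns class probabilities
)
exp.show_in_notebook(show_table=True)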


As you can see in the first column of the explanation, the model is 96% sure that the current sample has breast cancer. The features that contribute to this decision are mean_radius, mean_perimeter, mean_texture, and mean_area, while mean_smoothness decreases the chance of the sample being predicted as cancerous. The second column shows the relative importance of the attributes, and the third part lists the features together with their actual values.

We can also see a somewhat similar explanation if we use the as_pyplot_figure() method on the exp object. It returns a matplotlib bar chart.
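For example:

import matplotlib.pyplot as plt

fig = exp.as_pyplot_figure()  # bar chart of feature contributions for the explained sample
plt.tight_layout()
plt.show()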


Here, you can see that since the sample’s value of mean_radius is between -0.66 and -0.29, it increases the likelihood of the test sample being classified as cancerous. Note that these ranges refer to the standardized feature values, not the raw measurements. You can interpret the other features in the same way.

Let’s try another test sample.
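For instance, explaining the second test sample (index 1 is an arbitrary choice for illustration):

exp = explainer.explain_instance(
    data_row=X_test_scaled[1],
    predict_fn=model.predict_proba,
)
exp.show_in_notebook(show_table=True)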


This time the model predicts with 0.97 probability that the test sample does not have cancer. All the features support this prediction, with mean_perimeter having the most relative importance.

 

Conclusion

In this article, we saw how to explain the predictions of a black-box model. We covered an introduction to interpreting machine learning models using the LIME library. There is a lot more you can do with it, for example, interpreting multiclass classification models, CNNs, and NLP models. Currently, LIME supports tabular data, text data, and images.

That’s it for this article. Specifically, we covered the following topics:

· Introduction to Interpretable ML Models

· Introduction to LIME

· Dataset Preparation and Training a Model

· Model Interpretation

August 9, 2021