Trust in Machine Learning Models
The value is not in software, the value is in data, and this is really important for every
single company, that they understand what data they’ve got.
-John Straw
Introduction
More and more companies are now aware of the power of data. Machine learning models are
increasing in popularity and are being used to solve a wide variety of business problems with
data. Having said that, there is usually a trade-off between the accuracy of a model and its
interpretability.
In general, if accuracy has to be improved, data scientists have to resort to complicated
algorithms such as bagging, boosting and Random Forests, which are “black-box” methods. It is no
wonder that many of the winning entries in Kaggle or Analytics Vidhya competitions tend to use
algorithms like XGBoost, where there is no requirement to explain the process of generating the
predictions to a business user. On the other hand, in a business setting, simpler and more
interpretable models like Linear Regression, Logistic Regression and Decision Trees are used even
if their predictions are less accurate.
This situation has got to change – the trade-off between accuracy & interpretability is not
acceptable. We need to find ways to use powerful black-box algorithms even in a business setting
and still be able to explain the logic behind the predictions intuitively to a business user. With
increased trust in predictions, organisations will deploy machine learning models more extensively
within the enterprise. The question is – “How do we build trust in machine learning models?”
Table of Contents
1. Motivation
2. The Problem
2.1 Steps for model building
2.2 LIME steps to make your models more interpretable
3. End Notes
1. Motivation
It is in this context that I find the paper titled “Why Should I Trust You?”: Explaining the
Predictions of Any Classifier [1] intriguing and interesting. In this paper, the authors describe a
framework called LIME (Local Interpretable Model-agnostic Explanations), an algorithm that can
explain the predictions of any classifier or regressor in a faithful way by approximating it locally
with an interpretable model. The paper contains many examples of problems where the predictions
from black-box algorithms (even ones as extreme as Deep Learning) can be presented in an
interpretable fashion. I am not going to explain the paper in this blog, but rather show how LIME
can be applied to our own classification problems.
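To give a flavour of what “approximating it locally” means, the paper frames the explanation of an instance x as the solution of an optimisation problem; the formula below restates the paper’s objective in plain text, purely for intuition:

explanation(x) = argmin over g in G of [ L(f, g, πx) + Ω(g) ]

Here f is the black-box model, G is a family of interpretable models (for example, sparse linear models), πx is a proximity measure that weights perturbed samples by their closeness to x, L measures how unfaithful g is to f in the neighbourhood defined by πx, and Ω(g) penalises the complexity of g. Everything that follows relies on the library implementation of this idea, so we will not need the formula again.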
2. The Problem
Sigma Cab’s Surge Pricing Type Classification
In February this year, Analytics Vidhya ran a machine learning competition in which the objective
was to predict the “Surge Pricing Type” for Sigma Cab – a taxi aggregation service. This was a
multiclass classification problem. In this blog, we will see how to make predictions for this dataset
and use LIME to make those predictions interpretable. The intent here is not to build the best
possible model; the focus is on interpretability.
The LIME-specific steps, outlined below, are implemented in the code that follows the model building:
LIME Step 2 – Create a lambda function for each classifier that returns the predicted
probabilities for the target variable (surge pricing type), given the set of features
LIME Step 3 – Create a concatenated list of all feature names, which will be used by the LIME
explainer in subsequent steps
LIME Step 4 – This is the ‘magical’ step that creates the explainer
import numpy as np
import pandas as pd
from sklearn.base import TransformerMixin

class DataFrameImputer(TransformerMixin):
    """Impute missing values.

    Columns of dtype object are imputed with the most frequent value
    in the column; columns of other types are imputed with the mean
    of the column.
    """
    def fit(self, X, y=None):
        # Remember, per column, the modal value (object columns) or the
        # column mean (numeric columns) so transform() can fill with it
        self.fill = pd.Series([X[c].value_counts().index[0]
                               if X[c].dtype == np.dtype('O') else X[c].mean()
                               for c in X],
                              index=X.columns)
        return self

    def transform(self, X, y=None):
        # Fill every missing value with the statistic learned in fit()
        return X.fillna(self.fill)
# Combine train and test so the imputation is applied consistently
train_df['source'] = 'train'
test_df['source'] = 'test'
data = pd.concat([train_df, test_df], ignore_index=True)
print(train_df.shape, test_df.shape, data.shape)
# Assumed imputers (the original post does not show how imputer_mean and
# imputer_median were created); sklearn's SimpleImputer is used here
from sklearn.impute import SimpleImputer
imputer_mean = SimpleImputer(strategy='mean')
imputer_median = SimpleImputer(strategy='median')
data["Life_Style_Index"] = imputer_mean.fit_transform(data[["Life_Style_Index"]]).ravel()
data["Var1"] = imputer_mean.fit_transform(data[["Var1"]]).ravel()
data["Customer_Since_Months"] = imputer_median.fit_transform(data[["Customer_Since_Months"]]).ravel()
# Impute the remaining columns with the DataFrameImputer defined above
X = pd.DataFrame(data)
data = DataFrameImputer().fit_transform(X)
# num_missing is assumed to count the missing values in each column
def num_missing(x):
    return x.isnull().sum()
print(data.apply(num_missing, axis=0))
# Extract features by dtype
float_columns = []
cat_columns = []
int_columns = []
for i in train_df.columns:
    if train_df[i].dtype == 'float':
        float_columns.append(i)
    elif train_df[i].dtype == 'int64':
        int_columns.append(i)
    elif train_df[i].dtype == 'object':
        cat_columns.append(i)
train_cat_features = train_df[cat_columns]
train_float_features = train_df[float_columns]
train_int_features = train_df[int_columns]
# Scale the float features; the scaler is assumed here to be a StandardScaler
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
for i in train_float_features.columns:
    X_temp = train_float_features[i].values.reshape(-1, 1)
    train_float_features[i] = scaler.fit_transform(X_temp)
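The competition code that encodes the categorical columns and assembles train_transformed_features, feature_names and feature_names_cat (all referenced below) is not shown in this post. Here is a minimal sketch of one way to build them, assuming label-encoded categoricals (the representation LIME’s tabular explainer expects) and the column groupings created above; the target column name Surge_Pricing_Type is also an assumption based on the competition data:

from sklearn.preprocessing import LabelEncoder

# Label-encode each categorical column and record its category names, keyed by
# the column's position in the concatenated feature frame (categoricals first)
train_cat_encoded = train_cat_features.copy()
feature_names_cat = {}
for idx, col in enumerate(cat_columns):
    le = LabelEncoder()
    train_cat_encoded[col] = le.fit_transform(train_cat_features[col].astype(str))
    feature_names_cat[idx] = list(le.classes_)

# Concatenate categorical, float and integer features into one frame
# (in the full pipeline the ID, 'source' and target columns would be dropped first)
train_transformed_features = pd.concat(
    [train_cat_encoded, train_float_features, train_int_features], axis=1)

# LIME Step 3 – concatenated list of all feature names
feature_names = cat_columns + float_columns + int_columns

# Target vector; the column name is assumed from the competition dataset
train_target = train_df['Surge_Pricing_Type'].values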
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Convert the transformed features to a numpy array for model fitting
array = train_transformed_features.values
number_of_features = len(array[0])
X = array[:, 0:number_of_features]
Y = train_target
scoring = 'accuracy'  # metric used when cross-validating the models (not shown here)
# Fit the three classifiers whose predictions LIME will explain
model_logreg = LogisticRegression()
model_logreg.fit(X, Y)
model_rf = RandomForestClassifier()
model_rf.fit(X, Y)
model_xgb = XGBClassifier()
model_xgb.fit(X, Y)
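LIME Step 2 needs a function that takes a feature matrix and returns class probabilities. The original code for this step is not shown in the post; below is a minimal sketch with one lambda per fitted classifier (the predict_fn_* names are chosen here purely for illustration):

# LIME Step 2 – prediction functions returning class probabilities
predict_fn_logreg = lambda x: model_logreg.predict_proba(x).astype(float)
predict_fn_rf = lambda x: model_rf.predict_proba(x).astype(float)
predict_fn_xgb = lambda x: model_xgb.predict_proba(x).astype(float)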
# LIME SECTION
from __future__ import print_function  # must come before the other imports
import sklearn
import sklearn.datasets
import sklearn.ensemble
import numpy as np
import lime
import lime.lime_tabular
# LIME Step 4 – create the explainer (reconstructed: the original call was truncated).
# LimeTabularExplainer wants the training data as a numpy array and the *indices*
# of the categorical columns, so the column names are mapped to their positions.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=np.unique(Y).astype(str),
    categorical_features=[feature_names.index(c) for c in cat_columns],
    categorical_names=feature_names_cat,
    kernel_width=3)
# Pick the observation in the validation set for which explanation is required
observation_1 = 2
# Pick the observation in the validation set for which explanation is required
observation_2 = 45
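The calls that actually generate and render the explanations are not shown in this post. A sketch of how they would typically look, assuming the preprocessed validation rows sit in a numpy array named validation_features (a hypothetical name used only for illustration):

# Explain the XGBoost prediction for each chosen observation and render it inline
exp = explainer.explain_instance(validation_features[observation_1],
                                 predict_fn_xgb,
                                 num_features=6,
                                 top_labels=1)
exp.show_in_notebook()

exp = explainer.explain_instance(validation_features[observation_2],
                                 predict_fn_xgb,
                                 num_features=6,
                                 top_labels=1)
exp.show_in_notebook()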
3. End Notes
I hope you are as excited as I am after looking at these results. The output of LIME provides an
intuition into the inner workings of machine learning algorithms by showing which features are
used to arrive at a prediction. If LIME or similar algorithms can provide interpretable output for
any type of black-box algorithm, it will go a long way towards getting buy-in from business users
to trust the output of machine learning models. By building such trust, powerful methods can be
deployed in a business context, achieving the twin benefits of higher accuracy and interpretability.
Please do check out the LIME paper [1] for the math behind this fascinating development.