shapper is an R package which ports the shap Python library to R. For details and examples see the shapper repository on GitHub and the shapper website.
SHAP (SHapley Additive exPlanations) is a method to explain predictions of any machine learning model. For more details about this method see the shap repository on GitHub.
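In brief, SHAP distributes a prediction among the features as Shapley values: in the notation of the original SHAP paper, the explanation model g is additive over M simplified binary inputs,

g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i,

where \phi_0 is the average model prediction and \phi_i is the contribution attributed to feature i.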
Example usage is presented below on the HR dataset from the R package DALEX. For more details see the DALEX repository on GitHub.
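The snippets below assume two fitted models, model_rf and model_tree, and a data frame x_train with the HR predictors. Since those objects are not created in this post, here is a minimal setup sketch (randomForest and rpart are assumed model choices, not requirements of shapper):

library("DALEX")        # provides the HR dataset
library("randomForest") # assumed choice for model_rf
library("rpart")        # assumed choice for model_tree

y_train <- HR$status    # target: fired / ok / promoted
x_train <- HR[, -6]     # all predictors, target column dropped

set.seed(123)
model_rf   <- randomForest(x = x_train, y = y_train, ntree = 50)
model_tree <- rpart(status ~ ., data = HR)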
The first step is to create an explainer for each model. An explainer is an object that wraps up a model and its meta-data. Alternatively, as in the code below, individual_variable_effect() accepts the model itself together with a predict function; a sketch of the explainer route follows the code.
library(shapper)

# The predict function has to return a probability for each class.
p_function <- function(model, data) predict(model, newdata = data, type = "prob")

ive_rf <- individual_variable_effect(model_rf, data = x_train,
                                     predict_function = p_function,
                                     new_observation = x_train[1:2, ],
                                     nsamples = 50)

ive_tree <- individual_variable_effect(model_tree, data = x_train,
                                       predict_function = p_function,
                                       new_observation = x_train[1:2, ],
                                       nsamples = 50)
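The explainer route mentioned above would look like this (a sketch, equivalent to the calls above): wrap a model with DALEX::explain(), where label is an optional name used on plots, and pass the explainer to individual_variable_effect(), which then needs no data or predict_function arguments:

exp_rf <- DALEX::explain(model_rf, data = x_train,
                         predict_function = p_function, label = "rf")
ive_rf <- individual_variable_effect(exp_rf,
                                     new_observation = x_train[1:2, ],
                                     nsamples = 50)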
The computed attributions can be plotted with the generic plot() function. To see only the attributions, use the option show_predicted = FALSE.
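For example (plot() here is shapper's method for individual_variable_effect objects):

plot(ive_rf)                          # predicted values and attributions
plot(ive_rf, show_predicted = FALSE)  # attributions only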
We can show many models on one grid.
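Assuming plot() accepts several individual_variable_effect objects through its ... argument, as the sentence above suggests, the two models can be compared side by side:

plot(ive_rf, ive_tree)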