We address the problem of insufficient interpretability of explanations for domain experts. We solve this issue by introducing the describe() function, which automatically generates natural language descriptions of explanations produced by the iBreakDown package.
The iBreakDown package allows for generating feature attribution explanations. Feature attribution explanations justify a model's prediction by showing which of the model's variables affect the prediction and to what extent. This is done by attaching to each variable an importance coefficient, whose sum should approximate the model's prediction.
The iBreakDown package provides two methods: shap() and break_down(). The shap() function generates a SHAP explanation, that is, it assigns a Shapley value to each variable. The break_down() function uses the break-down algorithm to generate an efficient approximation of the Shapley values. We show how to generate both explanations on a simple example using the titanic data set and explainers from the DALEX package.
First, we load the data set and build a random forest model classifying which of the passengers survived the sinking of the Titanic. Then, using the DALEX package, we create an explainer for the model. Lastly, we select a random passenger whose prediction should be explained.
library("DALEX")
library("iBreakDown")
library("randomForest")
titanic <- na.omit(titanic)
model_titanic_rf <- randomForest(survived == "yes" ~ .,
data = titanic
)
explain_titanic_rf <- explain(model_titanic_rf,
data = titanic[,-9],
y = titanic$survived == "yes",
label = "Random Forest")
#> Preparation of a new explainer is initiated
#> -> model label : Random Forest
#> -> data : 2099 rows 8 cols
#> -> target variable : 2099 values
#> -> predict function : yhat.randomForest will be used ( default )
#> -> predicted values : No value for predict function target column. ( default )
#> -> model_info : package randomForest , ver. 4.7.1.2 , task regression ( default )
#> -> model_info : Model info detected regression task but 'y' is a logical . ( WARNING )
#> -> model_info : By deafult regressions tasks supports only numercical 'y' parameter.
#> -> model_info : Consider changing to numerical vector.
#> -> model_info : Otherwise I will not be able to calculate residuals or loss function.
#> -> predicted values : numerical, min = 0.005629431 , mean = 0.3237434 , max = 0.9935881
#> -> residual function : difference between y and yhat ( default )
#> -> residuals : numerical, min = -0.805686 , mean = 0.0006968475 , max = 0.9076636
#> A new explainer has been created!
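The random passenger can be drawn as in the following sketch (the variable name passanger is an assumption, chosen to match the label string used later in this vignette; column 9 holds the target variable survived, so it is dropped):

```r
# draw one random passenger and drop column 9 (the target variable, survived);
# the sampled row will differ between runs
passanger <- titanic[sample(nrow(titanic), 1), -9]
passanger
```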
#> gender age class embarked country fare sibsp parch
#> 607 male 29 3rd Southampton Norway 8.0203 0 0
Now we are ready to generate the shap() and break_down() explanations.
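Both explanations can be computed and plotted as follows (a sketch assuming the selected passenger row is stored in passanger):

```r
# compute a break-down explanation and a SHAP explanation
# for the selected passenger
bd_rf <- break_down(explain_titanic_rf, passanger)
shap_rf <- shap(explain_titanic_rf, passanger)

# visualize both feature attribution explanations
plot(bd_rf)
plot(shap_rf)
```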
The displayed explanations, despite their visual clarity, may not be interpretable for someone unfamiliar with iBreakDown or SHAP explanations. Therefore, we generate a simple natural language description for both explanations.
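For the break_down explanation it suffices to call describe() on the explanation object (assuming it is stored as bd_rf):

```r
# natural language description of the break-down explanation
describe(bd_rf)
```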
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable that decrease the prediction is gender.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.047.
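The SHAP explanation is described the same way (assuming it is stored as shap_rf):

```r
# natural language description of the SHAP explanation
describe(shap_rf)
```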
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable that increase the prediction is country.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.042.
Natural language descriptions should be flexible enough to generate a description with the desired level of specificity and length. We now walk through the parameters used for describing both explanations. As both explanations take the same parameters, we focus on describing the break_down explanation.
The nonsignificance threshold controls which predictions are treated as close to the average prediction. With a higher value, more predictions will be described as close to the average model prediction, and more variables will be described as nonsignificant.
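A call producing the description below might look like this (the argument name keeps the package's own spelling, nonsignificance_treshold; the value 1 is an assumption for illustration):

```r
# treat a wider band around the average prediction as nonsignificant
describe(bd_rf, nonsignificance_treshold = 1)
```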
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable that decrease the prediction is gender.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.047.
The label of the prediction can be changed to produce a more specific description.
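A sketch of the corresponding call (the label string below mirrors the output shown, including its spelling):

```r
# replace the generic phrase "the prediction for the selected instance"
describe(bd_rf, label = "the passanger survived with probability")
```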
#> Random Forest predicts, that the passanger survived with probability 0.35 which is close to the average model prediction.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable that decrease the prediction is gender.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.047.
Generating short descriptions can be useful, as they make nice plot subtitles.
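A short description can be requested with a single flag (a sketch, assuming the bd_rf explanation from above):

```r
# one-sentence description, e.g. for a plot subtitle
describe(bd_rf, short_description = TRUE)
```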
#> Random Forest predicts, that the prediction for the selected instance is 0.35.
Displaying variable values can easily make the description more informative.
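The variable values can be switched on as follows (a sketch, assuming bd_rf from above):

```r
# append each variable's value, e.g. "gender (= male)"
describe(bd_rf, display_values = TRUE)
```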
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable that decrease the prediction is gender (= male).
#>
#> Other variables are with less importance. The contribution of all other variables is -0.047.
Displaying numbers changes the whole argumentation style, making the description longer.
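The numeric contributions can be requested like this (a sketch, assuming bd_rf from above):

```r
# report the numeric contribution of the most important variables
describe(bd_rf, display_numbers = TRUE)
```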
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction 0.324.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable is gender. It decreases the prediction by 0.1.
#> The second most important variable is age. It increases the prediction by 0.096.
#> The third most important variable is country. It increases the prediction by 0.077.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.047.
Describing distribution details is useful if we want a big picture of how the model behaves on other instances.
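The distribution details can be added with another flag (a sketch, assuming bd_rf from above):

```r
# add range, skewness, mean, median, and standard deviation
# of the model's predictions to the description
describe(bd_rf, display_distribution_details = TRUE)
```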
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction.Model predictions range from 0.006 to 0.994. The distribution of Random Forest's predictions is right-skewed with average equal to 0.324 and median equal to 0.215. The standard deviation is 0.281. Model's prediction for the selected instance is in the third quartile.
#>
#> The most important variable that decrease the prediction is gender.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.047.
Explanations generated by the shap() function accept the same arguments, with one addition: display_shap, which adds information on whether the calculated variable contributions have high or low variability.
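A sketch of the corresponding call (assuming the SHAP explanation is stored as shap_rf):

```r
# comment on the variability of the estimated SHAP contributions
describe(shap_rf, display_shap = TRUE)
```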
#> Random Forest predicts, that the prediction for the selected instance is 0.35 which is close to the average model prediction.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable that increase the prediction is country.
#> The average contribution of all the above variables is significant.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.042.
Of course, all the arguments can be combined according to preference, allowing for flexible natural language descriptions.
describe(shap_rf,
label = "the passanger survived with probability",
display_values = TRUE,
display_numbers = TRUE,
display_shap = TRUE)
#> Random Forest predicts, that the passanger survived with probability 0.35 which is close to the average model prediction 0.324.For the selected instance model's prediction is higher, than for 71% of all observations.
#>
#> The most important variable is country (= Norway). It increases the prediction by 0.09.
#> The second most important variable is gender (= male). It decreases the prediction by 0.065.
#> The third most important variable is age (= 29). It increases the prediction by 0.044.
#> The average contribution of all the above variables is significant.
#>
#> Other variables are with less importance. The contribution of all other variables is -0.042.