--- title: "Model evaluation audit" author: "Alicja Gosiewska" date: "`r Sys.Date()`" output: html_document vignette: > %\VignetteEngine{knitr::knitr} %\VignetteIndexEntry{Model evaluation audit} %\usepackage[UTF-8]{inputenc} --- ```{r setup, echo = FALSE} knitr::opts_chunk$set(warning = FALSE) knitr::opts_chunk$set(message = FALSE) ``` In this vignette we present plots for evaluation of classification models. ## Data We work on `titanic_imputed` dataset form the `DALEX` package. ```{r} library(DALEX) head(titanic_imputed) ``` ## Models We fit 2 models: glm and randomForest. ```{r} model_glm <- glm(survived ~ ., data = titanic_imputed, family = "binomial") library(randomForest) model_rf <- randomForest(survived ~ ., data = titanic_imputed) ``` ## Preparation for evaluation analysis The first step is creating an object that can be used to audit a model. It wraps up a model with meta-data. An alternative way is to use `explain` function from the package `DALEX`. ```{r results = 'hide'} library(auditor) exp_glm <- audit(model_glm, data = titanic_imputed, y = titanic_imputed$survived) exp_rf <- audit(model_rf, data = titanic_imputed, y = titanic_imputed$survived) ``` Second step is creating `auditor_model_evaluation` object that can be further used for validating a model. ```{r} eva_glm <- model_evaluation(exp_glm) eva_rf <- model_evaluation(exp_rf) ``` ## Plots ### Receiver operating characteristic (ROC) Receiver operating characteristic (ROC) curve is a tool for visualizing a classifier’s performance. It answers the question of how well the model discriminates between the two classes. The boundary between classes is determined by a threshold value. ROC illustrates the performance of a classification model at various threshold settings. The diagonal line `y = x` corresponds to a classifier that randomly guess the positive class half the time. Any model that appears in the lower right part of plot performs worse than random guessing. The closer the curve is to the the left border and top border of plot, the more accurate the classifier is. ```{r} plot(eva_glm, eva_rf, type = "roc") # or # plot_roc(eva_glm, eva_rf) ``` ### LIFT chart The LIFT chart is a rate of positive prediction (*RPP*) plotted against true positive (*TP*) on a threshold *t*. The chart illustrates varying performance of the model for different thresholds. A random and ideal models are represented by dashed curves (lower and upper respectively). The closer the LIFT curve gets to the upper dashed curve (ideal model), the better a model is. ```{r} plot(eva_glm, eva_rf, type = "lift") # or # plot_lift(eva_glm, eva_rf) ``` ## Other methods Other methods and plots are described in vignettes: * [Model residuals audit](https://modeloriented.github.io/auditor/articles/model_residuals_audit.html) * [Model fit audit](https://modeloriented.github.io/auditor/articles/model_fit_audit.html) * [Model performance audit](https://modeloriented.github.io/auditor/articles/model_performance_audit.html) * [Observation influence audit](https://modeloriented.github.io/auditor/articles/observation_influence_audit.html)