DALEX - moDel Agnostic Language for Exploration and eXplanation
Any unverified black box model is the path to failure. Opaqueness leads to distrust. Distrust leads to neglect. Neglect leads to rejection. The DALEX package x-rays any model and helps to explore and explain its behaviour. Machine learning (ML) models are widely used for classification and regression tasks. Models created with boosting, bagging, stacking or similar techniques are often preferred for their high performance, but such black-box models usually lack direct interpretability. The DALEX package contains various methods that help to understand the link between input variables and model output. The implemented methods allow the model to be explored at the level of a single instance as well as at the level of the whole dataset. All explainers are model-agnostic and can be compared across different models. The DALEX package is the cornerstone of the 'DrWhy.AI' universe of packages for visual model exploration. Find more details in Biecek (2018) <https://jmlr.org/papers/v19/18-416.html>.
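A minimal sketch of the typical workflow, using the apartments data shipped with DALEX (the ranger model is an illustrative choice):

library(DALEX)
library(ranger)

# fit any black-box model
model <- ranger(m2.price ~ ., data = apartments)

# wrap it in a model-agnostic explainer
exp_rf <- explain(model, data = apartments[, -1], y = apartments$m2.price)

# dataset-level and instance-level explanations
plot(model_parts(exp_rf))                                        # permutation variable importance
plot(predict_parts(exp_rf, new_observation = apartments[1, -1]))  # attribution for one observation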
Last updated 2 months ago
black-box, dalex, data-science, explainable-ai, explainable-artificial-intelligence, explainable-ml, explanations, explanatory-model-analysis, fairness, iml, interpretability, interpretable-machine-learning, machine-learning, model-visualization, predictive-modeling, responsible-ai, responsible-ml, xai
13.21 score 1.4k stars 19 packages 828 scripts 5.2k downloads

randomForestExplainer - Explaining and Visualizing Random Forests in Terms of Variable Importance
A set of tools to help explain which variables are most important in a random forest. Various variable importance measures are calculated and visualized in different settings in order to show how their importance changes depending on the chosen criteria (Hemant Ishwaran, Udaya B. Kogalur, Eiran Z. Gorodeski, Andy J. Minn and Michael S. Lauer (2010) <doi:10.1198/jasa.2009.tm08622>; Leo Breiman (2001) <doi:10.1023/A:1010933404324>).
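A brief sketch of the intended usage (the randomForest model on the built-in iris data is an illustrative choice; localImp = TRUE is needed for some of the measures):

library(randomForest)
library(randomForestExplainer)

forest <- randomForest(Species ~ ., data = iris, localImp = TRUE)

# distribution of minimal depth per variable
plot_min_depth_distribution(min_depth_distribution(forest))

# several importance measures compared in one plot
importance_frame <- measure_importance(forest)
plot_multi_way_importance(importance_frame)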
Last updated 8 months ago
random-forest
10.03 score 230 stars 254 scripts 1.3k downloads

iBreakDown - Model Agnostic Instance Level Variable Attributions
Model-agnostic tool for decomposition of predictions from black boxes. Supports additive attributions and attributions with interactions. The Break Down Table shows the contribution of every variable to a final prediction. The Break Down Plot presents variable contributions in a concise graphical way. This package works for classification and regression models. It is an extension of the 'breakDown' package (Staniak and Biecek 2018) <doi:10.32614/RJ-2018-072>, with new and faster strategies for orderings. It supports interactions in explanations and has interactive visuals (implemented with the 'D3.js' library). The methodology behind it is described in the 'iBreakDown' article (Gosiewska and Biecek 2019) <arXiv:1903.11420>. This package is a part of the 'DrWhy.AI' universe (Biecek 2018) <arXiv:1806.08915>.
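A short sketch, reusing a 'DALEX' explainer built on the titanic_imputed data that ships with 'DALEX' (the glm model is an illustrative choice):

library(DALEX)
library(iBreakDown)

model <- glm(survived ~ ., data = titanic_imputed, family = "binomial")
exp_glm <- explain(model, data = titanic_imputed[, -8], y = titanic_imputed$survived)

bd <- break_down(exp_glm, new_observation = titanic_imputed[1, -8])  # additive attributions
plot(bd)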
Last updated 12 months ago
breakdown, iml, interpretability, shapley, xai
9.85 score 80 stars 21 packages 41 scripts 4.3k downloads

shapviz - SHAP Visualizations
Visualizations for SHAP (SHapley Additive exPlanations), such as waterfall plots, force plots, various types of importance plots, dependence plots, and interaction plots. These plots act on a 'shapviz' object created from a matrix of SHAP values and a corresponding feature dataset. Wrappers for the R packages 'xgboost', 'lightgbm', 'fastshap', 'shapr', 'h2o', 'treeshap', 'DALEX', and 'kernelshap' are added for convenience. By separating visualization and computation, it is possible to display factor variables in graphs, even if the SHAP values are calculated by a model that requires numerical features. The plots are inspired by those provided by the 'shap' package in Python, but there is no dependency on it.
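A compact sketch with 'xgboost'; note how the factor column Species is numerically encoded for the model but kept as a factor for plotting (model and data choices are illustrative):

library(xgboost)
library(shapviz)

X <- data.matrix(iris[, -1])   # numeric encoding for the model
fit <- xgb.train(params = list(objective = "reg:squarederror"),
                 data = xgb.DMatrix(X, label = iris$Sepal.Length), nrounds = 50)

shp <- shapviz(fit, X_pred = X, X = iris[, -1])  # SHAP values + original feature data
sv_waterfall(shp, row_id = 1)   # explain a single prediction
sv_importance(shp)              # global importance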
Last updated 7 days ago
explainable-ai, machine-learning, shap, shapley-value, visualization, xai
9.84 score 78 stars 205 scripts 3.4k downloads

survex - Explainable Machine Learning in Survival Analysis
Survival analysis models are commonly used in medicine and other areas. Many of them are too complex to be interpreted by humans. Exploration and explanation are needed, but standard methods do not give a broad enough picture. 'survex' provides easy-to-apply methods for explaining survival models, both complex black boxes and simpler statistical models. They include methods specific to survival analysis, such as SurvSHAP(t) introduced in Krzyzinski et al. (2023) <doi:10.1016/j.knosys.2022.110234> and SurvLIME described in Kovalev et al. (2020) <doi:10.1016/j.knosys.2020.106164>, as well as extensions of existing methods described in Biecek et al. (2021) <doi:10.1201/9780429027192>.
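A minimal sketch with a Cox model from the 'survival' package (the veteran data is an illustrative choice; x = TRUE and model = TRUE let the explainer recover the training data):

library(survival)
library(survex)

cph <- coxph(Surv(time, status) ~ ., data = veteran, x = TRUE, model = TRUE)
exp_cph <- explain(cph)

plot(model_parts(exp_cph))   # time-dependent variable importance
plot(predict_parts(exp_cph, veteran[1, -c(3, 4)], type = "survshap"))  # SurvSHAP(t)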
Last updated 5 months ago
biostatistics, brier-scores, censored-data, cox-model, cox-regression, explainable-ai, explainable-machine-learning, explainable-ml, explanatory-model-analysis, interpretable-machine-learning, interpretable-ml, machine-learning, probabilistic-machine-learning, shap, survival-analysis, time-to-event, variable-importance, xai
8.91 score 103 stars 113 scripts 543 downloads

auditor - Model Audit - Verification, Validation, and Error Analysis
Provides an easy-to-use, unified interface for creating validation plots for any model. The 'auditor' helps to avoid the repetitive work of writing the code needed to create residual plots. These visualizations allow one to assess and compare the goodness of fit, performance, and similarity of models.
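A short sketch, reusing a 'DALEX' explainer (the linear model on the apartments data is an illustrative choice):

library(DALEX)
library(auditor)

model <- lm(m2.price ~ ., data = apartments)
exp_lm <- explain(model, data = apartments[, -1], y = apartments$m2.price)

mr <- model_residual(exp_lm)     # audit object with residuals
plot(mr, type = "residual")      # residual diagnostics
plot(mr, type = "prediction")    # another diagnostic view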
Last updated 1 year ago
classification, error-analysis, explainable-artificial-intelligence, machine-learning, model-validation, regression-models, residuals, xai
8.77 score 58 stars 2 packages 93 scripts 1.0k downloads

kernelshap - Kernel SHAP
Efficient implementation of Kernel SHAP; see Lundberg and Lee (2017) and Covert and Lee (2021) <http://proceedings.mlr.press/v130/covert21a>. Furthermore, for up to 14 features, exact permutation SHAP values can be calculated. The package plays well with meta-learning packages such as 'tidymodels', 'caret' or 'mlr3'. Visualizations can be done using the R package 'shapviz'.
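A minimal sketch (the linear model and the choice of background data are illustrative):

library(kernelshap)
library(shapviz)

fit <- lm(Sepal.Length ~ ., data = iris)
X <- iris[, -1]                                  # features to be explained
ks <- kernelshap(fit, X = X[1:100, ], bg_X = X)  # Kernel SHAP values

sv <- shapviz(ks)   # hand the result over to 'shapviz'
sv_importance(sv)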
Last updated 3 months ago
explainable-ai, interpretability, interpretable-machine-learning, machine-learning, shap, xai
8.27 score 39 stars 16 packages 110 scripts 1.1k downloads

modelStudio - Interactive Studio for Explanatory Model Analysis
Automate the explanatory analysis of machine learning predictive models. Generate advanced interactive model explanations in the form of a serverless HTML site with only one line of code. This tool is model-agnostic and therefore compatible with most black-box predictive models and frameworks. The main function computes various (instance- and model-level) explanations and produces a customisable dashboard consisting of multiple panels for plots with their short descriptions. The dashboard can easily be saved and shared with others. 'modelStudio' facilitates the process of Interactive Explanatory Model Analysis introduced in Baniecki et al. (2023) <doi:10.1007/s10618-023-00924-w>.
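The advertised one line of code, assuming a 'DALEX' explainer is available (model choice is illustrative):

library(DALEX)
library(modelStudio)

model <- ranger::ranger(m2.price ~ ., data = apartments)
exp_rf <- explain(model, data = apartments[, -1], y = apartments$m2.price)

modelStudio(exp_rf)   # computes the explanations and opens the interactive dashboard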
Last updated 1 year ago
ai, explainable, explainable-ai, explainable-machine-learning, explanatory-model-analysis, human, iml, interactive, interactivity, interpretability, interpretable, interpretable-machine-learning, learning, machine, model, model-visualization, visualization, xai
7.90 score 324 stars 54 scripts 456 downloads

DALEXtra - Extension for 'DALEX' Package
Provides wrappers for various machine learning models. In applied machine learning there is a strong belief that we need to strike a balance between interpretability and accuracy; at the same time, in the field of interpretable machine learning, more and more new ideas for explaining black-box models are being implemented in 'R'. 'DALEXtra' creates 'DALEX' (Biecek 2018 <arXiv:1806.08915>) explainers for many types of models, including those created using the 'python' 'scikit-learn' and 'keras' libraries and the 'java' 'h2o' library. An important part of the package is the Champion-Challenger analysis and an innovative approach to comparing model performance across subsets of test data, presented in the Funnel Plot.
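A sketch of one of the wrappers, explain_tidymodels(); other wrappers such as explain_scikitlearn() or explain_h2o() follow the same pattern (the workflow below is an illustrative choice):

library(DALEXtra)
library(tidymodels)

rec <- recipe(mpg ~ ., data = mtcars)
wf_fitted <- workflow(rec, linear_reg()) %>% fit(data = mtcars)

exp_tm <- explain_tidymodels(wf_fitted, data = mtcars[, -1], y = mtcars$mpg)
model_performance(exp_tm)   # any 'DALEX' explanation works from here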
Last updated 1 year ago
extension-for-dalex-package
7.77 score 66 stars 1 package 394 scripts 2.5k downloads

fairmodels - Flexible Tool for Bias Detection, Visualization, and Mitigation
Measure fairness metrics in one place for many models. Check how big a model's bias is towards different races, sexes, nationalities, etc. Use measures such as Statistical Parity or Equal Odds to detect discrimination against unprivileged groups. Visualize the bias using heatmaps, radar plots, biplots, bar charts (and more!). Various pre-processing and post-processing bias mitigation algorithms are implemented. The package also supports calculating fairness metrics for regression models. Find more details in Wiśniewski, Biecek (2021) <arXiv:2104.00507>.
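A minimal sketch on the german credit data shipped with 'fairmodels' (the glm model is an illustrative choice):

library(DALEX)
library(fairmodels)

data("german", package = "fairmodels")
german$Risk <- as.numeric(german$Risk == "good")

model <- glm(Risk ~ ., data = german, family = "binomial")
exp_glm <- explain(model, data = german[, -1], y = german$Risk)

fc <- fairness_check(exp_glm, protected = german$Sex, privileged = "male")
plot(fc)   # metrics outside the accepted region signal potential bias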
Last updated 2 years ago
explain-classifiers, explainable-ml, fairness, fairness-comparison, fairness-ml, model-evaluation
7.59 score 86 stars 1 package 50 scripts 384 downloads

rSAFE - Surrogate-Assisted Feature Extraction
Provides a model-agnostic tool for training a white-box model on features extracted from a black-box model. For more information see: Gosiewska et al. (2020) <doi:10.1016/j.dss.2021.113556>.
Last updated 2 years ago
feature-engineering, feature-extraction, iml, interpretability, machine-learning, xai
6.79 score 28 stars 44 scripts 187 downloads

treeshap - Compute SHAP Values for Your Tree-Based Models Using the 'TreeSHAP' Algorithm
An efficient implementation of the 'TreeSHAP' algorithm introduced by Lundberg et al. (2020) <doi:10.1038/s42256-019-0138-9>. It is capable of calculating SHAP (SHapley Additive exPlanations) values for tree-based models in polynomial time. Currently supported models include 'gbm', 'randomForest', 'ranger', 'xgboost' and 'lightgbm'.
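A brief sketch with 'ranger' (model and data choices are illustrative; the model is first converted into a unified tree representation):

library(ranger)
library(treeshap)

model <- ranger(Sepal.Length ~ ., data = iris[, 1:4])
unified <- ranger.unify(model, iris[, 2:4])   # unified tree representation
shaps <- treeshap(unified, iris[1:20, 2:4])   # SHAP values for 20 observations
plot_contribution(shaps, obs = 1)             # waterfall for one observation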
Last updated 10 months ago
explainability, explainable-ai, explainable-artificial-intelligence, explanatory-model-analysis, iml, interpretability, interpretable-machine-learning, machine-learning, responsible-ml, shap, shapley-value, xai
6.63 score 78 stars 155 scripts 444 downloads

localModel - LIME-Based Explanations with Interpretable Inputs Based on Ceteris Paribus Profiles
Local explanations of machine learning models describe how features contributed to a single prediction. This package implements an explanation method based on LIME (Local Interpretable Model-agnostic Explanations; see Tulio Ribeiro, Singh, Guestrin (2016) <doi:10.1145/2939672.2939778>) in which interpretable inputs are created based on the local rather than global behaviour of each original feature.
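A short sketch, assuming individual_surrogate_model() as the main fitting function (as in the package README; model and parameter choices are illustrative):

library(DALEX)
library(localModel)

model <- ranger::ranger(m2.price ~ ., data = apartments)
exp_rf <- explain(model, data = apartments[, -1], y = apartments$m2.price)

expl <- individual_surrogate_model(exp_rf, new_observation = apartments[1, -1],
                                   size = 500, seed = 17)
plot(expl)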
Last updated 3 years ago
6.13 score 14 stars 21 scripts 1.0k downloads

arenar - Arena for the Exploration and Comparison of any ML Models
Generates data for challenging machine learning models in 'Arena' <https://arena.drwhy.ai>, an interactive web application. You can start a server with XAI (Explainable Artificial Intelligence) plots generated on demand, or precalculate the data and auto-upload the file alongside a shareable 'Arena' URL.
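A minimal sketch of the live mode, assuming 'DALEX' explainers (model choice is illustrative):

library(DALEX)
library(arenar)

model <- glm(survived ~ ., data = titanic_imputed, family = "binomial")
exp_glm <- explain(model, data = titanic_imputed[, -8], y = titanic_imputed$survived)

arena <- create_arena(live = TRUE)    # plots generated on demand
arena <- push_model(arena, exp_glm)   # add an explainer
arena <- push_observations(arena, titanic_imputed[1:5, ])
run_server(arena)                     # opens the 'Arena' web application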
Last updated 4 years ago
explainable-artificial-intelligence, ema, explainability, explanatory-model-analysis, iml, interactive-xai, interpretability, xai
5.94 score 31 stars 14 scripts 243 downloads

hstats - Interaction Statistics
Fast, model-agnostic implementation of different H-statistics introduced by Jerome H. Friedman and Bogdan E. Popescu (2008) <doi:10.1214/07-AOAS148>. These statistics quantify interaction strength per feature, feature pair, and feature triple. The package supports multi-output predictions and can account for case weights. In addition, several variants of the original statistics are provided. The shape of the interactions can be explored through partial dependence plots or individual conditional expectation plots. 'DALEX' explainers, meta learners ('mlr3', 'tidymodels', 'caret') and most other models work out-of-the-box.
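A compact sketch (the linear model with an explicit interaction term is an illustrative choice):

library(hstats)

fit <- lm(Sepal.Length ~ . + Petal.Width:Species, data = iris)
s <- hstats(fit, X = iris[, -1])   # H-statistics for all features and pairs

h2_overall(s)    # interaction strength per feature
h2_pairwise(s)   # interaction strength per feature pair
plot(s)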
Last updated 3 months ago
interaction, interpretability, machine-learning, rstats, statistics, xai
5.76 score 24 stars 34 scripts 329 downloads

EIX - Explain Interactions in 'XGBoost'
Structure mining from 'XGBoost' and 'LightGBM' models. Key functionalities of this package cover: visualisation of tree-based ensemble models, identification of interactions, measuring variable importance, measuring interaction importance, and explanation of single predictions with break-down plots (based on the 'xgboostExplainer' and 'iBreakDown' packages). To download 'LightGBM', use the following link: <https://github.com/Microsoft/LightGBM>. 'EIX' is a part of the 'DrWhy.AI' universe.
Last updated 4 years ago
5.70 score 25 stars 5 scripts 290 downloads

live - Local Interpretable (Model-Agnostic) Visual Explanations
Interpretability of complex machine learning models is a growing concern. This package helps to understand the key factors that drive the decision made by a complicated predictive model (a so-called black-box model). This is achieved through local approximations, based either on an additive regression-like model or on a CART-like model that allows for higher-order interactions. The methodology is based on Tulio Ribeiro, Singh, Guestrin (2016) <doi:10.1145/2939672.2939778>. More details can be found in Staniak, Biecek (2018) <doi:10.32614/RJ-2018-072>.
Last updated 5 years ago
iml, interpretability, lime, machine-learning, model-visualization, visual-explanations, xai
5.57 score 34 stars 55 scripts 311 downloads

vivo - Variable Importance via Oscillations
Provides an easy-to-calculate local variable importance measure based on Ceteris Paribus profiles and a global variable importance measure based on Partial Dependence Profiles.
Last updated 4 years ago
explainable-ai, explainable-artificial-intelligence, explainable-ml, iml, interpretable-machine-learning, variable-importance, xai
5.45 score 14 stars 7 scripts 220 downloads

drifter - Concept Drift and Concept Shift Detection for Predictive Models
Concept drift refers to changes in the data distribution or in the relationships between variables over time. 'drifter' calculates distances between variable distributions or variable relations and identifies both types of drift. Key functions are: calculate_covariate_drift() checks the distance between corresponding variables in two datasets, calculate_residuals_drift() checks the distance between residual distributions for two models, calculate_model_drift() checks the distance between partial dependence profiles for two models, and check_drift() runs all of the above checks. 'drifter' is a part of the 'DrWhy.AI' universe (Biecek 2018) <arXiv:1806.08915>.
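A short sketch of the covariate-drift check, splitting one dataset in two to stand in for "old" and "new" data (a purely illustrative split):

library(drifter)

old_data <- iris[1:75, ]
new_data <- iris[76:150, ]

calculate_covariate_drift(old_data, new_data)  # distances between variable distributions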
Last updated 5 years ago
concept-drift, concept-shift, drwhy, predictive-modeling
4.45 score 19 stars 1 package 5 scripts 191 downloads

corrgrapher - Explore Correlations Between Variables in a Machine Learning Model
When exploring data or models we often examine variables one by one. This analysis is incomplete if the relationship between these variables is not taken into account. The 'corrgrapher' package facilitates simultaneous exploration of the Partial Dependence Profiles and the correlation between variables in the model. The package 'corrgrapher' is a part of the 'DrWhy.AI' universe.
Last updated 4 years ago
explanatory-model-analysis, model-visualization, xai
3.81 score 13 stars 4 scripts 224 downloads

triplot - Explaining Correlated Features in Machine Learning Models
Tools for exploring the effects of correlated features in predictive models. The predict_triplot() function delivers instance-level explanations that calculate the importance of groups of explanatory variables. The model_triplot() function delivers data-level explanations. The generic plot function visualises the importance of hierarchical groups of predictors in a concise way. All of the tools are model-agnostic and therefore work with any predictive machine learning model. Find more details in Biecek (2018) <arXiv:1806.08915>.
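A minimal sketch using the two functions named above, assuming a 'DALEX' explainer (model choice is illustrative):

library(DALEX)
library(triplot)

model <- lm(m2.price ~ ., data = apartments)
exp_lm <- explain(model, data = apartments[, -1], y = apartments$m2.price)

plot(model_triplot(exp_lm))    # data-level importance of hierarchical variable groups
plot(predict_triplot(exp_lm, new_observation = apartments[1, -1]))  # instance-level view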
Last updated 4 years ago
explanations, explanatory-model-analysis, machine-learning, model-visualization, xai
3.65 score 9 stars 7 scripts 150 downloads