  • Stars: 58
  • Rank: 497,656 (Top 11%)
  • Language: R
  • Created: over 5 years ago
  • Updated: 11 months ago

shapper

Badges: CRAN status, total downloads, build status, coverage status, Binder

An R wrapper for the SHAP Python library.

Pkgdown Website

Blog post with a gentle introduction to shapper

Installation and configuration

Install the shapper R package:

devtools::install_github("ModelOriented/shapper")

You can install the shap Python library via:

shapper::install_shap()

If the installation did not work for some reason, try installing the dependencies first:

reticulate::py_install(c("numpy", "pandas"))

or

reticulate::conda_install(c("numpy", "pandas"))

The SHAP Python library can also be installed from PyPI:

pip install shap

or from conda-forge:

conda install -c conda-forge shap

For more details on how to configure Python paths and environments for R, see the reticulate package documentation.
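For example, here is a minimal sketch of pointing reticulate at a specific Python installation before loading shapper; the environment name "r-shap" is only an illustration, not something shapper requires:

library("reticulate")
# use a dedicated conda environment (the name is an assumption for illustration) ...
use_condaenv("r-shap", required = TRUE)
# ... or point directly at a Python binary instead
# use_python("/usr/bin/python3", required = TRUE)
# check that the shap module is visible to reticulate
py_module_available("shap")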

Classification Example

# install shapper
# devtools::install_github("ModelOriented/shapper")

# install shap python library
# shapper::install_shap()

# load datasets
# devtools::install_github("ModelOriented/DALEX2")
library("DALEX2")
Y_train <- HR$status
x_train <- HR[ , -6]

# let's build a random forest model
library("randomForest")
set.seed(123)
model_rf <- randomForest(x = x_train, y = Y_train)

# here shapper starts
# load shapper
library(shapper)
p_function <- function(model, data) predict(model, newdata = data, type = "prob")

ive_rf <- individual_variable_effect(model_rf, data = x_train,
                                     predict_function = p_function,
                                     new_observation = x_train[1:2, ],
                                     nsamples = 50)

# plot
plot(ive_rf)

# keep only the SHAP values for the "fired" class and plot them
ive_rf_filtered <- dplyr::filter(ive_rf, `_ylevel_` == "fired")
shapper:::plot.individual_variable_effect(ive_rf_filtered)
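If, as in other DrWhy.AI packages, the object returned by plot() is a ggplot2 object, it can be customized and saved with standard ggplot2 functions. A minimal sketch; the title and file name are only illustrative:

library("ggplot2")
p <- plot(ive_rf)
# add a custom title (assuming p is a ggplot object)
p + ggtitle("SHAP values for the first two observations of HR")
# save the plot to disk; the file name is illustrative
ggsave("shap_hr.png", plot = p, width = 8, height = 5)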

Regression Example

# load shapper, the data package, and the modelling package
library(shapper)
library("DALEX2")
library("randomForest")

# the response is life_length; the remaining columns are predictors
Y_train <- dragons$life_length
x_train <- dragons[ , -8]

# fit a random forest regression model
set.seed(123)
model_rf <- randomForest(x = x_train, y = Y_train)

# compute SHAP values for the first observation
ive_rf <- individual_variable_effect(model_rf, data = x_train,
                                     new_observation = x_train[1, ])

plot(ive_rf)
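The object returned by individual_variable_effect() can be treated as a data frame of per-variable SHAP contributions (the dplyr::filter() call in the classification example relies on this), so it can be inspected directly. A minimal sketch; exact column names may differ between shapper versions:

# peek at the raw SHAP contributions for the explained observation
head(as.data.frame(ive_rf))
# inspect the structure of the result
str(ive_rf)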

More Repositories

1. DALEX: moDel Agnostic Language for Exploration and eXplanation (Python, 1,318 stars)
2. DrWhy: DrWhy is the collection of tools for eXplainable AI (XAI). It's based on shared principles and simple grammar for exploration, explanation and visualisation of predictive models. (R, 670 stars)
3. modelStudio: 📍 Interactive Studio for Explanatory Model Analysis (R, 318 stars)
4. randomForestExplainer: A set of tools to understand what is happening inside a Random Forest (R, 226 stars)
5. modelDown: modelDown generates a website with HTML summaries for predictive models (R, 118 stars)
6. forester: Trees are all you need (HTML, 107 stars)
7. survex: Explainable Machine Learning in Survival Analysis (R, 86 stars)
8. fairmodels: Flexible tool for bias detection, visualization, and mitigation (R, 82 stars)
9. iBreakDown: Break Down with interactions for local explanations (SHAP, BreakDown, iBreakDown) (R, 79 stars)
10. treeshap: Compute SHAP values for your tree-based models using the TreeSHAP algorithm (R, 75 stars)
11. shapviz: R package for SHAP plots (R, 63 stars)
12. DALEXtra: Extensions for the DALEX package (R, 62 stars)
13. auditor: Model verification, validation, and error analysis (R, 58 stars)
14. ingredients: Effects and Importances of Model Ingredients (R, 37 stars)
15. live: Local Interpretable (Model-agnostic) Visual Explanations - model visualization for regression problems and tabular data based on LIME method. Available on CRAN (R, 34 stars)
16. SAFE: Surrogate Assisted Feature Extraction (Python, 33 stars)
17. DALEX-docs: Documentation for the DALEX project (Jupyter Notebook, 33 stars)
18. kernelshap: Efficient R implementation of SHAP (R, 30 stars)
19. ArenaR: Data generator for Arena - interactive XAI dashboard (R, 29 stars)
20. rSAFE: Surrogate Assisted Feature Extraction in R (R, 28 stars)
21. EIX: Structure mining for xgboost model (R, 25 stars)
22. factorMerger: Set of tools to support results from post hoc testing (R, 24 stars)
23. EMMA: Evaluation of Methods for dealing with Missing data in Machine Learning algorithms (HTML, 23 stars)
24. xspliner: Explain black box with GLM (R, 23 stars)
25. EloML: R package EloML: Elo rating system for machine learning models (R, 23 stars)
26. Arena: Interactive XAI dashboard (Vue, 22 stars)
27. MAIR: Monitoring of AI Regulations (HTML, 19 stars)
28. pyCeterisParibus: Python library for Ceteris Paribus Plots (What-if plots) (Python, 19 stars)
29. xai2shiny: Create Shiny application with model exploration from explainers (R, 19 stars)
30. drifter: Concept Drift and Concept Shift Detection for Predictive Models (R, 18 stars)
31. localModel: LIME-like explanations with interpretable features based on Ceteris Paribus curves. Now on CRAN. (R, 14 stars)
32. vivo: Variable importance via oscillations (R, 14 stars)
33. corrgrapher: Visualize correlations between variables (R, 13 stars)
34. metaMIMIC (Jupyter Notebook, 10 stars)
35. EvidenceBasedML: Evidence-Based Machine Learning (9 stars)
36. weles (Python, 9 stars)
37. triplot: Triplot: Instance- and data-level explanations for the groups of correlated features. (R, 9 stars)
38. xai2cloud: Create web API from model explainers (R, 8 stars)
39. xaibot: XAI chat bot for Titanic model - created with plumber (JavaScript, 7 stars)
40. FairPAN (R, 6 stars)
41. AI-strategies-papers-regulations-monitoring: Monitoring of AI strategies, papers, and regulations (Jupyter Notebook, 6 stars)
42. piBreakDown: python version of iBreakDown (Python, 4 stars)
43. RME: Recurrent Memory Explainer (Python, 3 stars)
44. mogger: Logger for Predictive Models (Java, 2 stars)
45. ceterisParibus2: Very experimental version of the ceterisParibus package. (Jupyter Notebook, 2 stars)
46. DrWhyTemplate (CSS, 2 stars)
47. shimex: R Package for Exploring Models with Shiny App (R, 2 stars)
48. DALEX2: Explain! Package with core wrappers for DrWhy universe. (R, 2 stars)
49. ModelDevelopmentProcess: Source codes for Model Development Process plots (HTML, 1 star)
50. Hex4DrWhy: Shiny app for logo prototyping (R, 1 star)