Interpretability and explainability of data and machine learning models

AI Explainability 360 (v0.2.1)


The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms covering different dimensions of explanation, along with proxy explainability metrics. In addition to tabular, text, and image data, AIX360 now supports the time series modality as well.

The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best in all cases. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It can therefore be hard to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a chart that can be consulted.

We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your explainability algorithms, metrics, and use cases. To get started as a contributor, please join the AI Explainability 360 Community on Slack by requesting an invitation here. Please review the instructions for contributing code and Python notebooks here.

Supported explainability algorithms

Data explanation

Local post-hoc explanation

Time-Series local post-hoc explanation

  • Time Series Saliency Maps using Integrated Gradients (inspired by Sundararajan et al.)
  • Time Series LIME (time series adaptation of the classic paper by Ribeiro et al. 2016; see the sketch after this list)
  • Time Series Individual Conditional Expectation (time series adaptation of Individual Conditional Expectation plots, Goldstein et al.)
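
To illustrate the idea behind the time series LIME adaptation, here is a minimal sketch of the technique (our illustration, not the toolkit's TSLime implementation): perturb contiguous segments of a series, query the black-box model on the perturbed copies, and fit a weighted linear surrogate whose coefficients act as segment importances. The predict function, segment count, and baseline choice are placeholder assumptions.

import numpy as np
from sklearn.linear_model import Ridge

def ts_lime_sketch(series, predict, n_segments=10, n_samples=500, seed=0):
    """Segment-level importances for a black-box scorer on one time series."""
    rng = np.random.default_rng(seed)
    segments = np.array_split(np.arange(len(series)), n_segments)
    # Random on/off pattern per sample: 1 keeps a segment, 0 masks it.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    baseline = series.mean()  # simple replacement value for masked segments

    perturbed = np.tile(series, (n_samples, 1))
    for j, seg in enumerate(segments):
        off = masks[:, j] == 0
        perturbed[np.ix_(off, seg)] = baseline  # mask segment j where off

    preds = np.array([predict(x) for x in perturbed])
    # Weight perturbed samples by similarity to the original series
    # (here, the fraction of segments left intact).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_  # one importance score per segment

# Toy usage: a "model" that scores a series by the mean of its last 10 points;
# the last segment should receive the largest coefficient.
series = np.sin(np.linspace(0, 6 * np.pi, 100))
print(ts_lime_sketch(series, lambda x: x[-10:].mean()))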

Local direct explanation

Global direct explanation

Global post-hoc explanation

Supported explainability metrics

Setup

Supported Configurations:

Explainer        OS                       Python version
cofrnet          macOS, Ubuntu, Windows   3.10
contrastive      macOS, Ubuntu, Windows   3.6
dipvae           macOS, Ubuntu, Windows   3.10
gce              macOS, Ubuntu, Windows   3.10
imd              macOS, Ubuntu            3.10
lime             macOS, Ubuntu, Windows   3.10
matching         macOS, Ubuntu, Windows   3.10
nncontrastive    macOS, Ubuntu, Windows   3.10
profwt           macOS, Ubuntu, Windows   3.6
protodash        macOS, Ubuntu, Windows   3.10
rbm              macOS, Ubuntu, Windows   3.10
rule_induction   macOS, Ubuntu, Windows   3.10
shap             macOS, Ubuntu, Windows   3.6
ted              macOS, Ubuntu, Windows   3.10
tsice            macOS, Ubuntu, Windows   3.10
tslime           macOS, Ubuntu, Windows   3.10
tssaliency       macOS, Ubuntu, Windows   3.10

(Optional) Create a virtual environment

AI Explainability 360 requires specific versions of many Python packages, which may conflict with other projects on your system. A virtual environment manager is strongly recommended so that dependencies can be installed safely. If you have trouble installing the toolkit, try this first.

Conda

Conda is recommended for all configurations, though Virtualenv is generally interchangeable for our purposes. Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious) and can be installed from here if you do not already have it.

Then, to create a new Python 3.10 environment (or any of the supported Python versions), run:

conda create --name aix360 python=3.10
conda activate aix360

The shell should now look like (aix360) $. To deactivate the environment, run:

(aix360)$ conda deactivate

The prompt will return back to $ or (base)$.

Note: Older versions of conda may use source activate aix360 and source deactivate (activate aix360 and deactivate on Windows).

Installation

Clone the latest version of this repository:

(aix360)$ git clone https://github.com/Trusted-AI/AIX360

If you'd like to run the examples and tutorial notebooks, download the datasets now and place them in their respective folders as described in aix360/data/README.md.

Then, navigate to the root directory of the project, which contains the setup.py file, and run:

(aix360)$ pip install -e .

If you face any issues, please try upgrading pip and setuptools and uninstalling any previous versions of aix360 before attempting the step above again:

(aix360)$ pip install --upgrade pip setuptools
(aix360)$ pip uninstall aix360

With the new setup.py, pip install . installs the default dependencies only. To install the dependencies required by specific algorithms, use pip install .[algo1,algo2]. For example, pip install .[dipvae,cofrnet,tsice].

Running in Docker

  • From the AIX360 directory, build the container image from the Dockerfile using docker build -t aix360_docker .
  • Start the container using docker run -it -p 8888:8888 aix360_docker:latest bash, assuming port 8888 is free on your machine.
  • Inside the container, start Jupyter Lab using jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser
  • Access the sample tutorials on your machine at localhost:8888

PIP Installation of AI Explainability 360

If you would like to start using the AI Explainability 360 toolkit quickly without cloning this repository, you can install the aix360 package from PyPI as follows.

(your environment)$ pip install aix360

If you follow this approach, you may need to download the notebooks in the examples folder separately.

Using AI Explainability 360

The examples directory contains a diverse collection of Jupyter notebooks that use AI Explainability 360 in various ways. Both examples and tutorial notebooks illustrate working code using the toolkit. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and examples here.
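
As a quick illustration of the usage pattern, the sketch below uses the Protodash explainer to pick prototypical rows that summarize a dataset. It is based on the v0.2 API; the exact explain signature and return values may differ across versions, so verify against the complete API documentation.

import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Toy data: 200 samples with 5 features each.
data = np.random.randn(200, 5)

explainer = ProtodashExplainer()
# Select m=5 prototypes of `data` drawn from `data` itself
# (self-summarization); per the v0.2 API this is assumed to return
# prototype weights, the indices of the selected rows, and the set values.
weights, indices, _ = explainer.explain(data, data, m=5)
print("prototype rows:", indices)
print("importance weights:", weights)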

Citing AI Explainability 360

If you are using AI Explainability 360 for your work, we encourage you to

  • Cite the following paper. The BibTeX entry is as follows:

@misc{aix360-sept-2019,
  title = {One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques},
  author = {Vijay Arya and Rachel K. E. Bellamy and Pin-Yu Chen and Amit Dhurandhar and Michael Hind
            and Samuel C. Hoffman and Stephanie Houde and Q. Vera Liao and Ronny Luss and Aleksandra Mojsilovi\'c
            and Sami Mourad and Pablo Pedemonte and Ramya Raghavendra and John Richards and Prasanna Sattigeri
            and Karthikeyan Shanmugam and Moninder Singh and Kush R. Varshney and Dennis Wei and Yunfeng Zhang},
  month = sep,
  year = {2019},
  url = {https://arxiv.org/abs/1909.03012}
}

AIX360 Videos

  • Introductory video to AI Explainability 360 by Vijay Arya and Amit Dhurandhar, September 5, 2019 (35 mins)

Acknowledgements

AIX360 is built with the help of several open source packages, all of which are listed in setup.py.

License Information

Please view both the LICENSE file and the supplementary license folder in the root directory for license information.