👋 Xplique is a Neural Networks Explainability Toolbox
Xplique


🦊 Xplique (pronounced \ɛks.plik\) is a Python toolkit dedicated to explainability, currently based on TensorFlow. The goal of this library is to gather the state of the art of Explainable AI to help you understand your complex neural network models.
Explore Xplique docs »

Attributions · Concept · Feature Visualization · Metrics

The library is composed of several modules. The Attributions Methods module implements various methods (e.g., Saliency, Grad-CAM, Integrated Gradients...), with explanations, examples and links to the official papers. The Feature Visualization module lets you see how neural networks build their understanding of images by finding inputs that maximize neurons, channels, layers or compositions of these elements. The Concepts module allows you to extract human concepts from a model and to test their usefulness with respect to a class. Finally, the Metrics module covers the current metrics used in explainability; used in conjunction with the Attributions Methods module, it allows you to test the different methods or evaluate the explanations of a model.
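
For orientation, each module lives in its own sub-package. The imports below are a minimal sketch, limited to the classes and functions that appear in the Quick Start examples later in this README:

# one entry point per module, as used in the examples below
from xplique.attributions import GradCAM                          # Attributions Methods
from xplique.metrics import Deletion                              # Metrics
from xplique.concepts import Cav                                  # Concepts
from xplique.features_visualizations import Objective, optimize   # Feature Visualization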


📚 Table of contents

🔥 Tutorials

We propose some hands-on tutorials to get familiar with the library and its API:

You can find a number of other practical tutorials just here. This section is actively developed and more content will be added. We will try to cover all possible uses of the library; feel free to contact us if you have suggestions or recommendations for tutorials you would like to see.

🚀 Quick Start

Xplique requires a Python version higher than 3.6 and several libraries, including TensorFlow and NumPy. Installation can be done using PyPI:

pip install xplique
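
To check that the installation worked, importing the package is usually enough (a minimal sanity check; the __version__ attribute is an assumption about the package layout):

import xplique

# confirm the package is importable and print its version
print(xplique.__version__)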

Now that Xplique is installed, here are 4 basic examples of what you can do with the available modules.

Attributions Methods

Let's start with a simple example: computing Grad-CAM for several images (or a complete dataset) on a trained model.

from xplique.attributions import GradCAM

# load images, labels and model
# ...

explainer = GradCAM(model)
explanations = explainer.explain(images, labels)
# or just `explainer(images, labels)`

All attribution methods share a common API. You can find out more about it here.
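
Because of this shared API, switching methods is mostly a one-line change. Below is a hedged sketch reusing the images, labels and model from above; the extra constructor argument (steps) is method-specific and shown only as an illustration:

from xplique.attributions import Saliency, IntegratedGradients

# same pattern as GradCAM: build an explainer, then call explain
saliency_maps = Saliency(model).explain(images, labels)
ig_maps = IntegratedGradients(model, steps=50).explain(images, labels)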

Attributions Metrics

To measure whether the explanations provided by our method are faithful (i.e., whether they reflect how the model actually works), we can use a fidelity metric such as Deletion.

from xplique.attributions import GradCAM
from xplique.metrics import Deletion

# load inputs, labels and model
# ...

explainer = GradCAM(model)
explanations = explainer(inputs, labels)
metric = Deletion(model, inputs, labels)

score_grad_cam = metric(explanations)

All attribution metrics share a common API. You can find out more about it here.
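
Since both explainers and metrics share their respective APIs, it is easy to score several attribution methods against the same metric. A minimal sketch, assuming the inputs, labels and model from the example above:

from xplique.attributions import GradCAM, Saliency
from xplique.metrics import Deletion

metric = Deletion(model, inputs, labels)

# compute each method's explanations and score them with the same metric
scores = {}
for name, method in [("GradCAM", GradCAM), ("Saliency", Saliency)]:
    explanations = method(model)(inputs, labels)
    scores[name] = metric(explanations)

print(scores)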

Concepts Extraction

For concept-based methods, we can, for example, extract a concept vector from a layer of a model. To do this, we use two datasets: positive_samples, containing inputs that exhibit the concept, and negative_samples, containing inputs that do not.

from xplique.concepts import Cav

# load a model, samples that contain a concept
# (positive) and samples that don't (negative)
# ...

extractor = Cav(model, 'mixed3')
concept_vector = extractor(positive_samples,
                           negative_samples)

More information on CAV here and on TCAV here.
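
Once extracted, the concept vector is simply a direction in the layer's activation space, so it can be probed with standard tools. A hedged sketch using NumPy only; activations is a hypothetical array of flattened 'mixed3' activations obtained from your own forward pass:

import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between an activation vector and the concept vector
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# how strongly each sample's activations align with the extracted concept
alignments = [cosine_similarity(act, concept_vector) for act in activations]
print(np.mean(alignments))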

Feature Visualization

Finally, to find an image that maximizes a neuron and, at the same time, a layer, we build two objectives and combine them. We then call the optimizer, which returns our images.

from xplique.features_visualizations import Objective
from xplique.features_visualizations import optimize

# load a model...

neuron_obj = Objective.neuron(model, "logits", 200)
layer_obj = Objective.layer(model, "mixed3", 10)

obj = neuron_obj + 2.0 * layer_obj
images, obj_names = optimize(obj)

Want to know more? Check the Feature Viz documentation.
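
To inspect the result, the returned images can be displayed with any plotting library. A minimal sketch, assuming the arrays returned by optimize are directly displayable (you may need to rescale them to [0, 1]):

import matplotlib.pyplot as plt

# show each optimized image together with the name of its objective
for image, name in zip(images, obj_names):
    plt.figure()
    plt.title(name)
    plt.imshow(image)  # if an entry is a batch, select a single image first (e.g. image[0])
    plt.axis("off")
plt.show()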

📦 What's Included

All the attribution methods presented below handle both classification and regression tasks (a minimal tabular regression sketch follows the tables below).

Attribution Method | Type of Model | Source | Tabular Data | Images | Time-Series | Tutorial
--- | --- | --- | --- | --- | --- | ---
Deconvolution | TF | Paper | ✔ | ✔ | WIP | Open In Colab
Grad-CAM | TF | Paper | | ✔ | | Open In Colab
Grad-CAM++ | TF | Paper | | ✔ | | Open In Colab
Gradient Input | TF | Paper | ✔ | ✔ | WIP | Open In Colab
Guided Backprop | TF | Paper | ✔ | ✔ | WIP | Open In Colab
Integrated Gradients | TF | Paper | ✔ | ✔ | WIP | Open In Colab
Kernel SHAP | Callable* | Paper | ✔ | ✔ | WIP | Open In Colab
Lime | Callable* | Paper | ✔ | ✔ | WIP | Open In Colab
Occlusion | Callable* | Paper | ✔ | ✔ | WIP | Open In Colab
Rise | Callable* | Paper | WIP | ✔ | | Open In Colab
Saliency | TF | Paper | ✔ | ✔ | WIP | Open In Colab
SmoothGrad | TF | Paper | ✔ | ✔ | WIP | Open In Colab
SquareGrad | TF | Paper | ✔ | ✔ | WIP | Open In Colab
VarGrad | TF | Paper | ✔ | ✔ | WIP | Open In Colab
Sobol Attribution | TF | Paper | | ✔ | | Open In Colab
Hsic Attribution | TF | Paper | | ✔ | | Open In Colab

Attribution Metrics | Type of Model | Property | Source
--- | --- | --- | ---
MuFidelity | TF | Fidelity | Paper
Deletion | TF | Fidelity | Paper
Insertion | TF | Fidelity | Paper
Average Stability | TF | Stability | Paper
MeGe | TF | Representativity | Paper
ReCo | TF | Consistency | Paper
(WIP) e-robustness | | |

Concepts method | Type of Model | Source
--- | --- | ---
Concept Activation Vector (CAV) | TF | Paper
Testing CAV (TCAV) | TF | Paper
(WIP) Robust TCAV | |
(WIP) Automatic Concept Extraction (ACE) | |

Feature Visualization (Paper) | Type of Model | Details
--- | --- | ---
Neurons | TF | Optimizes for specific neurons
Layer | TF | Optimizes for specific layers
Channel | TF | Optimizes for specific channels
Direction | TF | Optimizes for a specific vector
Fourier Preconditioning | TF | Optimizes in the Fourier basis (see preconditioning)
Objective combination | TF | Allows combining objectives

Methods marked TF require a TensorFlow model.
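
As noted above the tables, attribution methods are not limited to images. Below is a hedged sketch of attributions on a tabular regression model; the architecture, data shapes and choice of Integrated Gradients are illustrative assumptions, not prescriptions from the library:

import numpy as np
import tensorflow as tf
from xplique.attributions import IntegratedGradients

# toy regression model: 8 tabular features in, 1 continuous value out
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

inputs = np.random.rand(16, 8).astype("float32")
targets = np.ones((16, 1), dtype="float32")  # output values to explain against

explainer = IntegratedGradients(model)
explanations = explainer.explain(inputs, targets)  # one importance score per feature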

๐Ÿ‘ Contributing

Feel free to propose your ideas or come and contribute with us on the Xplique toolbox! We have a dedicated document describing, in a simple way, how to make your first pull request: just here.

👀 See Also

This library is one approach among many to explain your model. We don't expect it to be the perfect solution; we created it to explore one point in the space of possibilities.

Other tools to explain your model include:

  • Lucid, the wonderful library from OpenAI specialized in feature visualization.
  • Captum, the PyTorch library for interpretability research.
  • Tf-explain, which implements multiple attribution methods and provides a callback API for TensorFlow.
  • Alibi Explain, for model inspection and interpretation.
  • SHAP, a very popular library for computing local explanations using the classic Shapley values from game theory and their related extensions.

To learn more about Explainable AI in general, see:

๐Ÿ™ Acknowledgments

This project received funding from the French "Investing for the Future – PIA3" program within the Artificial and Natural Intelligence Toulouse Institute (ANITI). The authors gratefully acknowledge the support of the DEEL project.

๐Ÿ‘จโ€๐ŸŽ“ Creators

This library was started as a side-project by Thomas FEL who is currently a graduate student at the Artificial and Natural Intelligence Toulouse Institute under the direction of Thomas SERRE. His thesis work focuses on explainability for deep neural networks.

He then received help from some members of the DEEL team to enhance the library, namely Lucas Hervier and Antonin Poché.

๐Ÿ—ž๏ธ Citation

If you use Xplique as part of your workflow in a scientific publication, please consider citing the 🗞️ Xplique official paper:

@article{fel2022xplique,
  title={Xplique: A Deep Learning Explainability Toolbox},
  author={Fel, Thomas and Hervier, Lucas and Vigouroux, David and Poche, Antonin and Plakoo, Justin and Cadene, Remi and Chalvidal, Mathieu and Colin, Julien and Boissin, Thibaut and Bethune, Louis and Picard, Agustin and Nicodeme, Claire 
          and Gardes, Laurent and Flandin, Gregory and Serre, Thomas},
  journal={Workshop on Explainable Artificial Intelligence for Computer Vision (CVPR)},
  year={2022}
}

๐Ÿ“ License

The package is released under MIT license.