• Stars: 20
• Rank: 1,115,742 (Top 23%)
• Language: Jupyter Notebook
• License: MIT
• Created: 11 months ago
• Updated: 7 months ago

Repository Details

This repository implements time series diffusion in the frequency domain.
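The core idea can be sketched as a standard DDPM-style forward process applied to the DFT coefficients of a series rather than to its raw values. The sketch below is an assumption-laden illustration, not the repository's actual API: the function name `frequency_forward_diffusion`, the linear beta schedule, and the complex Gaussian noise model are all choices made here for the example.

```python
import numpy as np

def frequency_forward_diffusion(x, t, alpha_bar):
    """Hypothetical sketch: one DDPM forward noising step in the frequency domain.

    x: real-valued time series, shape (length,)
    t: timestep index
    alpha_bar: cumulative product of (1 - beta), shape (T,)
    """
    # Move the series to the frequency domain (real FFT).
    X = np.fft.rfft(x)
    # Complex Gaussian noise with the same shape as the spectrum.
    noise = np.random.randn(*X.shape) + 1j * np.random.randn(*X.shape)
    # Standard forward process q(x_t | x_0), applied to the coefficients.
    X_t = np.sqrt(alpha_bar[t]) * X + np.sqrt(1.0 - alpha_bar[t]) * noise
    # Back to the time domain; irfft always returns a real-valued array.
    return np.fft.irfft(X_t, n=len(x))

# Example usage with a linear schedule.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)
x = np.sin(np.linspace(0, 4 * np.pi, 64))
x_noisy = frequency_forward_diffusion(x, t=50, alpha_bar=alpha_bar)
```

Noising the spectrum this way means low- and high-frequency components can be corrupted (and later denoised) separately, which is the appeal of working in the frequency domain.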

More Repositories

1. Dynamask

This repository contains the implementation of Dynamask, a method to identify the features that are salient for a model's predictions on time-series data. For more details on the theoretical side, please read our ICML 2021 paper: 'Explaining Time Series Predictions with Dynamic Masks'.
Python · 74 stars
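The mask-based idea can be illustrated by its perturbation operator: where the mask is near 1 the original signal is kept, and where it is near 0 the signal is replaced by an on-manifold baseline such as a moving average. This is only a sketch of that operator (the actual method learns the mask by gradient descent with sparsity penalties); the name `dynamask_perturbation` and the window size are assumptions for the example.

```python
import numpy as np

def dynamask_perturbation(x, mask, window=3):
    """Sketch of a Dynamask-style perturbation operator.

    x, mask: arrays of shape (time, features), mask entries in [0, 1].
    Masked-out entries are replaced by a temporal moving average rather
    than zeros, so the perturbed input stays plausible.
    """
    # Moving-average baseline computed independently per feature column.
    kernel = np.ones(window) / window
    baseline = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, x
    )
    return mask * x + (1.0 - mask) * baseline

# A mask that keeps the first feature intact and blurs the second.
x = np.random.randn(50, 2)
mask = np.zeros_like(x)
mask[:, 0] = 1.0
x_pert = dynamask_perturbation(x, mask)
```

Features whose blurring changes the prediction the most are the ones the mask learns to keep, which is what makes them "salient".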
2. Simplex

This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help of a corpus of examples. For more details, please read our NeurIPS 2021 paper: 'Explaining Latent Representations with a Corpus of Examples'.
Python · 22 stars
3. Label-Free-XAI

This repository contains the implementation of Label-Free XAI, a new framework to adapt explanation methods to unsupervised models. For more details, please read our ICML 2022 paper: 'Label-Free Explainability for Unsupervised Models'.
Python · 22 stars
4. Symbolic-Pursuit

GitHub repository for the NeurIPS 2020 paper 'Learning outside the black-box: the pursuit of interpretable models'.
Jupyter Notebook · 15 stars
5. CARs

This repository contains the implementation of Concept Activation Regions, a new framework to explain deep neural networks with human concepts. For more details, please read our NeurIPS 2022 paper: 'Concept Activation Regions: A Generalized Framework for Concept-Based Explanations'.
Python · 9 stars
6. RobustXAI

This repository contains the implementation of the explanation invariance and equivariance metrics, a framework to evaluate the robustness of interpretability methods.
Jupyter Notebook · 8 stars
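The invariance idea can be sketched as: apply a set of symmetry transformations to an input, explain each transformed copy, and score how similar the explanations stay. The snippet below is an illustration under assumed choices (cosine similarity as the metric, circular shifts as the group actions, the name `explanation_invariance`); the repository's actual metrics may differ.

```python
import numpy as np

def explanation_invariance(explain, x, group_actions):
    """Sketch of an explanation-invariance score.

    Returns the mean cosine similarity between the explanation of x and
    the explanations of its symmetry-transformed copies; a score near 1
    means the explainer is invariant to the given group actions.
    """
    e0 = explain(x).ravel()
    scores = []
    for g in group_actions:
        e = explain(g(x)).ravel()
        scores.append(
            float(e0 @ e / (np.linalg.norm(e0) * np.linalg.norm(e) + 1e-12))
        )
    return float(np.mean(scores))

# Example: an explainer that ignores ordering is invariant to shifts.
explain = lambda x: np.sort(np.abs(x))
shifts = [lambda x, k=k: np.roll(x, k) for k in range(1, 4)]
x = np.random.randn(16)
score = explanation_invariance(explain, x, shifts)
```

Equivariance is scored analogously, except the explanation of the transformed input is compared against the transformed explanation of the original input.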
7. ITErpretability

This repository contains the implementation of ITErpretability, a new framework for benchmarking deep neural network estimators of treatment effects through the lens of interpretability. For more details, please read our NeurIPS 2022 paper: 'Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability'.
Jupyter Notebook · 2 stars
8. Projet-Ray-Tracing

MATLAB · 1 star