
DeepLIFT: Deep Learning Important FeaTures


This version of DeepLIFT has been tested with Keras 2.2.4 & tensorflow 1.14.0. See this FAQ question for information on other implementations of DeepLIFT that may work with different versions of tensorflow/pytorch, as well as a wider range of architectures. See the tags for older versions.

This repository implements the methods in "Learning Important Features Through Propagating Activation Differences" by Shrikumar, Greenside & Kundaje, as well as other commonly-used methods such as gradients, gradient-times-input (equivalent to a version of Layerwise Relevance Propagation for ReLU networks), guided backprop and integrated gradients.

Here is a link to the slides and the video of the 15-minute talk given at ICML. Here is a link to a longer series of video tutorials. Please see the FAQ and file a github issue if you have questions.

Note: when running DeepLIFT for certain computer vision tasks you may get better results if you compute contribution scores of some higher convolutional layer rather than the input pixels. Use the argument find_scores_layer_idx to specify which layer to compute the scores for.
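For instance, a minimal sketch (assuming deeplift_model has been created via autoconversion as in the Quickstart below, and assuming that index 2 in deeplift_model.get_layers() happens to be the convolutional layer you want to explain; inspect the layer list to find the right index for your architecture):

#hypothetical example: score a higher conv layer (idx 2) instead of the input pixels (idx 0)
contribs_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_idx=2, #index of the conv layer to compute scores for
    target_layer_idx=-2)     #-2 for sigmoid/softmax outputs (see the Quickstart)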

Please be aware that figuring out optimal references is still an open problem. Suggestions on good heuristics for different applications are welcome. In the meantime, feel free to look at this github issue for general ideas: #104

Please feel free to follow this repository to stay abreast of updates.


Installation

DeepLIFT is on pypi, so it can be installed using pip:

pip install deeplift

If you want to be able to make edits to the code, it is recommended that you clone the repository and install using the --editable flag.

git clone https://github.com/kundajelab/deeplift.git #will clone the deeplift repository
pip install --editable deeplift/ #install deeplift from the cloned repository. The "editable" flag means changes to the code will be picked up automatically.

While DeepLIFT does not require your models to be trained with any particular library, we have provided autoconversion functions to convert models trained using Keras into the DeepLIFT format. If you used a different library to train your models, you can still use DeepLIFT if you recreate the model using DeepLIFT layers.

An earlier release of this implementation was tested with tensorflow 1.7 and autoconversion with keras 2.0; the current version has been tested with Keras 2.2.4 and tensorflow 1.14.0 (see above).

Quickstart

These examples show how to autoconvert a keras model and obtain importance scores. Non-keras models can be converted to DeepLIFT if they are saved in the keras 2.0 format.

#Convert a keras sequential model
import numpy as np
import deeplift
from deeplift.conversion import kerasapi_conversion as kc
#NonlinearMxtsMode defines the method for computing importance scores.
#NonlinearMxtsMode.DeepLIFT_GenomicsDefault uses the RevealCancel rule on Dense layers
#and the Rescale rule on conv layers (see paper for rationale)
#Other supported values are:
#NonlinearMxtsMode.RevealCancel - DeepLIFT-RevealCancel at all layers (used for the MNIST example)
#NonlinearMxtsMode.Rescale - DeepLIFT-rescale at all layers
#NonlinearMxtsMode.Gradient - the 'multipliers' will be the same as the gradients
#NonlinearMxtsMode.GuidedBackprop - the 'multipliers' will be what you get from guided backprop
#Use deeplift.util.get_integrated_gradients_function to compute integrated gradients
#Feel free to email avanti [dot] shrikumar [at] gmail.com if anything is unclear
deeplift_model =\
    kc.convert_model_from_saved_files(
        saved_hdf5_file_path,
        nonlinear_mxts_mode=deeplift.layers.NonlinearMxtsMode.DeepLIFT_GenomicsDefault) 

#Specify the index of the layer to compute the importance scores of.
#In the example below, we find scores for the input layer, which is idx 0 in deeplift_model.get_layers()
find_scores_layer_idx = 0

#Compile the function that computes the contribution scores
#For sigmoid or softmax outputs, target_layer_idx should be -2 (the default). This computes explanations
# w.r.t. the logits. (See "3.6 Choice of target layer" in https://arxiv.org/abs/1704.02685 for justification)
#For regression tasks with a linear output, target_layer_idx should be -1 (which simply refers to the last layer)
#Note that in the case of softmax outputs, it may be a good idea to normalize the softmax logits so
# that they sum to zero across all tasks. This ensures that if a feature is contributing equally
# to all the softmax logits, it will effectively be seen as contributing to none of the tasks (adding
# a constant to all logits of a softmax does not change the output). This is discussed in
# https://github.com/kundajelab/deeplift/issues/116. One way to efficiently achieve this
# normalization is to mean-normalize the weights going into the Softmax layer as
# discussed in eqn. 21 in Section 2.5 of https://arxiv.org/pdf/1605.01713.pdf ("A note on Softmax Activation")
#If you want the DeepLIFT multipliers instead of the contribution scores, you can use get_target_multipliers_func
deeplift_contribs_func = deeplift_model.get_target_contribs_func(
                            find_scores_layer_idx=find_scores_layer_idx,
                            target_layer_idx=-2)
#You can also provide an array of indices to find_scores_layer_idx to get scores for multiple layers at once

#compute scores on inputs
#input_data_list is a list containing the data for different input layers
#eg: for MNIST, there is one input layer with dimensions 1 x 28 x 28
#In the example below, let X be an array with dimension n x 1 x 28 x 28 where n is the number of examples
#task_idx represents the index of the node in the output layer for which we wish to compute scores.
#Eg: if the output is a 10-way softmax, and task_idx is 0, we will compute scores for the first softmax class
scores = np.array(deeplift_contribs_func(task_idx=0,
                                         input_data_list=[X],
                                         batch_size=10,
                                         progress_update=1000))
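
As a concrete illustration of the softmax normalization discussed in the comments above, here is a minimal numpy sketch (assuming W and b are the kernel and bias feeding the softmax, with the softmax classes along the last axis of W; how you read these weights out of your model depends on your keras version):

#mean-normalize the weights going into the softmax across classes
#(eqn. 21, Section 2.5 of https://arxiv.org/pdf/1605.01713.pdf)
#subtracting a per-input constant from all classes leaves the softmax
#output unchanged, but zeroes out contributions shared across all tasks
W_norm = W - np.mean(W, axis=-1, keepdims=True)
b_norm = b - np.mean(b)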

This will work for sequential models involving dense and/or conv1d/conv2d layers and linear/relu/sigmoid/softmax or prelu activations. Please create a github issue or email avanti [dot] shrikumar [at] gmail.com if you are interested in support for other layer types.

The syntax for using functional models is similar; you can use deeplift_model.get_name_to_layer().keys() to get a list of layer names when figuring out how to specify find_scores_layer_name and pre_activation_target_layer_name:

deeplift_model =\
    kc.convert_model_from_saved_files(
        saved_hdf5_file_path,
        nonlinear_mxts_mode=deeplift.layers.NonlinearMxtsMode.DeepLIFT_GenomicsDefault) 
#The syntax below for obtaining scores is similar to that of a converted graph model
#See deeplift_model.get_name_to_layer().keys() to see all the layer names
#As before, you can provide an array of names to find_scores_layer_name
#to get the scores for multiple layers at once
deeplift_contribs_func = deeplift_model.get_target_contribs_func(
    find_scores_layer_name="name_of_input_layer",
    pre_activation_target_layer_name="name_goes_here")

Examples

A notebook replicating the results in the paper on MNIST is at examples/mnist/MNIST_replicate_figures.ipynb, and a notebook demonstrating use on a genomics model with 1d convolutions is at examples/genomics/genomics_simulation.ipynb.

FAQ

Can you provide a brief intuition for how DeepLIFT works?

The 15-minute talk from ICML gives an intuition for the method. Here are links to the slides and the video (the video truncates the slides, which is why the slides are linked separately). Please file a github issue if you have questions.

My model architecture is not supported by this DeepLIFT implementation. What should I do?

My first suggestion would be to look at DeepSHAP/DeepExplainer (Lundberg & Lee), DeepExplain (Ancona et al.) or Captum (if you are using pytorch) to see if any of them satisfy your needs. They are implemented by overriding gradient operators and thus support a wider variety of architectures. However, none of these implementations support the RevealCancel rule (which deals with failure modes such as the min function). The pros and cons of DeepSHAP vs DeepExplain are discussed in more detail below. If you would really like to have the RevealCancel rule, go ahead and post a github issue, although my energies are currently focused on other projects and I may not be able to get to it for some time.

Note for people in genomics planning to use TF-MoDISco: for DeepSHAP, I have a custom branch of the DeepSHAP repository that has functionality for computing hypothetical importance scores. A colab notebook demonstrating the use of that repository is here, and a tutorial I made on DeepSHAP for genomics is here.

What are the similarities and differences between the DeepLIFT-like implementations in DeepExplain from Ancona et al. (ICLR 2018) and DeepSHAP/DeepExplainer from the SHAP repository?

Both DeepExplain (Ancona et al.) and DeepSHAP/DeepExplainer work by overriding gradient operators, and can thus support a wider variety of architectures than those that are covered in the DeepLIFT repo (in fact, the DeepSHAP/DeepExplainer implementation was inspired by Ancona et al.'s work and builds on a connection between DeepLIFT and SHAP, described in the SHAP paper). For the set of architectures described in the DeepLIFT paper, i.e. linear matrix multiplications, convolutions, and single-input nonlinearities (like ReLUs), both these implementations are identical to DeepLIFT with the Rescale rule. However, neither implementation supports DeepLIFT with the RevealCancel rule (a rule that was developed to deal with failure cases such as the min function, and which unfortunately is not easily implemented by overriding gradient operators). The key differences are as follows:

(1) DeepExplain uses standard gradient backpropagation for elementwise operations (such as those present in LSTMs/GRUs/Attention). This will likely violate the summation-to-delta property (i.e. the property that the sum of the attributions over the input is equal to the difference-from-reference of the output). If you have elementwise operations, I recommend you use DeepSHAP/DeepExplainer, which employs a summation-to-delta-preserving backprop rule. The same is technically true for Maxpooling operations when a non-uniform reference is used (though this has not been a salient problem for us in practice); the DeepSHAP/DeepExplainer implementation guarantees summation-to-delta is satisfied for Maxpooling by assigning credit/blame to either the neuron that is the max in the actual input or the neuron that was the max in the reference (this is different from the 'Max' attribution rule proposed in the SHAP paper; that attribution rule does not scale well).

(2) DeepExplain (by Ancona et al.) does not support the dynamic reference that is demonstrated in the DeepLIFT repo (i.e. the case where a different reference is generated according to the properties of the input example, such as the 'dinucleotide shuffled' references used in genomics). I've implemented the dynamic reference feature for DeepSHAP/DeepExplainer (click for a link to the PR). Also, if you are planning to use DeepSHAP for genomics with TF-MoDISco, please see the note above on my custom implementation of DeepSHAP for computing hypothetical importance scores + a link to the slides for a tutorial.

(3) DeepSHAP/DeepExplainer is implemented such that multiple references can be used for a single example, and the final attributions are averaged over each reference. However, the way this is implemented, each GPU batch calculates attributions for a single example, for all references. This means that the DeepSHAP/DeepExplainer implementation might be slow in cases where you have a large number of samples and only one reference. By contrast, DeepExplain (Ancona et al.) is structured such that the user provides a single reference, and this reference is used for all the examples. Thus, DeepExplain (Ancona et al.) allows GPU batching across examples, but does not allow for GPU batching across different references.

In summary, my recommendations are: use DeepSHAP if you have elementwise operations (e.g. GRUs/LSTMs/Attention), a need for dynamic references, or a large number of references compared to samples. Use DeepExplain when you have a large number of samples compared to references.

How does the implementation in this repository compare with the DeepLIFT implementation in Poerner et al. (ACL 2018)?

Poerner et al. conducted a series of benchmarks comparing DeepLIFT to other explanation methods on NLP tasks. Their implementation differs from the canonical DeepLIFT implementation in two main ways. First, they considered only the Rescale rule of DeepLIFT (according to the implementation here). Second, to handle operations that involve multiplications with gating units (which DeepLIFT was not designed for), they treat the gating neuron as a weight (similar to the approach in Arras et al.) and assign all importance to the non-gating neuron. Note that this differs from the implementation in DeepSHAP/DeepExplainer, which handles elementwise multiplications using a backprop rule based on SHAP and would assign importance to the gating neuron. We have not studied the appropriateness of Arras et al.'s approach, but the authors did find that "LIMSSE, LRP (Bach et al., 2015) and DeepLIFT (Shrikumar et al., 2017) are the most effective explanation methods (§4): LRP and DeepLIFT are the most consistent methods, while LIMSSE wins the hybrid document experiment." (They did not compare with the DeepSHAP/DeepExplainer implementation.)

How does DeepLIFT compare to integrated gradients?

As illustrated in the DeepLIFT paper, the RevealCancel rule of DeepLIFT can allow DeepLIFT to properly handle cases where integrated gradients may give misleading results. Independent researchers have found that DeepLIFT with just the Rescale rule performs comparably to Integrated Gradients (they write: "Integrated Gradients and DeepLIFT have very high correlation, suggesting that the latter is a good (and faster) approximation of the former in practice"). Their finding was consistent with our own personal experience. The speed improvement of DeepLIFT relative to Integrated Gradients becomes particularly useful when using a collection of references (since having a collection of references per example increases runtime).

Do you have support for non-keras models?

At the moment, we do not. However, if you are able to convert your model into the saved file format used by the Keras 2 API, then you can use this branch to load it into the DeepLIFT format. For inspiration on how to achieve this, you can look at examples/convert_models/keras1.2to2 for a notebook demonstrating how to convert models saved in the keras1.2 format to keras 2. DeepLIFT conversion works directly from keras saved files without ever actually loading the models into keras. If you have a pytorch model, you may also be interested in the Captum implementation.

What do negative scores mean?

A negative contribution score on an input means that the input contributed to moving the output below its reference value, where the reference value of the output is the value that it has when provided the reference input. A negative contribution does not mean that the input is "unimportant". If you want to find inputs that DeepLIFT considers "unimportant" (i.e. DeepLIFT thinks they don't influence the output of the model much), these would be the inputs that have contribution scores near 0.

How do I provide a reference argument?

Just as you supply input_data_list as an argument to the scoring function, you can also supply input_references_list. It would have the same dimensions as input_data_list, but would contain reference images for each input.
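
For example, a minimal sketch (assuming X and deeplift_contribs_func are as in the Quickstart, and assuming an all-zeros reference is appropriate for your task, which it often is not; see the next question):

#an all-zeros reference with the same shape as the input data
X_references = np.zeros_like(X)
scores = np.array(deeplift_contribs_func(task_idx=0,
                                         input_data_list=[X],
                                         input_references_list=[X_references],
                                         batch_size=10,
                                         progress_update=1000))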

What should I use as my reference?

The choice of reference depends on the question you wish to ask of the data. Generally speaking, the reference should retain the properties you don't care about and scramble the properties you do care about. In the supplement of the DeepLIFT paper, Appendix L looks at the results on a CIFAR10 model with two different choices of the reference. You'll notice that when a blurred version of the input is used as a reference, the outlines of objects stand out. When a black reference is used, the results are more confusing, possibly because the net is also highlighting color. If you have a particular reference in mind, it is a good idea to check that the output of the model on that reference is consistent with what you expect. Another idea to consider is using multiple different references to interpret a single image and averaging the results over all the different references. We use this approach in genomics; we generate a collection of references per input sequence by shuffling the sequence (this is demonstrated in the genomics example notebook).
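
A minimal sketch of the averaging-over-references idea (assuming a single input mode; the per-example shuffle below is a naive stand-in for a domain-appropriate scrambling such as the dinucleotide shuffling we use in genomics):

#average contribution scores over several scrambled references per example
num_refs = 10
rng = np.random.RandomState(0)
per_ref_scores = []
for _ in range(num_refs):
    #scramble each example independently to build one reference per example
    X_refs = np.array([rng.permutation(x.ravel()).reshape(x.shape) for x in X])
    per_ref_scores.append(np.array(deeplift_contribs_func(
                              task_idx=0,
                              input_data_list=[X],
                              input_references_list=[X_refs],
                              batch_size=10,
                              progress_update=1000)))
scores_avg = np.mean(per_ref_scores, axis=0)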

How can I get a sense of how much an input contributes across all examples?

It is fine to average the DeepLIFT contribution scores across examples. Be aware that there might be considerable heterogeneity in your data (i.e. some inputs may be very important for some subset of examples but not others, some inputs may contribute positively on some examples and negatively on others) so clustering may prove more insightful than averaging. For the purpose of feature selection, a reasonable heuristic would be to rank inputs in descending order of the average magnitude of the DeepLIFT contribution scores.
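
For instance, a minimal numpy sketch (assuming scores is an n x num_features array with examples along the first axis):

#rank features by the average magnitude of their contribution scores
mean_abs_scores = np.mean(np.abs(scores), axis=0)
feature_ranking = np.argsort(-mean_abs_scores) #feature indices, most important first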

Can I have multiple input modes?

Yes. Rather than providing a single numpy array to input_data_list, provide a list of numpy arrays containing the input to each mode. You can also provide a dictionary to input_data_list where the key is the mode name and the value is the numpy array. Each numpy array should have the first axis be the sample axis.
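
For instance, a minimal sketch (assuming a hypothetical two-mode model, where X_mode1 and X_mode2 are numpy arrays whose first axes index the same n samples):

#one numpy array per input mode, in the order of the model's input layers
scores = deeplift_contribs_func(task_idx=0,
                                input_data_list=[X_mode1, X_mode2],
                                batch_size=10,
                                progress_update=1000)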

Can I get the contribution scores on multiple input layers at once?

Also yes. Just provide a list to find_scores_layer_name rather than a single argument.

What's the license?

MIT License. While we had originally filed a patent on some of our interpretability work, we have since abandoned the patent, as this project has enough interest from the community that it is best distributed in an open-source format.

I have heard DeepLIFT can do pattern discovery - is that right?

You are likely thinking of TF-MoDISco. Here is a link to that code.

Contact

Please email avanti [dot] shrikumar [at] gmail.com with questions, ideas, feature requests, etc. If I don't respond, keep emailing me until I feel guilty and respond. Also feel free to email my adviser (anshul [at] kundaje [dot] net), who can further guilt me into responding. I promise I do actually want to respond; I'm just busy with other things because the incentive structure of academia doesn't reward maintenance of projects.

Under the hood

This section explains the finer aspects of the DeepLIFT implementation.

Layers

The layer (deeplift.layers.core.Layer) is the basic unit. deeplift.layers.core.Dense and deeplift.layers.convolution.Conv2D are both examples of layers.

Layers implement the following key methods:

get_activation_vars()

Returns symbolic variables representing the activations of the layer. For an understanding of symbolic variables, refer to the documentation of symbolic computation packages like theano or tensorflow.

get_pos_mxts() and get_neg_mxts()

Returns symbolic variables representing the positive/negative multipliers on this layer (for the selected output). See paper for details.

get_target_contrib_vars()

Returns symbolic variables representing the importance scores. This is a convenience function that returns self.get_pos_mxts()*self._pos_contribs() + self.get_neg_mxts()*self._neg_contribs(). See paper for details.

The Forward Pass

Here are the steps necessary to implement a forward pass. If executed correctly, the results should be identical (within numerical precision) to a forward pass of your original model, so this is definitely worth doing as a sanity check. Note that if autoconversion (as described in the quickstart) is an option, you can skip steps (1) and (2).

  1. Create a layer object for every layer in the network
  2. Tell each layer what its inputs are via the set_inputs function. The argument to set_inputs depends on what the layer expects
  • If the layer has a single layer as its input (eg: Dense layers), then the argument is simply the layer that is the input
  • If the layer takes multiple layers as its input, the argument depends on the specific implementation - for example, in the case of a Concat layer, the argument is a list of layers
  3. Once every layer is linked to its inputs, you may compile the forward propagation function with deeplift.backend.function([input_layer.get_activation_vars()...], output_layer.get_activation_vars())
  • If you are working with a model produced by autoconversion, you can access individual layers via model.get_layers() for sequential models (where this function would return a list of layers) or model.get_name_to_layer() for Graph models (where this function would return a dictionary mapping layer names to layers)
  • The first argument is a list of symbolic tensors representing the inputs to the net. If the net has only one input layer, then this will be a list containing only one tensor
  • The second argument is the output of the function. In the example above, it is a single tensor, but it can also be a list of tensors if you want the outputs of more than one layer
  4. Once the function is compiled, you can use deeplift.util.run_function_in_batches(func, input_data_list) to run the function in batches (which would be advisable if you want to call the function on a large number of inputs that won't fit in memory)
  • func is simply the compiled function returned by deeplift.backend.function
  • input_data_list is a list of numpy arrays containing data for the different input layers of the network. In the case of a network with one input, this will be a list containing one numpy array
  • Optional arguments to run_function_in_batches are batch_size and progress_update (a sketch tying these steps together follows this list)
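
Putting these steps together, a minimal sketch for an autoconverted sequential model (assuming deeplift_model was produced as in the Quickstart and X is a numpy array of inputs; the choice of first/last layers is illustrative):

import numpy as np
import deeplift

layers = deeplift_model.get_layers() #list of layers for a sequential model
forward_func = deeplift.backend.function(
    [layers[0].get_activation_vars()], #one symbolic tensor per input layer
    layers[-1].get_activation_vars())  #activations of the output layer
#run in batches so that large inputs need not fit in memory all at once
outputs = np.array(deeplift.util.run_function_in_batches(
                       forward_func,
                       input_data_list=[X],
                       batch_size=200,
                       progress_update=10000))
#sanity check: outputs should match the original model within numerical precision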

The Backward Pass

Here are the steps necessary to implement the backward pass, which is where the importance scores are calculated. Ideally, you should create a model through autoconversion (described in the quickstart) and then use model.get_target_contribs_func or model.get_target_multipliers_func. However, if that is not an option, read on (and please also consider sending us a message to let us know; if there is enough demand for a feature, we will consider adding it). A sketch tying the steps together follows the list. Note that the instructions below assume you have done steps (1) and (2) under the forward pass section.

  1. For the layer(s) that you wish to compute the importance scores for, call reset_mxts_updated(). This resets the symbolic variables for computing the multipliers. If this is the first time you are compiling the backward pass, this step is not strictly necessary.

  2. For the output layer(s) containing the neuron(s) that the importance scores will be calculated with respect to, call set_scoring_mode(deeplift.layers.ScoringMode.OneAndZeros).

    • Briefly, this is the scoring mode that is used when we want to find scores with respect to a single target neuron. Other kinds of scoring modes may be added later (eg: differences between neurons).
    • A point of clarification: when we eventually compile the function, it will be a function which computes scores for only a single output neuron in a single layer every time it is called. The specific neuron and layer can be toggled later, at runtime. Right now, at this step, you should call set_scoring_mode on all the target layers that you might conceivably want to find the scores with respect to. This will save you from having to recompile the function to allow a different target layer later.
    • For Sigmoid/Softmax output layers, the output layer that you use should be the linear layer (usually a Dense layer) that comes before the final nonlinear activation. See "3.6 Choice of target layer" in the paper for justification. If there is no final nonlinearity (eg: in the case of many regression tasks), then the output layer should just be the last linear layer.
    • For Softmax outputs, you may want to subtract the average contribution to all softmax classes as described in "Adjustments for softmax layers" in the paper (section 3.6). If your number of softmax classes is very large and you don't want to calculate contributions to each class separately for each example, contact me (avanti [dot] shrikumar [at] gmail.com) and I can implement a more efficient way to do the calculation (there is a way but I haven't coded it up yet).
  3. For the layer(s) that you wish to compute the importance scores for, call update_mxts(). This will create the symbolic variables that compute the multipliers with respect to the layer specified in step 2.

  4. Compile the importance score computation function with

    deeplift.backend.function([input_layer.get_activation_vars()...,
                               input_layer.get_reference_vars()...],
                              layer_to_find_scores_for.get_target_contrib_vars())
    • The first argument represents the inputs to the function and should be a list of one symbolic tensor for the activations of each input layer (as for the forward pass), followed by a list of one symbolic tensor for the references of each input layer
    • The second argument represents the output of the function. In the example above, it is a single tensor containing the importance scores of a single layer, but it can also be a list of tensors if you wish to compute the scores for multiple layers at once.
    • Instead of get_target_contrib_vars() which returns the importance scores (in the case of NonlinearMxtsMode.DeepLIFT, these are called "contribution scores"), you can use get_pos_mxts() or get_neg_mxts() to get the multipliers.
  5. Now you are ready to call the function to find the importance scores.

    • Select a specific output layer to compute importance scores with respect to by calling set_active() on the layer.
    • Select a specific target neuron within the layer by calling update_task_index(task_idx) on the layer. Here task_idx is the index of a neuron within the layer.
    • Call the function compiled in step 4 to find the importance scores for the target neuron. Refer to step 4 in the forward pass section for tips on using deeplift.util.run_function_in_batches to do this.
    • Deselect the output layer by calling set_inactive() on the layer. Don't forget this!
    • (Yes, I will bundle all of these into a single function at some point)
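
Tying the steps together, a minimal end-to-end sketch (assuming a sequential model whose pre-activation Dense layer is second-from-last, and X/X_references as numpy arrays; indices are illustrative):

import numpy as np
import deeplift

layers = deeplift_model.get_layers()
input_layer, target_layer = layers[0], layers[-2] #-2: linear layer before the final nonlinearity
input_layer.reset_mxts_updated()                  #step 1: reset the symbolic multipliers
target_layer.set_scoring_mode(
    deeplift.layers.ScoringMode.OneAndZeros)      #step 2: score w.r.t. one target neuron at a time
input_layer.update_mxts()                         #step 3: build multipliers w.r.t. the target layer
contribs_func = deeplift.backend.function(        #step 4: compile the scoring function
    [input_layer.get_activation_vars(),
     input_layer.get_reference_vars()],
    input_layer.get_target_contrib_vars())
target_layer.set_active()                         #step 5: select the target layer...
target_layer.update_task_index(task_idx=0)        #...and the neuron within it
scores = np.array(deeplift.util.run_function_in_batches(
                      contribs_func,
                      input_data_list=[X, X_references],
                      batch_size=200,
                      progress_update=10000))
target_layer.set_inactive() #don't forget this!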
