
Protein-compound affinity prediction through unified RNN-CNN

DeepAffinity: Intro

Drug discovery demands rapid quantification of compound-protein interaction (CPI). However, there is a lack of methods that can predict compound-protein affinity from sequences alone with high applicability, accuracy, and interpretability. We present an integration of domain knowledge and learning-based approaches. Under novel representations of structurally annotated protein sequences, we propose a semi-supervised deep learning model that unifies recurrent and convolutional neural networks to exploit both unlabeled and labeled data for jointly encoding molecular representations and predicting affinities. Performance for new protein classes with few labeled examples is further improved by transfer learning. Furthermore, novel attention mechanisms are developed and embedded in our model to add to its interpretability. Lastly, alternative representations using protein sequences or compound graphs, and a unified RNN/GCNN-CNN model using graph CNN (GCNN), are also explored to reveal algorithmic challenges ahead.

DeepAffinity: interpretable deep learning of compound–protein affinity through unified recurrent and convolutional neural networks

(What has happened since DeepAffinity? Please check out our latest work: Explainable Deep Relational Networks for Predicting Compound–Protein Affinities and Contacts.

  • More DeepAffinity variants (such as hierarchical RNN for protein amino-acid sequence and GCN/GIN for compound graphs)
  • Much more interpretable DeepAffinity+ with regularized and supervised attentions as well as DeepRelations with intrinsically explainable model architecture
  • Demonstration of how interpretability helps in drug discovery: binding site prediction, ligand docking, and structure activity relationship (SAR; such as ligand scoring and lead optimization)

We have not yet released that code, but we have already released the data labeled with both the affinity and the explanation of the affinity (binary residue-atom contacts).)

Training DeepAffinity: Illustration

Training-Process

Pre-requisite:

  • Tensorflow-gpu v1.1
  • Python 3.6
  • TFLearn v0.3
  • Scikit-learn v0.19
  • Anaconda 3/5.0.0.1
  • We have provided our environment list as environment.yml. You can create your own environment with:
conda env create -n envname -f environment.yml

Table of contents:

  • data_script: Contains the supervised-learning datasets (pIC50, pKi, pEC50, and pKd)
  • Seq2seq_models: Contains the auto-encoder seq2seq models and their data for both SPS and SMILES representations
  • baseline_models: Contains shallow models for both Pfam/PubChem features and features generated from the encoder part of the seq2seq model
  • Separate_models: Contains deep learning models for features generated from the encoder part of the seq2seq model
  • Joint_models: Contains all the joint models, including:
    • Separate attention mechanism
    • Marginalized attention mechanism
    • Joint attention mechanism
    • Graph convolution neural network (GCNN) with separate attention mechanism
  • (Update: Apr. 22, 2021) data_DeepRelations: A newly curated dataset for explainable prediction of compound-protein affinities, containing 4446 pairs between 3672 compounds and 1287 proteins, labeled with both inter-molecular affinity (pKd/pKi) and residue-atom contacts/interactions.

Testing the model

To test DeepAffinity on a new dataset, please follow the steps below:

  • Download the checkpoints trained based on training set of IC50 from the following link
  • Download the checkpoints trained based on the whole dataset of IC50 from the following link
  • Download the checkpoints trained based on the whole dataset of Kd from the following link
  • Download the checkpoints trained based on the whole dataset of Ki from the following link
  • Put your data in folder "Joint_models/joint_attention/testing/data"
  • cd Joint_models/joint_attention/testing/
  • Run the Python code joint-Model-test.py

You may use the script to run our model in one command. The details can be found in our manual (last updated: Apr. 9, 2020).

(Apr. 27, 2021) If the number of testing pairs in the input is below 64 (the batch size), the script returns an error (InvalidArgumentError ... ConcatOp : Dimensions of inputs should match: ...). Such rigidity is unfortunately due to TFLearn. An easy workaround is to "pad" the input file to at least 64 pairs using arbitrary compound-protein inputs.
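The padding workaround can be sketched in a few lines of Python; `pad_pairs` is an illustrative helper, not part of the repository:

```python
# Minimal sketch of the padding workaround: repeat an arbitrary real
# pair until the input holds at least 64 (the batch size) entries.
# `pad_pairs` is hypothetical, not a DeepAffinity script.
BATCH_SIZE = 64

def pad_pairs(lines, batch_size=BATCH_SIZE):
    """Pad a list of compound-protein input lines up to batch_size."""
    if not lines:
        raise ValueError("input file is empty")
    padded = list(lines)
    while len(padded) < batch_size:
        padded.append(lines[0])  # arbitrary filler pair
    return padded
```

The filler rows produce throwaway predictions, so remember to discard the padded tail of the output.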

(Aug. 21, 2020) We are now providing SPS (Structure Property-annotated Sequence) for all human proteins! zip (Credit: Dr. Tomas Babak at Queen's University). Columns:

  1. Gene identifier
  2. Protein FASTA
  3. SS (Scratch)
  4. SS8 (Scratch)
  5. acc (Scratch)
  6. acc20
  7. SPS
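A small Python sketch for reading that table, assuming a tab-separated file with the seven columns in the order listed above (the field names here are illustrative, not taken from the file):

```python
import csv

# Hypothetical loader for the human-protein SPS table; assumes
# tab-separated rows with the seven columns described above.
FIELDS = ["gene_id", "fasta", "ss", "ss8", "acc", "acc20", "sps"]

def load_sps_table(path):
    """Return one dict per protein, keyed by the column names above."""
    with open(path, newline="") as fh:
        return [dict(zip(FIELDS, row)) for row in csv.reader(fh, delimiter="\t")]
```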

P.S. Considering the distribution of protein sequence lengths in our training data, our trained checkpoints are recommended for proteins of lengths between tens and 1500.
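A quick length pre-filter along those lines; the lower bound is an assumption, since "tens" is not precise:

```python
# Sketch: screen input proteins by sequence length before testing.
# MIN_LEN is an assumed cutoff; the note above only says "tens".
MIN_LEN, MAX_LEN = 20, 1500

def within_supported_length(seq, lo=MIN_LEN, hi=MAX_LEN):
    """True if the protein length is in the range the checkpoints cover."""
    return lo <= len(seq) <= hi
```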

Re-training the seq2seq models for a new dataset:

(Added on Jan. 18, 2021) To re-train the seq2seq models for a new dataset, please follow the steps below:

  • Use the translate.py script in any of the seq2seq models with the following important arguments:
    • data_dir: data directory that includes all the data
    • train_dir: training directory where all the checkpoints will be saved
    • from_train_data: source training data to translate from
    • to_train_data: target training data to translate to (can be the same as from_train_data when auto-encoding, as we did in the paper)
    • from_dev_data: source validation data to translate from
    • to_dev_data: target validation data to translate to (can be the same as from_dev_data when auto-encoding, as we did in the paper)
    • num_layers: number of RNN layers (default 2)
    • batch_size: batch size (default 256)
    • num_train_step: number of training steps (default 100K)
    • size: size of the hidden dimension of the RNN models (default 256)
    • SPS_max_length (SMILE_max_length): maximum length of SPS (SMILES)
  • Suggestion for using seq2seq models:
    • For protein encoding: seq2seq_part_FASTA_attention_fw_bw
    • For compound encoding: seq2seq_part_SMILE_attention_fw_bw
  • Example of running for proteins: python translate.py --data_dir ./data --train_dir ./checkpoints --from_train_data ./data/FASTA_from.txt --to_train_data ./data/FASTA_to.txt --from_dev_data ./data/FASTA_from_dev.txt --to_dev_data ./data/FASTA_to_dev.txt --SPS_max_length 152
  • Once training is done, copy the parameter weight files cell_*.txt, embedding_W.txt, and *_layer_states.txt into joint_attention/joint_fixed_RNN/data/prot_init; they will be used in the next step, supervised training with the joint attention model (you can do the same for the separate and marginalized attention models as well)
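The copy step above can be scripted; here is a sketch, where the destination path follows the layout described in this README and `copy_seq2seq_weights` is an illustrative helper, not a repository script:

```python
import glob
import os
import shutil

# Sketch: copy the exported seq2seq weight files into the joint
# attention model's protein-init folder, as described above.
# `copy_seq2seq_weights` is illustrative, not part of the repository.
WEIGHT_PATTERNS = ["cell_*.txt", "embedding_W.txt", "*_layer_states.txt"]

def copy_seq2seq_weights(src, dst="joint_attention/joint_fixed_RNN/data/prot_init"):
    """Copy matching weight files from src to dst; return copied names."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for pattern in WEIGHT_PATTERNS:
        for path in glob.glob(os.path.join(src, pattern)):
            shutil.copy(path, dst)
            copied.append(os.path.basename(path))
    return sorted(copied)
```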

Note:

We recommend referring to PubChem for canonical SMILES for compounds.

Citation:

If you find the code useful for your research, please consider citing our paper:

@article{karimi2019deepaffinity,
  title={DeepAffinity: interpretable deep learning of compound--protein affinity through unified recurrent and convolutional neural networks},
  author={Karimi, Mostafa and Wu, Di and Wang, Zhangyang and Shen, Yang},
  journal={Bioinformatics},
  volume={35},
  number={18},
  pages={3329--3338},
  year={2019},
  publisher={Oxford University Press}
}

Contacts:

Yang Shen: [email protected]

Di Wu: [email protected]

Mostafa Karimi: [email protected]
