  • Stars: 182
  • Rank: 211,154 (top 5%)
  • Language: Jupyter Notebook
  • License: MIT License
  • Created over 2 years ago
  • Updated 2 months ago


Repository Details


GPN (Genomic Pre-trained Network)


Code and resources from the GPN and GPN-MSA papers.


Installation

pip install git+https://github.com/songlab-cal/gpn.git

Minimal usage

import gpn.model  # registers the GPN model classes with transformers
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("songlab/gpn-brassicales")
# or, for the alignment-based model:
model = AutoModelForMaskedLM.from_pretrained("songlab/gpn-msa-sapiens")
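As a quick check that the model loads and predicts, here is a minimal sketch of masked-nucleotide prediction with the single-sequence model. It assumes the DNA tokenizer referenced in the training command further below (gonzalobenegas/tokenizer-dna-mlm); adjust the names to match the checkpoint you actually use.

import torch
import gpn.model  # registers the GPN model classes with transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gonzalobenegas/tokenizer-dna-mlm")
model = AutoModelForMaskedLM.from_pretrained("songlab/gpn-brassicales")
model.eval()

# Tokenize a short DNA sequence and mask one position.
seq = "ACGTACGTACGTACGTACGTACGTACGTACGT"
inputs = tokenizer(seq, return_tensors="pt")
pos = 16  # arbitrary position to mask
inputs["input_ids"][0, pos] = tokenizer.mask_token_id

# Predict the most likely nucleotide at the masked position.
with torch.no_grad():
    logits = model(**inputs).logits
print(tokenizer.decode(logits[0, pos].argmax()))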

GPN

Also known as GPN-SS (single sequence), to distinguish it from the alignment-based GPN-MSA below.

Examples

  • Play with the model: examples/ss/basic_example.ipynb (also runnable in Colab)

Code and resources from specific papers

Training on your own data

  1. Snakemake workflow to create a dataset
    • Can automatically download data from NCBI given a list of accessions, or use your own FASTA files.
  2. Training
    • Automatically detects all available GPUs.
    • Tracks metrics on Weights & Biases.
    • Implemented models: ConvNet, GPNRoFormer (Transformer).
    • Config overrides can be specified, e.g. --config_overrides n_layers=30.
    • Example:
WANDB_PROJECT=your_project torchrun --nproc_per_node=$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') -m gpn.ss.run_mlm --do_train --do_eval \
    --fp16 --report_to wandb --prediction_loss_only True --remove_unused_columns False \
    --dataset_name results/dataset --tokenizer_name gonzalobenegas/tokenizer-dna-mlm \
    --soft_masked_loss_weight_train 0.1 --soft_masked_loss_weight_evaluation 0.0 \
    --weight_decay 0.01 --optim adamw_torch \
    --dataloader_num_workers 16 --seed 42 \
    --save_strategy steps --save_steps 10000 --evaluation_strategy steps \
    --eval_steps 10000 --logging_steps 10000 --max_steps 120000 --warmup_steps 1000 \
    --learning_rate 1e-3 --lr_scheduler_type constant_with_warmup \
    --run_name your_run --output_dir your_output_dir --model_type ConvNet \
    --per_device_train_batch_size 512 --per_device_eval_batch_size 512 --gradient_accumulation_steps 1 \
    --torch_compile
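The $(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') expression above simply counts the GPUs listed in CUDA_VISIBLE_DEVICES. On a single-GPU machine, launching the same module with plain python should also work (a sketch, reusing the names from the example above; pass the same remaining flags):

WANDB_PROJECT=your_project python -m gpn.ss.run_mlm --do_train --do_eval \
    --dataset_name results/dataset --tokenizer_name gonzalobenegas/tokenizer-dna-mlm \
    --model_type ConvNet --run_name your_run --output_dir your_output_dir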
  3. Extract embeddings
    • Input file requires columns chrom, start, end.
    • Example:
torchrun --nproc_per_node=$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') -m gpn.ss.get_embeddings windows.parquet genome.fa.gz 100 your_output_dir \
    results.parquet --per-device-batch-size 4000 --is-file --dataloader-num-workers 16
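The input file is just a table of genomic windows. Here is a sketch of creating it with pandas and inspecting the output; the column names follow the requirement above, and the window length of 100 is an assumption chosen to match the positional argument in the command:

import pandas as pd

# Build a minimal windows file with the required columns.
windows = pd.DataFrame({
    "chrom": ["1", "1", "2"],
    "start": [1000, 2000, 3000],
})
windows["end"] = windows["start"] + 100  # assumed window length
windows.to_parquet("windows.parquet", index=False)

# After running gpn.ss.get_embeddings, load the resulting embeddings.
embeddings = pd.read_parquet("results.parquet")
print(embeddings.shape)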
  4. Variant effect prediction
    • Input file requires columns chrom, pos, ref, alt.
    • Example:
torchrun --nproc_per_node=$(echo $CUDA_VISIBLE_DEVICES | awk -F',' '{print NF}') -m gpn.ss.run_vep variants.parquet genome.fa.gz 512 your_output_dir results.parquet \
    --per-device-batch-size 4000 --is-file --dataloader-num-workers 16
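Similarly for variants, a sketch of preparing the input table and reading back the scores; chrom, pos, ref, alt are the columns required above, and 1-based positions are an assumption:

import pandas as pd

# Build a minimal variants file with the required columns.
variants = pd.DataFrame({
    "chrom": ["1", "2"],
    "pos": [12345, 67890],  # assumed 1-based
    "ref": ["A", "C"],
    "alt": ["G", "T"],
})
variants.to_parquet("variants.parquet", index=False)

# After running gpn.ss.run_vep, load the per-variant scores.
scores = pd.read_parquet("results.parquet")
print(scores.head())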

GPN-MSA

Examples

  • Play with the model: examples/msa/basic_example.ipynb
  • Variant effect prediction: examples/msa/vep.ipynb
  • Training (human): examples/msa/training.ipynb

Code and resources from specific papers

Training on other species (e.g. plants)

Under construction.

Citation

GPN:

@article{benegas2023dna,
    author = {Gonzalo Benegas and Sanjit Singh Batra and Yun S. Song},
    title = {DNA language models are powerful predictors of genome-wide variant effects},
    journal = {Proceedings of the National Academy of Sciences},
    volume = {120},
    number = {44},
    pages = {e2311219120},
    year = {2023},
    doi = {10.1073/pnas.2311219120},
    URL = {https://www.pnas.org/doi/abs/10.1073/pnas.2311219120},
    eprint = {https://www.pnas.org/doi/pdf/10.1073/pnas.2311219120},
}

GPN-MSA:

@article{benegas2023gpnmsa,
    author = {Gonzalo Benegas and Carlos Albors and Alan J. Aw and Chengzhong Ye and Yun S. Song},
    title = {GPN-MSA: an alignment-based DNA language model for genome-wide variant effect prediction},
    elocation-id = {2023.10.10.561776},
    year = {2023},
    doi = {10.1101/2023.10.10.561776},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2023/10/11/2023.10.10.561776},
    eprint = {https://www.biorxiv.org/content/early/2023/10/11/2023.10.10.561776.full.pdf},
    journal = {bioRxiv}
}

More Repositories

  1. tape (Python, 651 stars)
     Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology.
  2. tape-neurips2019 (Python, 118 stars)
     Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology. (DEPRECATED)
  3. factored-attention (Jupyter Notebook, 56 stars)
     Code for reproducing results in the paper "Interpreting Potts and Transformer Protein Models Through the Lens of Simplified Attention".
  4. mogwai (Python, 25 stars)
  5. rna-sieve (Mathematica, 19 stars)
     A library for the deconvolution of bulk cell samples using single-cell RNA expression data.
  6. CPT (Jupyter Notebook, 17 stars)
     Cross-protein transfer learning for variant effect prediction.
  7. scquint (Jupyter Notebook, 15 stars)
  8. swga2 (Python, 13 stars)
  9. CherryML (Python, 8 stars)
     Scalable maximum likelihood estimation of phylogenetic models.
  10. scquint-analysis (Python, 6 stars)
  11. MultiCluster (MATLAB, 4 stars)
      Software for three-way clustering of multi-tissue, multi-individual gene expression data using semi-nonnegative tensor decomposition.
  12. contact-geometry (Python, 4 stars)
  13. MOCHIS (Mathematica, 3 stars)
      One- and two-sample tests against general families of alternatives.
  14. HiDENSEC (Jupyter Notebook, 2 stars)
      Code for "Tracing cancer evolution and heterogeneity using Hi-C".
  15. flinty (Python, 2 stars)
      A Simple and Flexible Test of Sample Exchangeability with Applications to Statistical Genomics.
  16. slc22a5 (Jupyter Notebook, 1 star)
      Variant effect prediction for the SLC22A5 transporter gene using Potts models.
  17. EGGTART (1 star)
      Extensive GUI gives TASEP realization in real-time.