ATAC-seq and DNase-seq processing pipeline

This pipeline has been deprecated as of June 2018. Please update your pipelines to the WDL-based pipeline at https://github.com/ENCODE-DCC/atac-seq-pipeline

ATAC-Seq / DNase-Seq Pipeline

This pipeline is designed for automated end-to-end quality control and processing of ATAC-seq or DNase-seq data. It can be run on compute clusters with job submission engines or on standalone machines, and it inherently makes use of parallelized/distributed computing. Installation is easy because most dependencies are installed automatically. The pipeline can be run end-to-end, i.e. starting from raw FASTQ files all the way to peak calling and signal track generation, or it can be started from intermediate stages (e.g. alignment files). It supports single-end or paired-end ATAC-seq or DNase-seq data, with or without replicates. The pipeline produces detailed HTML reports that include quality control measures specifically designed for ATAC-seq and DNase-seq data, analysis of reproducibility, stringent and relaxed thresholding of peaks, and fold-enrichment and p-value signal tracks. It also supports detailed error reporting and easy resuming of runs. The pipeline has been tested on human, mouse and yeast ATAC-seq data and on human and mouse DNase-seq data.

The ATAC-seq pipeline specification is also the official pipeline specification of the Encyclopedia of DNA Elements (ENCODE) consortium. The ATAC-seq pipeline protocol definition is here. Some parts of the ATAC-seq pipeline were developed in collaboration with Jason Buenrostro, Alicia Schep and Will Greenleaf at Stanford.

The DNase-seq pipeline specification is here. Note that this is NOT the same as the official ENCODE DNase-seq pipeline (which is based on John Stam lab's processing pipeline).

Installation

Install the software and genome database below in order, according to your system. For example, on Kundaje lab clusters you only need to complete the Pipeline step.

Java

Install Java 8 (jdk >= 1.8 or jre >= 1.8) on your system. If you don't have super-user privileges on your system, locally install it and add it to your $PATH.

  • For Debian/Ubuntu (>14.10) based Linux:

    $ sudo apt-get install git openjdk-8-jre
    
  • For Fedora/Red-Hat based Linux:

    $ sudo yum install git java-1.8.0-openjdk
    
  • For Ubuntu 14.04 (trusty):

    $ sudo add-apt-repository ppa:webupd8team/java -y
    $ sudo apt-get update
    $ sudo apt-get install oracle-java8-installer
    

Conda

REMOVE ANY ANACONDA OR OTHER VERSIONS OF CONDA FROM YOUR BASH STARTUP SCRIPT. WE CANNOT GUARANTEE THAT THE PIPELINE WORKS WITH OTHER VERSIONS OF CONDA. ALSO REMOVE R AND OTHER CONFLICTING MODULES. Remove any other Anaconda from your $PATH. Check your loaded modules with $ module list and unload any Anaconda modules in your bash startup scripts ($HOME/.bashrc or $HOME/.bash_profile). Add unset PYTHONPATH to your bash startup scripts.
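For example, on a cluster that uses environment modules, the cleanup might look like the following sketch (the module name is illustrative; unload whatever Anaconda module your cluster provides):

$ module list                                # check for loaded Anaconda/R modules
$ module unload anaconda                     # unload any Anaconda module
$ echo 'unset PYTHONPATH' >> $HOME/.bashrc   # keep a conflicting PYTHONPATH out of the pipeline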

Install the latest Miniconda3 on your system. IMPORTANT: Make sure that the absolute path of the destination directory is short. A long path will cause an error in the dependency installation step (issue #8).

$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh

Answer yes to the final question. If you choose no, you will need to manually add Miniconda3 to your $HOME/.bashrc.

Do you wish the installer to prepend the Miniconda3 install location
to PATH in your /your/home/.bashrc ? [yes|no]
[no] >>> yes

Open a new terminal after installation.

BigDataScript

Install BigDataScript v0.99999e (forked) on your system. The original BDS v0.99999e does not work correctly with the pipeline (see PR #142 and issue #131).

$ wget https://github.com/leepc12/BigDataScript/blob/master/distro/bds_Linux.tgz?raw=true -O bds_Linux.tgz
$ mv bds_Linux.tgz $HOME && cd $HOME && tar zxvf bds_Linux.tgz

Add export PATH=$PATH:$HOME/.bds to your $HOME/.bashrc. If a Java memory error occurs, also add export _JAVA_OPTIONS="-Xms256M -Xmx728M -XX:ParallelGCThreads=1".
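For reference, the lines to add to $HOME/.bashrc would look like the following (the second line is only needed if you hit Java memory errors):

export PATH=$PATH:$HOME/.bds
export _JAVA_OPTIONS="-Xms256M -Xmx728M -XX:ParallelGCThreads=1"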

Pipeline

Get the latest version of the pipeline. Don't forget to add --recursive: the ATAC-seq pipeline uses modules from an external git repository (ataqc), and ATAQC will not work correctly without --recursive.

$ git clone https://github.com/kundajelab/atac_dnase_pipelines --recursive

Dependencies

Install software dependencies automatically. This creates two conda environments (bds_atac and bds_atac_py3) under your Miniconda3 installation.

$ bash install_dependencies.sh

If you don't use install_dependencies.sh, manually copy the pipeline's bds.config and the ./utils/bds_scr script into $HOME/.bds:

$ cp bds.config ./utils/bds_scr $HOME/.bds

If install_dependencies.sh fails, run ./uninstall_dependencies.sh, fix problems and then try bash install_dependencies.sh again.

Genome data

Install genome data for a specific genome [GENOME]. Currently hg19, mm9, hg38 and mm10 are available. Specify a directory [DATA_DIR] to download genome data into. A species file generated in [DATA_DIR] will be automatically added to your ./default.env so that the pipeline knows you have installed genome data using install_genome_data.sh. If you want to install multiple genomes, make sure that you use the same directory [DATA_DIR] for all of them. Genome data for each genome will be installed under [DATA_DIR]/[GENOME]. If you use other BDS pipelines, it is recommended to use the same directory [DATA_DIR] to save disk space.

IMPORTANT: install_genome_data.sh can take more than an hour to download data and build indices. DO NOT run the script on a login node; use qlogin for SGE and srun --pty bash for SLURM.

# DO NOT run this on a login node
$ bash install_genome_data.sh [GENOME] [DATA_DIR]

If you have super-user privileges on your system, it is recommended to install genome data under /your/data/bds_pipeline_genome_data and share it with others.

$ sudo su
$ bash install_genome_data.sh [GENOME] /your/data/bds_pipeline_genome_data

You can find a species file [SPECIES_FILE] in /your/data/bds_pipeline_genome_data for each pipeline type. Others can then use the genome data by adding -species_file [SPECIES_FILE_PATH] to the pipeline command line, or by adding species_file = [SPECIES_FILE_PATH] to the [default] section in their ./default.env.
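For example, a [default] section pointing at a shared species file might look like the following sketch (the path is illustrative):

[default]
species_file = /your/data/bds_pipeline_genome_data/bds_atac_species.conf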

Installation for internet-free computers

The pipeline itself does not need an internet connection, but the installers (install_dependencies.sh and install_genome_data.sh) do. The workaround is to first install dependencies and genome data on a computer that is connected to the internet, and then move the Conda and genome database directories to your internet-free computer. Both computers should have THE SAME LINUX VERSION.

  • On your computer that has an internet access,

    • Follow the installation instruction for general computers
    • Move your Miniconda3 directory to $HOME/miniconda3 on your internet-free computer.
    • Move your genome database directory, which contains bds_atac_species.conf and one directory per species, to $HOME/genome_data on your internet-free computer.
    • Move your BDS directory $HOME/.bds to $HOME/.bds on your internet-free computer.
    • Move your pipeline directory atac_dnase_pipelines/ to $HOME/atac_dnase_pipelines/ on your internet-free computer.
  • On your internet-free computer,

    • Add your miniconda3/bin and BDS binary to $PATH in your bash initialization script ($HOME/.bashrc or $HOME/.bash_profile).

      export PATH="$PATH:$HOME/miniconda3/bin"
      export PATH="$PATH:$HOME/.bds"
      
    • Modify [default] section in $HOME/atac_dnase_pipelines/default.env.

      [default]
      conda_bin_dir=$HOME/miniconda3/bin
      species_file=$HOME/genome_data/bds_atac_species.conf
      
  • Modify all paths in $HOME/genome_data/bds_atac_species.conf so that they correctly point to the right files.

  • Check BDS version.

    $ bds -version
    Bds 0.99999e (build 2016-08-26 06:34), by Pablo Cingolani
    
  • Make sure that your Java runtime version is >= 1.8.

    $ java -version
    java version "1.8.0_111"
    Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
    Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
    

Usage

We recommend using BASH to run pipelines.

For general use with a paired-end (PE) data set, use the following command line:

$ bds atac.bds -species [SPECIES; hg19, mm9, ... ] -nth [NUM_THREADS] -fastq1_1 [READ1] -fastq1_2 [READ2]

For DNase-seq: (it's NOT -dnase!)

-dnase_seq

The pipeline does not trim adapters by default. To automatically detect adapters,

-auto_detect_adapter

To specify an adapter for each fastq, add -adapter[REPLICATE_ID]_[PAIR_ID]:[POOL_ID] for a paired-end dataset and -adapter[REPLICATE_ID]:[POOL_ID] for a single-ended one. You can skip :[POOL_ID] if you have a single fastq per replicate (SE) or a single pair of fastqs per replicate (PE).
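As an illustration, for a paired-end replicate 1 pooled from two sets of fastqs, the adapter flags would follow this pattern (the placeholder names are illustrative):

-adapter1_1:1 [ADAPTER_REP1_PAIR1_POOL1] -adapter1_2:1 [ADAPTER_REP1_PAIR2_POOL1] -adapter1_1:2 [ADAPTER_REP1_PAIR1_POOL2] -adapter1_2:2 [ADAPTER_REP1_PAIR2_POOL2]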

IMPORTANT! If your data set is SINGLE-ENDED, add the following to the command line; otherwise the pipeline assumes PE by default.

-se 

You can also individually specify endedness for each replicate.

-se[REPLICATE_ID] 	# for exp. replicates, 
-se1 -pe2 -se3 ...

If you want to just align your data (no peak calling or further steps like IDR), add:

-align

If you don't want ATAQC, add the following to the command line.

-no_ataqc 

If you have just one replicate (PE), specify fastqs with -fastq[REP_ID]_[PAIR_ID].

-fastq1_1 [READ_PAIR1] -fastq1_2 [READ_PAIR2] \

For multiple replicates (PE), specify fastqs with -fastq[REP_ID]_[PAIR_ID]. Add -fastq[REP_ID]_[PAIR_ID] for each replicate and pair to the command line:

-fastq1_1 [READ_REP1_PAIR1] -fastq1_2 [READ_REP1_PAIR2] -fastq2_1 [READ_REP2_PAIR1] -fastq2_2 [READ_REP2_PAIR2] ...

For multiple replicates (SE), specify fastqs with -fastq[REP_ID]:

-se -fastq1 [READ_REP1] -fastq2 [READ_REP2] ...

You can also specify an adapter to be trimmed for each fastq. Example: -adapter1_1 [ADAPTER1_1] -adapter1_2 [ADAPTER1_2] ... for PE or -adapter1 [ADAPTER1] -adapter2 [ADAPTER2] ....

You can start from bam files. There are two kinds of bam files (raw or deduped), and you need to explicitly choose between a raw bam (bam) and a deduped one (filt_bam) with -input [BAM_TYPE]. Don't forget to add -se if they are not paired-end (PE). For raw bams:

-bam1 [RAW_BAM_REP1] -bam2 [RAW_BAM_REP2] ...

For deduped (filtered) bams:

-filt_bam1 [FILT_BAM_REP1] -filt_bam2 [FILT_BAM_REP2] ...

For tagaligns (non-tn5-shifted):

-tag1 [TAGALIGN_REP1] -tag2 [TAGALIGN_REP2] ...

You can also mix up any data types.

-bam1 [RAW_BAM_REP1] -tag2 [TAGALIGN_REP2] ...
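Putting this together, a full command line starting from raw bams might look like the following sketch (file names and thread count are illustrative):

$ bds atac.bds -species hg19 -nth 4 -input bam -bam1 rep1.bam -bam2 rep2.bam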

To subsample beds (tagaligns) add the following to the command line. This is different from subsampling for cross-corr. analysis. Peaks will be called with subsampled tagaligns.

-subsample [NO_READS_TO_SUBSAMPLE]

To change the number of reads to subsample for cross-correlation analysis (this will not affect downstream tasks like peak calling and IDR):

-nreads [NO_READS_TO_SUBSAMPLE]

To disable pseudo-replicate generation, add the following. By default, peak calling and IDR are done for both true replicates and pseudo replicates; with -true_rep in the command line they are done for true replicates only.

-true_rep

IDR analysis is not performed by default. If there are more than two replicates, IDR will be done for every pair of replicates. To enable IDR, add the following:

-enable_idr

To allow multimapping reads (multimapping is disabled by default), add the following. Using this parameter implies -mapq_thresh 30 (the MAPQ threshold for the pipeline is fixed at 30).

-multimapping [MULTIMAPPING; use 4 for ENCODE samples]

To force the ENCODE3 parameter set (-smooth_win 73 -idr_thresh 0.05 -multimapping 4), add:

-ENCODE3

You can also define parameters in a configuration file. Key names in a configuration file are identical to parameter names on the command line.

$ bds atac.bds [CONF_FILE]

$ cat [CONF_FILE]
species  = [SPECIES; hg19, mm9, ...]
nth      = [NUM_THREADS]
fastq1_1 = [READ1]
fastq1_2 = [READ2]
...

List of all parameters

To list all parameters and default values for them,

$ bds atac.bds

Stopping / Resuming pipeline

Press Ctrl + C in the terminal or send any kind of kill signal to the process. Please note that this will delete all intermediate files and incomplete outputs of the running tasks. The pipeline automatically determines whether each task has finished (by comparing timestamps of the input/output files for each task). To restart from the point of failure, correct the error first and then run the pipeline with the same command that you started it with. There is no additional parameter for restarting the pipeline.
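A minimal sketch of resuming a run (the fastq names are illustrative):

$ bds atac.bds -species hg19 -nth 4 -fastq1_1 rep1_R1.fastq.gz -fastq1_2 rep1_R2.fastq.gz   # run fails partway
# fix the error, then rerun the exact same command; finished tasks are detected and skipped
$ bds atac.bds -species hg19 -nth 4 -fastq1_1 rep1_R1.fastq.gz -fastq1_2 rep1_R2.fastq.gz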

Running pipelines with a cluster engine

On servers with a cluster engine (such as Sun Grid Engine or SLURM), DO NOT QSUB/SBATCH THE BDS COMMAND LINE. Run the BDS command directly on a login node. BDS is a task manager and will automatically submit (qsub/sbatch) and manage its subtasks.

IMPORTANT! Please read this section carefully if you run pipelines on Stanford SCG4 and Sherlock cluster.

Most clusters have a policy limiting the number of threads and the memory per user on a login node. One BDS process, as a Java-based task manager, takes up to 1 GB of memory and 50 threads even though it just submits and monitors subtasks. So if you want to run more than 50 pipelines in parallel, your cluster will kill the BDS processes due to the per-user resource limits on the login node (check them with ulimit -a). For example, 50 pipelines take 50 GB of memory and 2500 threads for their 50 BDS processes. The workaround is to create an interactive node to keep all BDS processes alive. Such an interactive node must have a walltime long enough for all pipelines in it to finish. The recommended resource setting is 0.5 GB of memory per pipeline.

SGE example to make an interactive node for 100 pipelines: 1 cpu, 100GB memory, 3 days walltime.

$ qlogin -l h_rt=72:00:00 -l h_vmem=100G

SLURM example to make an interactive node for 100 pipelines: 1 cpu, 100GB memory, 3 days walltime.

$ srun -n 1 --mem 100G -t 3-0 -p [YOUR_PARTITION] --qos normal --pty /bin/bash -i -l

Once you get an interactive node, repeat the following commands for each sample to run a pipeline using bds_scr. Add -q_for_slurm_account to the command line to make -q specify a SLURM account (sbatch --account) instead of a partition (sbatch -p).

$ cd [WORK_DIR]
$ bds_scr [SCREEN_NAME] [LOG_FILE_PATH] atac.bds -system [CLUSTER_ENGINE: slurm or sge] -q [SGE_QUEUE_OR_SLURM_PARTITION] -nth [MAX_NUM_THREAD_PER_PIPELINE] ...
$ sleep 2 # wait for 2 seconds for safety

Then you can monitor your pipelines with screen -ls and tail -f [LOG_FILE_PATH]. If you want to run more than 200 pipelines, you may want to make multiple interactive nodes and distribute your samples among them.
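In practice, monitoring looks like this:

$ screen -ls                  # list running pipeline screens
$ tail -f [LOG_FILE_PATH]     # follow the log of one pipeline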

Parallelization

For completely serialized jobs, add -no_par to the command line. Individual tasks can still go multi-threaded.

IMPORTANT! You can set up a limit for total number of threads with -nth [MAX_TOTAL_NO_THREADS]. Total number of threads used by a pipeline will not exceed this limit.

Default -nth for each cluster is defined on ./default.env (e.g. 16 on SCG and 8 on Kundaje lab cluster)

The pipeline automatically distributes [MAX_TOTAL_NO_THREADS] threads among jobs according to the sizes of their input files. For example, for two fastqs (1 GB and 2 GB) with -nth 6, 2 and 4 threads are allocated for aligning the 1 GB and 2 GB fastqs, respectively. The same policy applies to other multi-threaded tasks such as deduping and peak calling.

However, all multi-threaded tasks (like bwa, bowtie2, spp and macs2) still have their own max. memory (-mem_APPNAME [MEM_APP]) and walltime (-wt_APPNAME [WALLTIME_APP]) settings. Max. memory is NOT PER CPU. For example, on the Kundaje lab cluster (with the SGE flag activated: bds -s sge bds_atac.bds ...) or on SCG4, the actual shell command submitted by BDS for each task looks like the following:

qsub -V -pe shm [NTH_ALLOCATED_FOR_APP] -h_vmem=[MEM_APP]/[NTH_ALLOCATED_FOR_APP] -h_rt=[WALLTIME_APP] -s_rt=[WALLTIME_APP] ...

This ensures that the total memory reserved for a cluster job equals [MEM_APP]. The same policy applies to SLURM.

Specifying a cluster queue/partition

You can specify a queue [QUEUE_NAME] on Sun Grid Engine or a partition/account on SLURM, but you cannot specify both an account and a partition at the same time for SLURM. You can skip -q_for_slurm_account on the Stanford SCG cluster since the pipeline will automatically detect SCG servers and add it.

bds atac.bds -system sge -q [SGE_QUEUE_NAME] ...
bds atac.bds -system slurm -q [SLURM_PARTITION_NAME] ... # Sherlock example
bds atac.bds -system slurm -q_for_slurm_account -q [SLURM_ACCOUNT_NAME] ... # SCG example

Managing multiple pipelines

./utils/bds_scr is a BASH script to create a detached screen for a BDS script and redirect stdout/stderr to a log file [LOG_FILE_NAME]. If a log file already exists, stdout/stderr will be appended to it.

Monitor the pipeline with tail -f [LOG_FILE_NAME]. The only difference between bds_scr and bds is that you have [SCR_NAME] [LOG_FILE_NAME] between bds command and its parameters (or a BDS script name).

You can skip [LOG_FILE_NAME]; in that case a log file [SCR_NAME].log will be generated in the working directory.

You can also add any BDS parameters (like -dryRun, -d and -s). The following example is for running a pipeline on Sun Grid Engine.

$ bds_scr [SCR_NAME] [LOG_FILE_NAME] atac.bds ...
$ bds_scr [SCR_NAME] atac.bds ...
$ bds_scr [SCR_NAME] -s sge atac.bds ...

Once the pipeline run is done, the screen will be automatically closed. To kill a pipeline manually while it's running, use ./utils/kill_scr or screen -X quit:

$ screen -X -S [SCR_NAME] quit
$ kill_scr [SCR_NAME]

Java issues (memory and temporary directory)

Picard tools is used for marking duplicate reads and is based on Java. If you see any Java heap space errors, increase the memory limit for Java with -mem_ataqc [MEM] (default: 20G) and -mem_dedup [MEM] (default: 12G).
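For example, to raise both limits above their defaults (the values here are illustrative):

$ bds atac.bds ... -mem_dedup 16G -mem_ataqc 24G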

If your /tmp quickly fills up and you want to change the temporary directory for all Java apps in the pipeline, add the following line to your bash startup script ($HOME/.bashrc). The pipeline uses $TMPDIR (not $TMP) for all Java apps.

export TMPDIR=/your/temp/dir/

Another quick workaround for Java issues is to avoid Picard tools in the pipeline: add -use_sambamba_markdup to your command line to use sambamba markdup instead of picard markdup.

How to customize genome data installer?

Please refer to the section Installer for genome data on BDS pipeline programming.

Useful HTML reports

There are two kinds of HTML reports provided by the pipeline.

  • BigDataScript HTML report for debugging: Located in the working folder with the name atac_[TIMESTAMP]_report.html. This report is automatically generated by BigDataScript and is useful for debugging since it shows a summary, timeline, stdout and stderr for each job.

  • ATAC-Seq pipeline report for QC and results: The pipeline automatically generates an HTML report (Report.html) in its output directory (specified with -out_dir, default ./out). It summarizes files and the directory structure, includes QC reports, and shows a workflow diagram and genome browser tracks for peaks and signals (bigwigs for p-value and fold change). Move your output directory to a web directory (for example, /var/www/somewhere) or make a softlink to it from a web directory. For genome browser tracks, specify the URL of your web directory root for your output while keeping its structure. Make sure that you have bgzip and tabix installed on your system. Add the following to the command line:

    -url_base http://your/url/to/output -title [PREFIX_FOR_YOUR_REPORT]
    

Output directory structure and file naming

For more details, refer to the file table section in an HTML report generated by the pipeline. Files marked as (E) are outputs to be uploaded during ENCODE accession.

out                               # root dir. of outputs
│
├ *report.html                    #  HTML report
├ *tracks.json                    #  Tracks datahub (JSON) for WashU browser
├ ENCODE_summary.json             #  Metadata of all datafiles and QC results
│
├ align                           #  mapped alignments
│ ├ rep1                          #   for true replicate 1
│ │ ├ *.trim.fastq.gz             #    adapter-trimmed fastq
│ │ ├ *.bam                       #    raw bam
│ │ ├ *.nodup.bam (E)             #    filtered and deduped bam
│ │ ├ *.tagAlign.gz               #    tagAlign (bed6) generated from filtered bam
│ │ ├ *.tn5.tagAlign.gz           #    TN5 shifted tagAlign for ATAC pipeline (not for DNase pipeline)
│ │ └ *.*M.tagAlign.gz            #    subsampled tagAlign for cross-corr. analysis
│ ├ rep2                          #   for true replicate 2
│ ...
│ ├ pooled_rep                    #   for pooled replicate
│ ├ pseudo_reps                   #   for self pseudo replicates
│ │ ├ rep1                        #    for replicate 1
│ │ │ ├ pr1                       #     for self pseudo replicate 1 of replicate 1
│ │ │ ├ pr2                       #     for self pseudo replicate 2 of replicate 1
│ │ ├ rep2                        #    for replicate 2
│ │ ...
│ └ pooled_pseudo_reps            #   for pooled pseudo replicates
│   ├ ppr1                        #    for pooled pseudo replicate 1 (rep1-pr1 + rep2-pr1 + ...)
│   └ ppr2                        #    for pooled pseudo replicate 2 (rep1-pr2 + rep2-pr2 + ...)
│
├ peak                             #  peaks called
│ └ macs2                          #   peaks generated by MACS2
│   ├ rep1                         #    for replicate 1
│   │ ├ *.narrowPeak.gz            #     narrowPeak (p-val threshold = 0.01)
│   │ ├ *.filt.narrowPeak.gz (E)   #     blacklist filtered narrowPeak
│   │ ├ *.narrowPeak.bb (E)        #     narrowPeak bigBed
│   │ ├ *.narrowPeak.hammock.gz    #     narrowPeak track for WashU browser
│   │ ├ *.pval0.1.narrowPeak.gz    #     narrowPeak (p-val threshold = 0.1)
│   │ └ *.pval0.1.*K.narrowPeak.gz #     narrowPeak (p-val threshold = 0.1) with top *K peaks
│   ├ rep2                         #    for replicate 2
│   ...
│   ├ pseudo_reps                          #   for self pseudo replicates
│   ├ pooled_pseudo_reps                   #   for pooled pseudo replicates
│   ├ overlap                              #   naive-overlapped peaks
│   │ ├ *.naive_overlap.narrowPeak.gz      #     naive-overlapped peak
│   │ └ *.naive_overlap.filt.narrowPeak.gz #     naive-overlapped peak after blacklist filtering
│   └ idr                           #   IDR thresholded peaks
│     ├ true_reps                   #    for true replicates
│     │ ├ *.narrowPeak.gz           #     IDR thresholded narrowPeak
│     │ ├ *.filt.narrowPeak.gz (E)  #     IDR thresholded narrowPeak (blacklist filtered)
│     │ └ *.12-col.bed.gz           #     IDR thresholded narrowPeak track for WashU browser
│     ├ pseudo_reps                 #    for self pseudo replicates
│     │ ├ rep1                      #     for replicate 1
│     │ ...
│     ├ optimal_set                 #    optimal IDR thresholded peaks
│     │ └ *.filt.narrowPeak.gz (E)  #     IDR thresholded narrowPeak (blacklist filtered)
│     ├ conservative_set            #    conservative IDR thresholded peaks
│     │ └ *.filt.narrowPeak.gz (E)  #     IDR thresholded narrowPeak (blacklist filtered)
│     └ pooled_pseudo_reps          #    for pooled pseudo replicates
│
├ qc                              #  QC logs
│ ├ *IDR_final.qc                 #   Final IDR QC
│ ├ rep1                          #   for true replicate 1
│ │ ├ *.align.log                 #    Bowtie2 mapping stat log
│ │ ├ *.dup.qc                    #    Picard (or sambamba) MarkDuplicate QC log
│ │ ├ *.pbc.qc                    #    PBC QC
│ │ ├ *.nodup.flagstat.qc         #    Flagstat QC for filtered bam
│ │ ├ *M.cc.qc                    #    Cross-correlation analysis score for tagAlign
│ │ ├ *M.cc.plot.pdf/png          #    Cross-correlation analysis plot for tagAlign
│ │ └ *_qc.html/txt               #    ATAQC report
│ ...
│
├ signal                          #  signal tracks
│ ├ macs2                         #   signal tracks generated by MACS2
│ │ ├ rep1                        #    for true replicate 1
│ │ │ ├ *.pval.signal.bigwig (E)  #     signal track for p-val
│ │ │ └ *.fc.signal.bigwig   (E)  #     signal track for fold change
│ ...
│ └ pooled_rep                    #   for pooled replicate
│
├ report                          # files for HTML report
└ meta                            # text files containing md5sum of output files and other metadata

ENCODE accession guideline

For each pipeline run, an ENCODE_summary.json file is generated under the output directory (-out_dir) for ENCODE accession (uploading pipeline outputs to the ENCODE portal). This JSON file includes all metadata and QC metrics required for ENCODE accession.

For ENCODE3, please make sure that you run pipelines with the -ENCODE3 flag.

Parameters required for ENCODE accession:

# required
        -ENCODE_accession <string>       : ENCODE experiment accession ID (or dataset).
        -ENCODE_award <string>           : ENCODE award (e.g. /awards/U41HG007000/).
        -ENCODE_lab <string>             : Lab (e.g. /labs/anshul-kundaje/)
        -ENCODE_assembly <string>        : hg19, GRCh38, mm9, mm10.
        -ENCODE_alias_prefix <string>    : Alias = Alias_prefix + filename
# optional
        -ENCODE_award_rfa <string>       : ENCODE award RFA (e.g. ENCODE3).
        -ENCODE_assay_category <string>  : ENCODE assay category.
        -ENCODE_assay_title <string>     : ENCODE assay title.

We also provide an ENCODE fastq downloader. It downloads fastqs matching award_rfa, assay_category and assay_title, and then automatically generates a shell script to run multiple pipelines. The generated script also includes this set of ENCODE accession parameters.
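Putting the accession parameters together with -ENCODE3, a command line might look like the following sketch (all values are illustrative):

$ bds atac.bds -species hg19 -ENCODE3 \
    -ENCODE_accession [ENCODE_EXPERIMENT_ACCESSION_ID] \
    -ENCODE_award /awards/U41HG007000/ \
    -ENCODE_lab /labs/anshul-kundaje/ \
    -ENCODE_assembly hg19 \
    -ENCODE_alias_prefix [ALIAS_PREFIX] \
    -fastq1_1 [READ1] -fastq1_2 [READ2]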

ENCODE accession spreadsheet (CSV) generation

./utils/parse_summary_ENCODE_accession_recursively.py recursively finds ENCODE_summary.json files and parses them to generate one big CSV spreadsheet for ENCODE accession.

$ python ./utils/parse_summary_ENCODE_accession_recursively.py -h

usage: ENCODE_summary.json parser for ENCODE accession [-h]
                                                       [--out-file OUT_FILE]
                                                       [--search-dir SEARCH_DIR]
                                                       [--json-file JSON_FILE]
                                                       [--sort-by-genome-and-exp]
                                                       [--ignored-accession-ids-file IGNORED_ACCESSION_IDS_FILE]

Recursively find ENCODE_summary.json, parse it and make a CSV for uploading to
the ENCODE portal. Use https://github.com/ENCODE-DCC/pyencoded-
tools/blob/master/ENCODE_submit_files.py for uploading.

optional arguments:
  -h, --help            show this help message and exit
  --out-file OUT_FILE   Output CSV filename)
  --search-dir SEARCH_DIR
                        Root directory to search for ENCODE_summary.json
  --json-file JSON_FILE
                        Specify json file name to be parsed
  --sort-by-genome-and-exp
                        Sort rows by genomes and ENCODE experiment accession
                        ID
  --ignored-accession-ids-file IGNORED_ACCESSION_IDS_FILE
                        Accession IDs in this text file will be ignored. (1
                        acc. ID per line)
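A typical invocation, assuming your pipeline output directories all live under one root (paths are illustrative), would be:

$ python ./utils/parse_summary_ENCODE_accession_recursively.py \
    --search-dir /your/pipeline/runs --out-file ENCODE_accession.csv --sort-by-genome-and-exp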

QC metrics spreadsheet (TSV) generation

./utils/parse_summary_qc_recursively.py recursively finds ENCODE_summary.json files and parses them to generate one big TSV spreadsheet of QC metrics.

$ python parse_summary_qc_recursively.py -h
usage: ENCODE_summary.json parser for QC [-h] [--out-file OUT_FILE]
                                         [--search-dir SEARCH_DIR]
                                         [--json-file JSON_FILE]

Recursively find ENCODE_summary.json, parse it and make a TSV spreadsheet of
QC metrics.

optional arguments:
  -h, --help            show this help message and exit
  --out-file OUT_FILE   Output TSV filename)
  --search-dir SEARCH_DIR
                        Root directory to search for ENCODE_summary.json
  --json-file JSON_FILE
                        Specify json file name to be parsed
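A typical invocation (paths are illustrative) would be:

$ python ./utils/parse_summary_qc_recursively.py --search-dir /your/pipeline/runs --out-file qc_metrics.tsv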

Programming with BDS

Requirements

Troubleshooting

See more troubleshooting tips below.

samtools ncurses bug

samtools: symbol lookup error: /lib/x86_64-linux-gnu/libncurses.so.5: undefined symbol: _nc_putchar

Prepend a directory containing libncurses.so.5 to LD_LIBRARY_PATH. See install_dependencies.sh for the solution.

Error: could not find environment: bds_atac

Unload any Anaconda Python modules. Remove locally installed Anaconda Python from your $PATH.

Alternate Cloud-based Implementations

Error: Java disk space error: Disk quota exceeded

This error happens when ${TMPDIR} or /tmp is full, so Java cannot write temporary files to it. You can specify the Java temporary directory with the following parameter:

-java_tmp_dir [PATH]

Contributors

  • Jin wook Lee - PhD Student, Mechanical Engineering Dept., Stanford University
  • Chuan Sheng Foo - PhD Student, Computer Science Dept., Stanford University
  • Daniel Kim - MD/PhD Student, Biomedical Informatics Program, Stanford University
  • Nathan Boley - Postdoc, Dept. of Genetics, Stanford University
  • Anshul Kundaje - Assistant Professor, Dept. of Genetics, Stanford University

We'd also like to acknowledge Jason Buenrostro, Alicia Schep and William Greenleaf who contributed prototype code for some parts of the ATAC-seq pipeline.
