This repository was archived on 25 Oct 2021.

We have a new bioinformatic resource that largely replaces the functionality of this project! See our new repository here: https://github.com/epi2me-labs/wf-human-sv

This repository is now unsupported and we do not recommend its use. Please contact Oxford Nanopore ([email protected]) for help with your application if it is not possible to upgrade to our new resources, or if we are missing key features.


pipeline-structural-variation

Overview

pipeline-structural-variation is a pipeline for calling structural variations on whole genome sequencing data obtained from Oxford Nanopore sequencing platforms. It accepts FASTQ files and outputs aligned reads and filtered SV calls.

Features

The pipeline performs the following steps:

  • Maps reads using lra
  • Produces QC report using NanoPlot [optional]
  • Estimates appropriate parameters for variant calling depending on read depth
  • Calls variants using cuteSV
  • Filters variants by minimum/maximum length, read support, or type (e.g. insertion, deletion, etc.)
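The final filtering step above can be sketched as follows. The record layout, thresholds, and helper names here are illustrative only; the actual pipeline filters the cuteSV VCF output with its own scripts.

```python
from typing import List, NamedTuple

class SV(NamedTuple):
    """Illustrative SV record; the real pipeline works on cuteSV's VCF output."""
    svtype: str   # e.g. "INS", "DEL", "DUP"
    length: int   # SV length in bp
    support: int  # number of supporting reads

def filter_svs(svs: List[SV], min_len: int = 30, max_len: int = 100_000,
               min_support: int = 10,
               keep_types=("INS", "DEL", "DUP")) -> List[SV]:
    """Keep calls passing length, read-support and type filters."""
    return [sv for sv in svs
            if min_len <= sv.length <= max_len
            and sv.support >= min_support
            and sv.svtype in keep_types]

calls = [SV("DEL", 250, 14), SV("INS", 12, 20), SV("INV", 500, 11)]
filter_svs(calls)  # keeps only the 250 bp deletion
```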

Getting Started

Requirements

The following software must be installed prior to running the pipeline:

  • miniconda3 (all remaining dependencies are installed automatically into a conda environment during installation, see below)

Installation

After installing miniconda3 (see above), install the pipeline as follows:

# Get pipeline
$ wget -O pipeline-structural-variation.tar.gz https://github.com/nanoporetech/pipeline-structural-variation/archive/v2.0.2.tar.gz
$ tar xvzf pipeline-structural-variation.tar.gz
# Change to directory
$ cd pipeline-structural-variation*
# Create conda environment with all dependencies
$ conda env create -n pipeline-structural-variation-v2 -f env.yml
# Activate environment
$ conda activate pipeline-structural-variation-v2
# To test if the installation was successful run
$ snakemake -p -j 1 --configfile config.yml
# Deactivate environment
$ conda deactivate

Input

To run the pipeline the following input files are required:

| Input | Description |
| --- | --- |
| Reference genome | FASTA file containing the reference genome (e.g. GRCh38 for human) |
| Nanopore reads | Folder containing FASTQ files or a single concatenated FASTQ file. Reads should be q-score filtered (see FAQ) |

Output

The main output files created by the pipeline are:

| Output | Description |
| --- | --- |
| Aligned reads | Aligned reads in indexed and sorted BAM format |
| Variant calls | Called variants in VCF format |

After the pipeline has finished you can find the aligned reads in {sample_name}/alignment/ and the indexed and zipped VCF file in {sample_name}/sv_calls/{sample_name}_cutesv_filtered.vcf.gz, where {sample_name} is derived from the snakemake configuration (see below).

Usage

To run the pipeline with default settings invoke snakemake as follows.

$ snakemake all --config input_fastq=/data/pass/ reference_fasta=/data/ref/hg38.fa threads=30

all is the default target (see Targets); this will run all steps required for SV calling and produce a QC report for the input reads using NanoPlot. input_fastq specifies the input FASTQ files or a folder containing multiple input FASTQ files (e.g. the pass folder from MinKNOW).

Targets

| Name | Description |
| --- | --- |
| qc | Only maps reads and produces a QC report |
| call | Maps reads and calls variants but does not produce a QC report |
| eval | Evaluates the called variants against the GIAB truth set (only applicable when sequencing HG002) |

Options

The pipeline accepts several input parameters. They can either be changed in the config.yml (see below) file or specified when running snakemake.

For example:

snakemake eval --config input_fastq=/data/pass/ reference_fasta=/data/ref/hg38.fa min_sv_length=100 threads=30

In the above example the minimum SV length parameter was changed to 100 bp.
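The same parameters can instead be set persistently in config.yml. A minimal illustrative fragment (paths are placeholders; parameter names are those listed under Required and Optional parameters below):

```yaml
# Illustrative config.yml fragment – adjust paths to your data
input_fastq: /data/pass/
reference_fasta: /data/ref/hg38.fa
sample_name: my_sample
threads: 30
min_sv_length: 100
```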

Required parameters

These parameters have to be specified to run the pipeline.

| Parameter | Allowed | Description |
| --- | --- | --- |
| input_fastq | Absolute file path | FASTQ file or folder containing FASTQ files |
| reference_fasta | Absolute file path | FASTA file containing the reference genome |

Optional parameters

| Parameter | Allowed | Default | Description |
| --- | --- | --- | --- |
| sample_name | String without spaces | my_sample | Name of the sample |
| min_sv_length | Integer ≥ 30 | 30 | Minimum SV length |
| max_sv_length | Integer | 100000 | Maximum SV length |
| min_read_length | Integer | 1000 | Minimum read length; shorter reads will be ignored |
| min_read_mapping_quality | Integer | 20 | Minimum mapping quality; reads with lower mapping quality will be discarded |
| min_read_support | Integer | 'auto' | Minimum read support required to call an SV ('auto' for auto-detect) |
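The documentation states only that 'auto' derives min_read_support from read depth. A heuristic of the following general shape illustrates the idea; the function and its constants are hypothetical, and the real formula lives in the pipeline source.

```python
# Hypothetical sketch of a depth-based read-support heuristic.
# The pipeline's actual 'auto' formula may differ.
def auto_min_read_support(mean_coverage: float,
                          fraction: float = 0.33,
                          floor: int = 2) -> int:
    """Require a fixed fraction of the mean depth, with a lower bound."""
    return max(floor, round(mean_coverage * fraction))

auto_min_read_support(60.0)  # -> 20 with these example settings
```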

Annotating and visualising the results

Annotate

This step will become part of the pipeline in the future. Currently, use tools like bedtools or vcfanno to annotate the VCF file with information about genes, repeats, known SVs, etc. from GTF/GFF/BED files. There are many ways to retrieve such annotations.

Visualise

Integrative Genomics Viewer (IGV)

Download from here.

Supported formats: BAM, BED, VCF, WIG, …

Recommended settings for SV analysis:

View -> Preferences -> Alignments

  • Quick consensus mode on
  • Hide indels < 10 bp
  • Label indels > 30 bp
  • Mapping quality threshold: 20

Works well for:

  • Deletions and insertions
  • Inspect reference sequence and flanking regions
  • Compare to annotations
  • Inspect heterozygous variants

Ribbon

Find it here. Supported formats: BAM, BED, VCF

Recommended settings for SV analysis:

  • Multi-read settings -> minimum number of alignments -> 4-5
  • Single-read settings -> Dot plot

Works well for:

  • Visualizing split alignments
  • Inversions, translocations, duplications

Results

We benchmarked the pipeline against the preliminary Genome in a Bottle (GIAB) SV truth set for HG002.

Precision and Recall

The pipeline was run using auto for determining the most suitable cuteSV parameters to get a good balance between precision and recall. Depending on your application you might want to change the min_read_support parameter to maximize either precision or recall.

| Dataset | Pipeline | Min. read support | Coverage | Precision | Recall | Precision gt | Recall gt |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HG002 q7 filtered | v2.0.0 | auto | 60* | 0.955 | 0.979 | 0.945 | 0.979 |
| HG002 q7 filtered | v2.0.0 | auto | 45* | 0.956 | 0.977 | 0.942 | 0.977 |
| HG002 q7 filtered | v2.0.0 | auto | 30* | 0.953 | 0.971 | 0.934 | 0.970 |
| HG002 q7 filtered | v2.0.0 | auto | 20* | 0.954 | 0.959 | 0.923 | 0.957 |
| HG002 q7 filtered | v2.0.0 | auto | 15* | 0.952 | 0.938 | 0.904 | 0.935 |

* Coverage was computed using mosdepth from the BAM file. The BAM file was neither filtered by mapping quality nor read length.

Reproducing the results

Run the pipeline with HG002 data using the eval target. In the output folder you will find eval/summary.txt containing precision and recall values.


Licence and Copyright

(c) 2020 Oxford Nanopore Technologies Ltd.

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

FAQs

I only want my results! What should I do?

Install the pipeline (see Installation) and run as follows:

$ snakemake all --config input_fastq=/data/pass/ reference_fasta=/data/ref/hg38.fa threads=16

You will find your results in the following locations:

Aligned reads:

sv_sample01/alignment/my_sample_lra.bam

Variant calls:

sv_sample01/sv_calls/my_sample_cutesv_filtered.vcf.gz

What kind of structural variants are supported?

Currently, the pipeline has been validated to detect insertions, deletions and duplications (the latter reported as insertions, as in the GIAB truth set) from whole genome sequencing data. Support for inversions and translocations will be added in the future.

What is the minimum SV length supported?

Currently, 30 bp.

How can I filter my reads by q-score?

The most recent version of MinKNOW will perform filtering automatically. When using the FASTQ files from the pass output folder no additional filtering is required. For unfiltered datasets use NanoFilt with a minimum q-score of 6.
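For reference, a read's mean q-score is conventionally computed by averaging per-base error probabilities rather than the Phred values themselves. A minimal sketch of such a calculation (`mean_qscore` is a hypothetical helper, not part of NanoFilt or MinKNOW):

```python
import math

def mean_qscore(quality_string: str, phred_offset: int = 33) -> float:
    """Mean read q-score: average the per-base error probabilities,
    then convert back to the Phred scale (averaging Q values directly
    would overestimate read quality)."""
    probs = [10 ** (-(ord(c) - phred_offset) / 10) for c in quality_string]
    return -10 * math.log10(sum(probs) / len(probs))

mean_qscore("IIII")  # 'I' encodes Q40, so the mean q-score is 40.0
```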

How long will it take to run the pipeline for a 30X human dataset?

When running with 30 CPU cores this will take roughly 3-4 hours.

How much memory will I need?

Memory consumption is determined by lra: 16 GB will be required for human datasets, while 8 GB will be sufficient for smaller genomes.

How much storage will I need?

Unzipped FASTQ files for a 30X human dataset will require roughly 200 GB. In addition, 150 GB will be required for the mapped BAM files.

Are FAST5 files required to run the pipeline?

No.

Can I use the pipeline to detect variants in a cancer dataset or to detect somatic variants?

The pipeline has not yet been validated on cancer samples or for somatic variant detection.

Can I use the pipeline to detect SVs for targeted sequencing experiments (e.g. long amplicons, etc.)?

Calling variants in targeted regions is supported but requires the user to specify a BED file containing the coordinates of the targeted regions.

For example, save the following in a file called targets.bed

chr1 6579767 6589767

and run pipeline as follows:

snakemake all --config input_fastq=/data/pass/ reference_fasta=/data/ref/hg38.fa target_bed=targets.bed threads=30

Make sure that the chromosome names in the BED file match the names in your reference FASTA files (e.g. chr1 vs. 1).
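A quick, illustrative way to check for such mismatches before running the pipeline (helper names are hypothetical):

```python
# Check that every chromosome named in a BED file appears in the
# reference FASTA headers (catches 'chr1' vs. '1' mismatches).
def fasta_chrom_names(fasta_lines):
    """First whitespace-delimited token of each '>' header line."""
    return {line[1:].split()[0] for line in fasta_lines if line.startswith(">")}

def bed_chrom_names(bed_lines):
    return {line.split()[0] for line in bed_lines if line.strip()}

def missing_chroms(bed_lines, fasta_lines):
    return bed_chrom_names(bed_lines) - fasta_chrom_names(fasta_lines)

bed = ["chr1\t6579767\t6589767"]
ref = [">1 Homo sapiens chromosome 1", "ACGT"]
missing_chroms(bed, ref)  # -> {'chr1'}: the names do not match
```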

Can I use this pipeline to detect gene fusions using DNA data?

The current version does not support calling gene fusions from DNA data, as translocation calling is not yet supported; it will be added in the future.

Can I use this pipeline to detect gene fusions using cDNA data?

cDNA data is not supported.

Abbreviations and glossary

| Term/Abbreviation | Description |
| --- | --- |
| SV | Structural variation |
| WGS | Whole genome sequencing |
| Target | Targets (rules) are used by snakemake to define which steps of the pipeline should be executed; changing the target can modify the behaviour of the pipeline to fit certain applications better |
| Precision | Fraction of SV calls that are present in the truth set. Computed as TP / (TP + FP) |
| Recall | Fraction of calls from the truth set that were called correctly. Computed as TP / (TP + FN) |
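The precision and recall definitions above amount to the following (the counts in this example are made up for illustration):

```python
# Worked example of the precision/recall definitions in the glossary.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# e.g. 955 true positives, 45 false positives, 21 missed truth-set calls
precision(955, 45)  # -> 0.955
recall(955, 21)     # -> 955/976 ≈ 0.978
```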

References and Supporting Information

If you use this pipeline please cite:

When using the QC report please also cite:

  • De Coster, W., D’Hert, S., Schultz, D. T., Cruts, M., & Van Broeckhoven, C. (2018). NanoPack: visualizing and processing long-read sequencing data. Bioinformatics, 34(15), 2666–2669. https://doi.org/10.1093/bioinformatics/bty149

For additional information on SV and SV calling please see:

Research Release

Research releases are provided as technology demonstrators to provide early access to features or to stimulate Community development of tools. Support for this software will be minimal and is only provided directly by the developers. Feature requests, improvements, and discussions are welcome and can be implemented via forks and pull requests. However, much as we would like to rectify every issue and piece of feedback users may have, the developers may have limited resources to support this software. Research releases may be unstable and subject to rapid iteration by Oxford Nanopore Technologies.
