
Remora

Remora models predict methylation/modified base status separately from basecalling. The Remora repository focuses on the preparation of modified base training data and the training of modified base models. Some functionality for running Remora models and investigating raw signal is also provided. For production modified base calling use Dorado, and for recommended downstream processing of modified base calls use modkit. For more advanced modified base data preparation from "randomers", see the Betta release community note and contact customer support to inquire about access ([email protected]).

Installation

Install from pypi:

pip install ont-remora

Install from github source for development:

git clone [email protected]:nanoporetech/remora.git
pip install -e remora/[tests]

It is recommended that Remora be installed in a virtual environment.
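
For example, to create a virtual environment with Python's built-in venv module and install the PyPI package into it (the commands are those given above, combined into one sequence):

python3 -m venv venv
source venv/bin/activate
pip install ont-remora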

See help for any Remora sub-command with the -h flag.

Getting Started

Remora models predict modified bases anchored to canonical basecalls or reference sequence from a nanopore read.

The Remora training/prediction input unit (referred to as a chunk) consists of:

  1. A section of normalized signal
  2. The canonical bases attributed to that section of signal
  3. A mapping between the two

Chunks have a fixed signal length defined at data preparation/model training time. These values are saved with the Remora model so that chunks can be extracted in the same manner at inference time. A fixed position within the chunk is defined as the "focus position", around which the fixed-length signal chunk is extracted. By default, this position is the center of the "focus base" being interrogated by the model.
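
As a minimal sketch of this idea (not the Remora implementation), a fixed-length chunk could be cut out of the normalized signal around a focus position as follows, assuming the chunk context values count signal points on either side of that position (e.g. --chunk-context 50 50):

import numpy as np

def extract_chunk(signal, focus_sig_idx, before=50, after=50):
    """Return a fixed-length slice of normalized signal around the focus position.

    Assumes `before`/`after` count signal points on either side of the focus
    position (as with --chunk-context 50 50). Real data preparation must also
    handle chunks that would run off either end of the read.
    """
    start = focus_sig_idx - before
    end = focus_sig_idx + after
    if start < 0 or end > signal.size:
        raise ValueError("chunk extends beyond the read signal")
    return signal[start:end]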

The canonical bases and mapping to signal (a.k.a. the "move table") are combined for input into the neural network in several steps. First, each base is expanded to the k-mer surrounding that base (as defined by the --kmer-context-bases hyper-parameter). Then, each k-mer is expanded according to the move table. Finally, each k-mer is one-hot encoded for input into the neural network. This procedure is depicted in the figure below.

[Figure: neural network sequence encoding]
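
A minimal sketch of this encoding, using a simplified base-to-signal map in place of the move table (an illustration of the idea, not the Remora implementation; the names and defaults here are assumptions):

import numpy as np

BASE_TO_IDX = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_sequence(seq, seq_to_sig_map, kmer_context_bases=(1, 1)):
    """One-hot k-mer encoding across signal positions (illustrative only).

    seq_to_sig_map gives, for each signal position, the index of the base it
    belongs to (a simplified stand-in for the move table). Each base is first
    expanded to its surrounding k-mer, each k-mer is then repeated over the
    signal positions mapped to that base, and finally each k-mer is one-hot
    encoded, giving a (num_signal_positions, 4 * k) array.
    """
    before, after = kmer_context_bases
    k = before + after + 1
    padded = "N" * before + seq + "N" * after  # pad so end k-mers are defined
    kmers = [padded[i:i + k] for i in range(len(seq))]

    encoded = np.zeros((len(seq_to_sig_map), 4 * k), dtype=np.float32)
    for sig_idx, base_idx in enumerate(seq_to_sig_map):
        for pos, base in enumerate(kmers[base_idx]):
            if base in BASE_TO_IDX:  # "N" padding positions stay all-zero
                encoded[sig_idx, 4 * pos + BASE_TO_IDX[base]] = 1.0
    return encoded

With kmer_context_bases=(1, 1), each signal position is described by a 3-mer, giving a 12-wide one-hot vector per position.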

Data Preparation

Remora data preparation begins from a POD5 file containing signal data and a BAM file containing basecalls from that POD5 file. Note that the BAM file must contain the move table (--emit-moves in Dorado) and the MD tag (produced by default in Dorado with mapping, or via the --MD argument to minimap2). Since minimap2 does not accept SAM/BAM input, use the following pipeline when aligning with minimap2 so that the move table tags are carried through the alignment step:

samtools fastq -T "*" [in.bam] | minimap2 -y -ax map-ont [ref.fa] - | samtools view -b -o [out.bam]
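
A quick way to confirm that the required tags are present is to check a few records with pysam; this is just an illustrative check (the file name matches the test data used below):

import pysam

# Check the first few basecall records for the move table (mv) and MD tags
# required by `remora dataset prepare`.
with pysam.AlignmentFile("can_mappings.bam", check_sq=False) as bam:
    for i, record in enumerate(bam):
        print(record.query_name, "mv:", record.has_tag("mv"), "MD:", record.has_tag("MD"))
        if i >= 4:
            break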

The following example generates training data from canonical (PCR) and modified (M.SssI treatment) samples in the same fashion as the released 5mC CG-context models. Example reads can be found in the Remora repository (see test/data/ directory).

K-mer tables for applicable conditions can be found in the kmer_models repository.

remora \
  dataset prepare \
  can_reads.pod5 \
  can_mappings.bam \
  --output-path can_chunks \
  --refine-kmer-level-table levels.txt \
  --refine-rough-rescale \
  --motif CG 0 \
  --mod-base-control
remora \
  dataset prepare \
  mod_reads.pod5 \
  mod_mappings.bam \
  --output-path mod_chunks \
  --refine-kmer-level-table levels.txt \
  --refine-rough-rescale \
  --motif CG 0 \
  --mod-base m 5mC

The above commands each produce a core Remora dataset stored in the directory defined by --output-path. Core datasets contain memory mapped numpy files for each core array (chunk data) and a JSON format metadata config file. These memory mapped files allow efficient access to very large datasets.

Before Remora 3.0, datasets were stored as numpy array dictionaries. Updating older datasets can be accomplished with the scripts/update_dataset.py script included in the repository.

Composing Datasets

Core datasets (or other composed datasets) can be composed to produce a new dataset. The remora dataset make_config command creates a config file specifying the composition of the new dataset. When reading batches from such a combined dataset, the default behavior is to draw chunks randomly from the entire set of chunks. This default is useful when combining multiple flowcells of the same condition.

The --dataset-weights argument produces a config which generates batches with a fixed proportion of chunks from each input dataset. This setting is useful when combining different data types, for example control and modified datasets.

In addition, the remora dataset merge command is supplied to merge datasets, copying the data into a new core Remora dataset. This may increase efficiency of data access for datasets composed of many core datasets, but only supports the default behavior from the make_config command.

Composed dataset config files can also be specified manually. Config files are JSON files containing a single list, where each element is a list of two items: the path to the dataset and its weight (which must be a positive value). The config file written by make_config also contains the dataset hash as an optional third field, used to ensure the contents of a dataset are unchanged. An example is shown below.
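
For instance, a hand-written config combining the two datasets prepared above with equal weights might look like this (the optional hash field is omitted):

[
  ["can_chunks", 1],
  ["mod_chunks", 1]
]

This mirrors the make_config invocation shown below, without the optional hash field.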

Metadata attributes from each core dataset are checked for compatibility and merged where applicable. Chunk raw data are loaded from each core dataset at specified proportions to construct batches at loading time. In a break from Remora <3.0, datasets allow "infinite iteration", where each core dataset is drawn from indefinitely and independently to supply training chunks. For validation from a fixed set of chunks, finite iteration is also supported.

To generate a dataset config from the datasets created above one can use the following command.

remora \
  dataset make_config \
  train_dataset.jsn \
  can_chunks \
  mod_chunks \
  --dataset-weights 1 1 \
  --log-filename train_dataset.log

Model Training

Models are trained with the remora model train command. For example a model can be trained with the following command.

remora \
  model train \
  train_dataset.jsn \
  --model remora/models/ConvLSTM_w_ref.py \
  --device 0 \
  --chunk-context 50 50 \
  --output-path train_results

This command will produce a "best" model in TorchScript format for use with Bonito or with the remora infer and remora validate commands. Models can be exported for use in Dorado with the remora model export train_results/model_best.pt command.

Model Inference

For testing purposes, inference within Remora is provided. For large-scale runs, using the exported model in Dorado during basecalling is recommended.

remora \
  infer from_pod5_and_bam \
  can_signal.pod5 \
  can_mappings.bam \
  --model train_results/model_best.pt \
  --out-file can_infer.bam \
  --log-filename can_infer.log \
  --device 0
remora \
  infer from_pod5_and_bam \
  mod_signal.pod5 \
  mod_mappings.bam \
  --model train_results/model_best.pt \
  --out-file mod_infer.bam \
  --log-filename mod_infer.log \
  --device 0

Finally, Remora provides tools to validate these results. Ground truth BED files reference positions where each read should be called as the modified or canonical base listed in the BED name field. Note that in the test files the control BED file has C in the name field, while the modified BED file has m (the single-letter code for 5mC).

WARNING: There is a bug in pysam which causes all-context (e.g. --motif C 0) modified model calls to produce invalid results with this command. This issue is reported here. We are investigating solutions to bypass this issue, including dropping this command.

remora \
  validate from_modbams \
  --bam-and-bed can_infer.bam can_ground_truth.bed \
  --bam-and-bed mod_infer.bam mod_ground_truth.bed \
  --explicit-mod-tag-used

Pre-trained Models

See the currently released models with remora model list_pretrained. Pre-trained models are stored remotely and can be downloaded using the remora model download command, or they will be downloaded on demand when needed.
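
For example, to list the released models and see the options for selecting one to download:

remora model list_pretrained
remora model download -h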

Models may be run from Bonito. See Bonito documentation to apply Remora models.

More advanced research models may be supplied via Rerio. These models must be downloaded from Rerio, and the path to the download then provided to Remora. Note that older ONNX format models require Remora version < 2.0.

Python API and Raw Signal Analysis

Raw signal plotting is available via the remora analyze plot ref_region command.

The plot ref_region command is useful for gaining intuition about signal attributes and for visualizing signal shifts around modified bases. As an example using the test data, the following command produces the plots below. Note that only a single POD5 file per sample is allowed as input and that the BAM records must contain the mv and MD tags (see the "Data Preparation" section above for details).

remora \
  analyze plot ref_region \
  --pod5-and-bam can_reads.pod5 can_mappings.bam \
  --pod5-and-bam mod_reads.pod5 mod_mappings.bam \
  --ref-regions ref_regions.bed \
  --highlight-ranges mod_gt.bed \
  --refine-kmer-level-table levels.txt \
  --refine-rough-rescale \
  --log-filename log.txt

[Figure: plot ref_region output (forward strand)]

[Figure: plot ref_region output (reverse strand)]

The Remora API for accessing, manipulating, and visualizing nanopore reads, including signal, basecalls, and reference mappings, is described in more detail in the notebooks section of this repository.
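
As a sketch of the kind of data this API exposes (not the Remora API itself), the raw signal and move table for the test reads above can be read directly with the pod5 and pysam packages:

import pod5
import pysam

# Load raw signal per read from the POD5 file, then pair it with the move
# table from the corresponding basecalls BAM. File names match the test data.
with pod5.Reader("can_reads.pod5") as pod5_file:
    signal_by_read = {str(read.read_id): read.signal for read in pod5_file.reads()}

with pysam.AlignmentFile("can_mappings.bam", check_sq=False) as bam:
    for record in bam:
        signal = signal_by_read.get(record.query_name)
        if signal is None or not record.has_tag("mv"):
            continue
        move_table = record.get_tag("mv")
        print(record.query_name, "signal points:", signal.size,
              "move table entries:", len(move_table))
        break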

Terms and Licence

This is a research release provided under the terms of the Oxford Nanopore Technologies' Public Licence. Research releases are provided as technology demonstrators to provide early access to features or stimulate Community development of tools. Support for this software will be minimal and is only provided directly by the developers. Feature requests, improvements, and discussions are welcome and can be implemented by forking and pull requests. Much as we would like to rectify every issue, the developers may have limited resources for support of this software. Research releases may be unstable and subject to rapid change by Oxford Nanopore Technologies.

© 2021-2023 Oxford Nanopore Technologies Ltd. Remora is distributed under the terms of the Oxford Nanopore Technologies' Public Licence.
