
RTNeural


A lightweight neural network inferencing engine written in C++. This library was designed with the intention of being used in real-time systems, specifically real-time audio processing.

Currently supported layers:

  • Dense
  • GRU
  • LSTM
  • Conv1D
  • Conv2D
  • MaxPooling
  • BatchNorm1D
  • BatchNorm2D

Currently supported activations:

  • tanh
  • ReLU
  • Sigmoid
  • SoftMax
  • ELU
  • PReLU

For a complete reference of the available functionality, see the API docs. For more information on the design and purpose of the library, see the reference paper.

Citation

If you are using RTNeural as part of an academic work, please cite the library as follows:

@article{chowdhury2021rtneural,
        title={RTNeural: Fast Neural Inferencing for Real-Time Systems}, 
        author={Jatin Chowdhury},
        year={2021},
        journal={arXiv preprint arXiv:2106.03037}
}

How To Use

RTNeural is capable of taking a neural network that has already been trained, loading the weights from that network, and running inference. Some simple examples are available in the examples/ directory.

Exporting weights from a trained network

Neural networks are typically trained using Python libraries such as TensorFlow or PyTorch. Once you have trained a neural network using one of these frameworks, you can "export" the network weights to a json file, so that RTNeural can read them. An implementation of the export process for a TensorFlow model is provided in python/model_utils.py, and can be used as follows.

# import dependencies
import tensorflow as tf
from tensorflow import keras
from model_utils import save_model

# create TensorFlow model
model = keras.Sequential()
...

# train model
model.train()

# export model weights
save_model(model, 'model_weights.json')

For an example of exporting a model from PyTorch, see this example script.
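As a rough illustration of the idea, the sketch below serializes a dictionary of layer weights to a json file using only the Python standard library. The weight names and nesting here are hypothetical stand-ins, not the exact format RTNeural's loaders expect; in a real PyTorch workflow the tensors in `model.state_dict()` would first be converted to nested lists (e.g. via `tensor.detach().numpy().tolist()`).

```python
import json

# Hypothetical stand-in for a PyTorch state dict: layer parameter names
# mapped to nested lists of weights. Real tensors would be converted
# with tensor.detach().numpy().tolist() before serialization.
state_dict = {
    "dense.weight": [[0.1, -0.2], [0.3, 0.4]],
    "dense.bias": [0.0, 0.5],
}

def save_state_dict(weights, path):
    """Write the weight dictionary to a json file for a C++ loader to parse."""
    with open(path, "w") as json_file:
        json.dump(weights, json_file, indent=4)

save_state_dict(state_dict, "model_weights.json")
```

The exact key names and nesting must match what the C++ loading code expects, so it is worth inspecting a file produced by the real export script before writing your own.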

Creating a model

Next, you can create an inferencing engine in C++ directly from the exported json file:

#include <RTNeural.h>
...
std::ifstream jsonStream("model_weights.json", std::ifstream::binary);
auto model = RTNeural::json_parser::parseJson<double>(jsonStream);

Running inference

Before running inference, it is recommended to "reset" the state of your model (if the model has state).

model->reset();

Then, you may run inference as follows:

double input[] = { 1.0, 0.5, -0.1 }; // set up input vector
double output = model->forward(input); // compute output

Compile-Time API

The code shown above will create the inferencing engine dynamically at run-time. If the model architecture is fixed at compile-time, it may be preferable to use RTNeural's API for defining an inferencing engine type at compile-time, which can significantly improve performance.

// define model type
RTNeural::ModelT<double, 8, 1,
    RTNeural::DenseT<double, 8, 8>,
    RTNeural::TanhActivationT<double, 8>,
    RTNeural::DenseT<double, 8, 1>
> modelT;

// load model weights from json
std::ifstream jsonStream("model_weights.json", std::ifstream::binary);
modelT.parseJson(jsonStream);

modelT.reset(); // reset state

double input[] = { 1.0, 0.5, -0.1 }; // set up input vector
double output = modelT.forward(input); // compute output

Loading Layers from PyTorch

The example code above assumes that the trained model was exported from TensorFlow. For loading models exported from PyTorch, the RTNeural::torch_helpers namespace provides helper functions for loading individual layers.

// load model weights from json
std::ifstream jsonStream("model_weights.json", std::ifstream::binary);
nlohmann::json modelJson;
jsonStream >> modelJson;

// load a layer from a static model
RTNeural::ModelT<float, 1, 1, RTNeural::DenseT<float, 1, 1>> model;
RTNeural::torch_helpers::loadDense(modelJson, "name_of_layer.", model.get<0>());

For more examples, see the examples/torch directory.

Building with CMake

RTNeural is built with CMake, and the easiest way to link is to include RTNeural as a submodule:

...
add_subdirectory(RTNeural)
target_link_libraries(MyCMakeProject LINK_PUBLIC RTNeural)

If you are trying to use RTNeural in a project that does not use CMake, please see the instructions below.

Choosing a Backend

RTNeural supports three backends: Eigen, xsimd, and the C++ STL. You can choose your backend by passing either -DRTNEURAL_EIGEN=ON, -DRTNEURAL_XSIMD=ON, or -DRTNEURAL_STL=ON to your CMake configuration. By default, the Eigen backend will be used. Alternatively, you may select your backend in your CMake configuration as follows:

set(RTNEURAL_XSIMD ON CACHE BOOL "Use RTNeural with this backend" FORCE)
add_subdirectory(modules/RTNeural)

The Eigen backend typically has the best performance for larger networks, while smaller networks may perform better with the xsimd backend. However, it is recommended to measure the performance of your network with every backend available on your target platform to ensure optimal performance. For more information, see the benchmark results.

RTNeural also has experimental support for Apple's Accelerate framework (-DRTNEURAL_ACCELERATE=ON). Please note that the Accelerate backend can only be used when compiling for Apple devices, and does not currently support defining compile-time inferencing engines.

Note that you must abide by the licensing rules of whichever backend library you choose.

Other configuration flags

If you would like to build RTNeural with the AVX SIMD extensions, you may run CMake with the -DRTNEURAL_USE_AVX=ON flag. Note that this flag will have no effect when compiling for platforms that do not support AVX instructions.

Building the Unit Tests

To build RTNeural's unit tests, run cmake -Bbuild -DBUILD_TESTS=ON, followed by cmake --build build. To run the full testing suite, run ./build/rtneural_tests all. For more information, run ./build/rtneural_tests --help.

Building the Performance Benchmarks

To build the performance benchmarks, run cmake -Bbuild -DBUILD_BENCH=ON, followed by cmake --build build --config Release. To run the layer benchmarks, run ./build/rtneural_layer_bench <layer> <length> <in_size> <out_size>. To run the model benchmark, run ./build/rtneural_model_bench.

Building the Examples

To build the RTNeural examples run:

cmake -Bbuild -DBUILD_EXAMPLES=ON
cmake --build build --config Release

The example programs will then be located in build/examples_out/, and may be run from there.

An example of using RTNeural within a real-time audio plugin can be found on GitHub here.

Building without CMake

If you wish to use RTNeural in a project that doesn't use CMake, RTNeural can be used as a header-only library, with a few extra steps:

  1. Add a compile-time definition to set a default byte alignment for RTNeural. In most cases, this definition will be one of:

    • RTNEURAL_DEFAULT_ALIGNMENT=16
    • RTNEURAL_DEFAULT_ALIGNMENT=32
  2. Add a compile-time definition to select a backend. If you wish to use the STL backend, then no definition is required. This definition should be one of the following:

    • RTNEURAL_USE_EIGEN=1
    • RTNEURAL_USE_XSIMD=1
  3. Add the necessary include paths for your chosen backend. This path will be one of the following:

    • <repo>/modules/Eigen
    • <repo>/modules/xsimd/include/xsimd

It may also be worth checking out the example Makefile.
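The steps above might translate to a direct compiler invocation along these lines. The file names and include paths here are illustrative assumptions and will depend on where the RTNeural repository is checked out relative to your project:

```shell
# Illustrative g++ invocation for the Eigen backend, building without CMake.
# Adjust paths to match your checkout of the RTNeural repository.
g++ -std=c++17 -O3 \
    -DRTNEURAL_DEFAULT_ALIGNMENT=16 \
    -DRTNEURAL_USE_EIGEN=1 \
    -IRTNeural \
    -IRTNeural/modules/Eigen \
    main.cpp -o my_app
```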

Contributing

Contributions to this project are most welcome! Currently, there is considerable need for the following improvements:

  • Improved support for 2-dimensional input/output data.
  • More robust support for exporting/loading models.
  • Support for more activation layers.
  • Any changes that improve overall performance.

General code maintenance and documentation are always appreciated as well! Note that if you are implementing a new layer type, you are not required to provide support for all the backends, though it is recommended to at least provide a "fallback" implementation using the STL backend.

Contributors

Please thank the following individuals for their important contributions:

  • wayne-chen: Softmax activation layer and general API improvements
  • hollance: RTNeural logo
  • stepanmk: Eigen Conv1D layer optimization
  • DamRsn: Eigen implementations for Conv2D and BatchNorm2D layers

Powered by RTNeural

RTNeural is currently being used by several audio plugins and other projects:

  • 4000DB-NeuralAmp: Neural emulation of the pre-amp section from the Akai 4000DB tape machine.
  • AIDA-X: An AU/CLAP/LV2/VST2/VST3 audio plugin that loads RTNeural models and cabinet IRs.
  • BYOD: A guitar distortion plugin containing several machine learning-based effects.
  • Chow Centaur: A guitar pedal emulation plugin, using a real-time recurrent neural network.
  • Chow Tape Model: An analog tape emulation, using a real-time dense neural network.
  • cppTimbreID: An audio feature extraction library.
  • GuitarML: GuitarML plugins use machine learning to model guitar amplifiers and effects.
  • MLTerror15: Deeply learned simulator for the Orange Tiny Terror with Recurrent Neural Networks.
  • NeuralNote: An audio-to-MIDI transcription plugin using Spotify's basic-pitch model.
  • rt-neural-lv2: A headless lv2 plugin using RTNeural to model guitar pedals and amplifiers.
  • Tone Empire plugins:
    • LVL - 01: An A.I./M.L.-based compressor effect.
    • TM700: A machine learning tape emulation effect.
    • Neural Q: An analog emulation 2-band EQ, using recurrent neural networks.
  • ToobAmp: Guitar effect plugins for the Raspberry Pi.

If you are using RTNeural in one of your projects, let us know and we will add it to this list!

License

RTNeural is open source, and is licensed under the BSD 3-clause license.

Enjoy!
