
RAVE VST (BETA)


This project has been archived (21 June 2023)

You should not expect any updates or support for this project until further notice. If you are interested in getting RAVE to run in realtime, consider using nn~ instead!

Include RAVE models in your DAW for realtime deep-learning-based processing

  • VST / AU / Standalone plugins available
  • MacOS (M1 works, but you'll need to build it yourself) & Unix (the Windows build is still experimental at this point)
  • Reconstruction & Prior modes available

[rave_audition demo GIF]


1) How to use

Audio settings

Click the arrow button on the right to open or close the audio settings panel. Here you can adjust:

  • Input: Gain, Channel, Compression threshold & ratio
  • Output: Gain & Dry / Wet mix
  • Buffer size: the internal buffer size (a smaller buffer means lower latency but more audio clicks)

You can switch between reconstruction & prior modes using the tickbox on the left.

Reconstruction mode

In reconstruction mode, RAVE will use the audio input given by your DAW and reconstruct it.

  • Latent bias & scale can be changed for each of the first 8 latent dimensions: select the latent you want to edit on the central wheel, then use the two bottom-left buttons to change the values
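Conceptually (an assumption about the internals rather than documented behaviour), each edited latent dimension is transformed roughly as z_i' = scale_i * z_i + bias_i before being decoded.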
Prior mode

In Prior mode, RAVE will move through the latent space using its prior.

  • You can adjust the latent noise, which will add noise to all latent dimensions

Stereo Width

This knob sets the stereo separation between the two output channels of the RAVE model.
Both channels receive the same input, but differences in the random sampling produce slightly different outputs, resulting in a nice stereo effect.


Model Explorer

The Model Explorer Button switches to the model explorer window.

  • Model download:
    Download models available from our API
    If you want to submit your own checkpoints to be made available via our API, please open an issue tagged "enhancement" and we'll gladly serve them :)
  • Custom models import:
    Use this to select your custom models in your file explorer; this will put them in the right folder and refresh the available models list.
    If you want to manage your local models yourself, the files are located in (see the example after this list):
    • ~/.config/ACIDS/RAVE/ (UNIX)
    • ~/Library/Application Support/ACIDS/RAVE/ (MacOS)
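For example, to drop an exported model into place manually on a UNIX system (a minimal sketch; the .ts extension assumes a TorchScript export):

mkdir -p ~/.config/ACIDS/RAVE/
cp my_model.ts ~/.config/ACIDS/RAVE/

The model should then appear in the available models list (you may need to restart the plugin).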

Using your own trained models

If you want to use your own trained models in the VST, you have to export them with the --stereo true flag.
Then use the VST import button to move the files into the correct folder, as explained in the previous section.
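A hypothetical export invocation (the script name and the other arguments are assumptions and depend on your RAVE version; only the --stereo true flag is documented here):

python export_rave.py --run /path/to/your/training/run --stereo true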


2) How to install

To get the precompiled binaries

  • Go to the "Actions" panel of this repository
  • Select the last run
  • Download the binaries for your OS
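If you have the GitHub CLI installed, you can fetch the same artifacts from the command line (an optional alternative to the web UI; <run-id> is the id reported by the first command):

gh run list --repo acids-ircam/rave_vst --limit 5
gh run download <run-id> --repo acids-ircam/rave_vst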

3) How to build

We use CMake for the build process.
PyTorch libraries (and MKL, if you're on UNIX) will be downloaded automatically.

Tested environments:

OS                  CMake   C++ Compiler  Available formats      Notes
MacOS 10.15.7       3.21.3  Clang 11.0.3  VST / Standalone / AU
MacOS M1 12.3.1     3.20.3  Clang 12.0.0  VST / Standalone / AU  Clang ARM
Ubuntu 20.04.4 LTS          G++ 9.4.0     VST / Standalone
Arch Linux          3.23.2  G++ 12.1.0    VST / Standalone       use JUCE:develop branch, see issue #19
Fedora 33           3.19.7  G++ 10.3.1    Standalone
Windows 10          3.23.1                Standalone             Experimental

1) If compiling on UNIX, install the needed dependencies:

  • Ubuntu:
    sudo apt-get update && sudo apt-get install -y git cmake g++ libx11-dev libxrandr-dev libxinerama-dev libxcursor-dev libfreetype-dev libcurl4-openssl-dev libasound2-dev
  • Fedora:
    sudo dnf update ; sudo dnf install git cmake g++ libX11-devel libXrandr-devel libXinerama-devel libXcursor-devel freetype-devel libcurl-devel alsa-lib-devel
  • Arch Linux: sudo pacman -S git cmake gcc libx11 libxrandr libxinerama libxcursor freetype2 libcurl-compat alsa-lib (or libcurl-gnutls)

2) Clone the repository:

cd {YOUR_INSTALL_FOLDER} ; git clone git@github.com:acids-ircam/rave_vst.git ; cd rave_vst

3) Get Juce:

  • Ubuntu / Fedora 33: git submodule update --init --recursive --progress
  • Arch Linux: git clone -b develop --single-branch https://github.com/juce-framework/JUCE; mv JUCE juce

4) Setup the build:

mkdir build; cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
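On Apple Silicon you may need to target the arm64 architecture explicitly (an assumption based on the M1 note above; a native build may already default to it):

cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_OSX_ARCHITECTURES=arm64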

5) Build:

cmake --build . --config Release -j 4

6) Enjoy!

Once the build process has finished, you'll find the compiled binaries in rave-vst/build/rave-vst_artefacts/Release/

  • MacOS: ./build/rave-vst_artefacts/Release/Standalone/RAVE.app/Contents/MacOS/RAVE
  • UNIX: ./build/rave-vst_artefacts/Release/Standalone/RAVE
  • Windows: ./build/rave-vst_artefacts/Release/Standalone/RAVE.exe
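To make the VST3 / AU builds visible to your DAW you will typically need to copy them into your user plugin folders. A sketch for MacOS, where the bundle names RAVE.vst3 and RAVE.component are assumptions inferred from the standalone name above:

cp -r build/rave-vst_artefacts/Release/VST3/RAVE.vst3 ~/Library/Audio/Plug-Ins/VST3/
cp -r build/rave-vst_artefacts/Release/AU/RAVE.component ~/Library/Audio/Plug-Ins/Components/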
