  • Stars: 116
  • Rank: 303,894 (Top 6%)
  • Language: Python
  • License: MIT License
  • Created: over 5 years ago
  • Updated: over 1 year ago

Repository Details

Headless multitrack mixing console in Python

Installation

pip install git+https://github.com/csteinmetz1/pymixconsole

Usage

Set up a mixing console with a set of tracks from a multitrack project and apply processing per block. By default, a console contains n channels, and each channel has a series of default processors:

gain -> polarity inverter -> parametric EQ -> compressor -> gain (fader) -> stereo panner

These are set up so that if you do not modify their settings, the signal passes largely unprocessed. Additionally, a console is initialized with two effect busses, one for reverb and one for delay. Finally, there is a master bus, which sums the output of all the busses and channels and then applies a simple processing chain:

parametric EQ -> compressor

The example below shows how to initialize a console, pass multitrack data into it, and process the audio block by block to get the output.

Basic processing

One way to apply processing is to create a two-dimensional array of shape [samples, tracks/channels], where each column is a mono stream of audio that will be processed by the corresponding channel in the console.

In this example we create an array with 8 channels of audio and then instantiate a default console with 8 channels. We iterate over the input data in steps of block_size and pass each block to the console's process_block() function, which applies each channel's processors and returns a stereo mix. We store this output in a pre-allocated array and finally save it to a .wav file with pySoundFile at the end.

import numpy as np
import soundfile as sf
import pymixconsole as pymc

data = np.random.rand(44100,8)   # one second of audio for 8 mono tracks
rate = 44100                     # 44.1 kHz sampling rate
block_size = 512                 # processor block size

# create a mix console with settings that match our audio data
console = pymc.Console(block_size=block_size, sample_rate=rate, num_channels=8)

# array to hold the output of the console (stereo)
# array to hold the output of the console (stereo); zeros so any trailing
# samples that do not fill a complete block are left as silence
out = np.zeros(shape=(data.shape[0], 2))

# iterate over each block of data
for i in range(data.shape[0]//block_size):

    start = i * block_size 
    stop  = start + block_size

    out[start:stop,:] = console.process_block(data[start:stop,:])

# save out the processed audio
sf.write("output.wav", out, rate)

Console control

pymixconsole provides a high level of control over how the mix console is set up. By default, a console includes the supplied number of channels, as well as two busses (one for reverb, one for delay) and a master bus that features a compressor and equalizer. Each channel is created with a pre-gain, polarity inverter, equaliser, compressor, post-gain, and panner.

There are three levels of processors for each channel: pre-processors, core-processors, and post-processors. The distinction is useful because we want to impose constraints on how these processors may be randomized in the randomize() method. The short explanation is that the order of pre- and post-processors is never shuffled, while the order of core-processors can be.

The defaults were chosen to be a good starting place for basic processing, but the user can customize this completely. For example, an extra processor can be added to a channel at any time. Here we add a second compressor to the third channel's core-processors (channels are zero-indexed) and then change its threshold parameter.

console.channels[2].processors.add(pymc.processors.Compressor(name="second-comp"))
console.channels[2].processors.get("second-comp").parameters.threshold.value = -22.0
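
The other compressor controls listed in the Processor API tables below can be adjusted in the same way; here is a short sketch, assuming each parameter in the table is exposed through the same parameters attribute used for threshold above.

# continuing from the console above: tweak the compressor added to channel 3
comp = console.channels[2].processors.get("second-comp")
comp.parameters.ratio.value = 4.0           # heavier ratio
comp.parameters.attack_time.value = 5.0     # faster attack (ms)
comp.parameters.makeup_gain.value = 3.0     # compensate for gain reduction (dB)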

Processor API

A number of basic processor units are provided, which can be added to a channel, a bus, or the master bus; a short sketch follows the list below.

  • Gain
  • Polarity inverter
  • Converter
  • Panner
  • Equaliser
  • Compressor
  • Delay
  • Distortion
  • Reverb
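
As a rough sketch of how these units can be combined, the snippet below adds a delay and a gain trim to the first channel of the console created earlier, using the same add()/get() pattern as above. The class names pymc.processors.Delay and pymc.processors.Gain are assumed here to mirror pymc.processors.Compressor; check the package for the exact names.

# continuing from the console above; Delay and Gain class names are assumed
channel = console.channels[0]

channel.processors.add(pymc.processors.Delay(name="slap-delay"))
channel.processors.get("slap-delay").parameters.delay.value = 4410     # samples (~100 ms at 44.1 kHz)
channel.processors.get("slap-delay").parameters.wet_mix.value = 0.2

channel.processors.add(pymc.processors.Gain(name="trim"))
channel.processors.get("trim").parameters.gain.value = -3.0            # dB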

Gain

Parameter   Min.    Max.   Default   Units   Type    Options
gain        -80.0   24.0   0.0       dB      float

Panner

Parameter   Min.   Max.   Default    Units     Type     Values
pan         0.0    1.0    0.5                  float
outputs     2      2      2          outputs   int
pan_law                   "-4.5dB"             string   "linear", "constant_power", "-4.5dB"

Equalizer

Parameter          Min.     Max.      Default   Units   Type    Values
low_shelf_gain     -24.0    24.0      0.0       dB      float
low_shelf_freq     20.0     1000.0    80.0      Hz      float
first_band_gain    -24.0    24.0      0.0       dB      float
first_band_freq    200.0    5000.0    400.0     Hz      float
first_band_q       0.1      10.0      0.7               float
second_band_gain   -24.0    24.0      0.0       dB      float
second_band_freq   500.0    6000.0    1000.0    Hz      float
second_band_q      0.1      10.0      0.7               float
third_band_gain    -24.0    24.0      0.0       dB      float
third_band_freq    2000.0   10000.0   5000.0    Hz      float
third_band_q       0.1      10.0      0.7               float
high_shelf_gain    -24.0    24.0      0.0       dB      float
high_shelf_freq    8000.0   20000.0   10000.0   Hz      float
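
Similarly, a brief sketch of adjusting a few equalizer bands, assuming a pymc.processors.Equaliser class named to match the processor list above:

# continuing from the console above; the Equaliser class name is assumed
channel = console.channels[1]
channel.processors.add(pymc.processors.Equaliser(name="tone"))

eq = channel.processors.get("tone")
eq.parameters.low_shelf_gain.value = -3.0      # dB, tame the low end
eq.parameters.first_band_freq.value = 350.0    # Hz
eq.parameters.first_band_gain.value = 2.0      # dB
eq.parameters.high_shelf_gain.value = 1.5      # dB, gentle top-end lift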

Delay

Parameter   Min.   Max.    Default   Units     Type    Values
delay       0      65536   5000      samples   int
feedback    0.0    1.0     0.3                 float
dry_mix     0.0    1.0     0.9                 float
wet_mix     0.0    1.0     0.0                 float

Compressor

Parameter      Min.     Max.    Default   Units   Type    Values
threshold      -80.0    0.0     0.0       dB      float
attack_time    0.001    500.0   10.0      ms      float
release_time   0.0      1.0     100.0     ms      float
ratio          1.0      100.0   2.0               float
makeup_gain    -12.0    24.0    0.0       dB      float

Algorithmic reverb

Parameter       Min.   Max.   Default   Units   Type    Values
room_size       0.1    1.0    0.5               float
damping         0.0    1.0    1.0               float
dry_mix         0.0    1.0    0.9               float
wet_mix         0.0    1.0    0.1               float
stereo_spread   0      100    23                int

Convolutional reverb

Parameter   Min.   Max.   Default    Units   Type     Values
dry_mix     0.0    1.0    0.9                float
wet_mix     0.0    1.0    0.1                float
decay       0.0    1.0    1.0                float
type                      "-4.5dB"           string   "sm-room", "md-room", "lg-room", "hall", "plate"

Cite

If you use this in your work, please consider citing:

  @article{steinmetz2020mixing,
    title={Automatic multitrack mixing with a differentiable mixing console of neural audio effects},
    author={Steinmetz, Christian J. and Pons, Jordi and Pascual, Santiago and Serrà, Joan},
    journal={arXiv:2010.10291},
    year={2020}
  }

More Repositories

1. ai-audio-startups - Community list of startups working with AI in audio and music technology (1,543 stars)
2. auraloss - Collection of audio-focused loss functions in PyTorch (Python, 731 stars)
3. pyloudnorm - Flexible audio loudness meter in Python with implementation of ITU-R BS.1770-4 loudness algorithm (Python, 635 stars)
4. dasp-pytorch - Differentiable audio signal processors in PyTorch (Python, 226 stars)
5. steerable-nafx - Steerable discovery of neural audio effects (Jupyter Notebook, 201 stars)
6. micro-tcn - Efficient neural networks for analog audio effect modeling (Python, 150 stars)
7. ronn - Randomized overdrive neural networks (Jupyter Notebook, 137 stars)
8. wavebeat - End-to-end beat and downbeat tracking in the time domain (Python, 118 stars)
9. AutomaticMixingPapers - Important papers and associated code on automatic mixing research (HTML, 102 stars)
10. automix-toolkit - Models and datasets for training deep learning automatic mixing models (Python, 95 stars)
11. IIRNet - Direct design of biquad filter cascades with deep learning by sampling random polynomials (Python, 83 stars)
12. NeuralReverberator - Reverb synthesis via a spectral autoencoder (Python, 80 stars)
13. flowEQ - β-VAE for intelligent control of a five band parametric EQ (MATLAB, 67 stars)
14. bela-zlc - Zero-latency convolution on Bela platform (C++, 26 stars)
15. MixCNN - Convolutional Neural Network for multitrack mix leveling (Python, 18 stars)
16. neural-2a - Neural network model of the analog LA-2A dynamic range compressor (CMake, 17 stars)
17. findio - The Spotify search you don't need and never wanted (HTML, 13 stars)
18. computational-music-creativity - Materials for the Computational Music Creativity course at UPF-MTG (Spring 2020) (TeX, 12 stars)
19. PhaseAnalyzer - C++ plugin built with the JUCE Framework to provide insight about the relative phase relationship of audio signals (C++, 10 stars)
20. pyloudnorm-eval - Evaluation of a number of loudness meter implementations (Python, 10 stars)
21. Cinuosity - Novel playlist generation and music discovery in Spotify (JavaScript, 9 stars)
22. mids - Implementation of content-based audio search algorithm (Python, 8 stars)
23. auxCord - Sync Spotify accounts to build tailored playlists (JavaScript, 7 stars)
24. youtube-audio-dl - Utility to automate download and normalization of YouTube audio streams (Python, 6 stars)
25. amida - Audio mixing interface for data acquisition (Python, 5 stars)
26. pyreqs - Easily build requirements.txt files automatically (Python, 4 stars)
27. machine-learning - Materials for the Machine Learning course at UPF-MTG (Winter 2019) (Jupyter Notebook, 4 stars)
28. consynthance - Studying consonance as a result of vocal similarity (Jupyter Notebook, 4 stars)
29. arte - Generative artwork created with canvas-sketch (JavaScript, 3 stars)
30. LDA-Music - LDA topic modeling of raw audio data for music suggestions (Python, 3 stars)
31. ML4AP - Slides for the talk "Applications of machine learning for assistive and creative audio plugins" (JavaScript, 3 stars)
32. cavae - Covert art variational autoencoder for generating new cover art (Python, 3 stars)
33. aes-presenters-145th - Analysis of papers and presenters at the 145th AES Convention in NYC (Python, 2 stars)
34. AudioTechTalks-S19 - Materials and associated code for audio technology talks at Clemson University - Spring 2019 (JavaScript, 2 stars)
35. aes-stats-147th - Analysis of papers from the 147th AES Convention in NYC (Python, 2 stars)
36. macOS-laptop - Setup script for config and installation on a fresh macOS machine (Shell, 2 stars)
37. tempnetic - Tempo estimation (Python, 2 stars)
38. sBucket - Build large Spotify playlists using user top tracks and seed track recommendations (Python, 1 star)
39. ev-sound-analysis - Analyzing audio from electric vehicles to determine FMVSS 141 compliance (Python, 1 star)
40. personal-website - Personal website built with Angular 7 and Bootstrap 4 (HTML, 1 star)
41. LoudnessHistory - An analysis of the perceived loudness of music over time (Python, 1 star)