musdb

Python parser and tools for the MUSDB18 Music Separation Dataset

A Python package to parse and process the MUSDB18 dataset, the largest open-access dataset for music source separation. The tool was originally developed for the music separation task of the Signal Separation Evaluation Campaign (SiSEC).

Getting the data

musdb comes with automatically downloaded 7-second excerpts of the full dataset for quick evaluation or prototyping. The full dataset, however, needs to be downloaded via Zenodo and stored (unzipped) separately.

The dataset is hosted on Zenodo and requires users to request access, since the tracks can only be used for academic purposes. We check these requests manually. Please do not fill in the form multiple times; it usually takes less than a day to grant you access.

Installation and Setup

Package installation

You can install musdb using pip:

pip install musdb

Using STEMs (Default)

MUSDB18 is encoded in STEMS, a multitrack audio format that uses lossy compression. Internally, the musdb package relies on FFMPEG to decode the multi-stream files. For convenience, we developed a Python package called stempeg that makes it easy to parse the stem files and decode them on the fly. Since musdb depends on stempeg, you also need to install the FFMPEG library. The installation may differ among operating systems and Python distributions:

  • On Anaconda, you can install FFMPEG using conda install -c conda-forge ffmpeg.

Alternatively, you can install FFMPEG manually as follows:

  • on macOS, using homebrew: brew install ffmpeg
  • on Ubuntu/Debian: sudo apt-get install ffmpeg

Using WAV files (Optional)

If you want to use WAV files (e.g. for faster audio decoding), musdb also supports parsing and processing pre-decoded PCM/WAV files. musdb comes with a tool to convert a STEMS dataset into a WAV version. It can be run from the command line:

musdbconvert path/to/musdb-stems-root path/to/new/musdb-wav-root

If you don't want to use Python for this, we also provide Docker-based scripts to decode the dataset to WAV files.

When you use the decoded MUSDB, set the is_wav parameter when initializing the dataset.
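
A minimal sketch, assuming the WAV dataset created by the conversion command above:

import musdb
mus = musdb.DB(root="path/to/new/musdb-wav-root", is_wav=True)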

Usage

This package integrates nicely with your existing Python numpy, tensorflow, or pytorch code. Most projects that use musdb share the same first steps:

Setting up musdb

Import the musdb package in your main Python script and load the 7-second musdb preview tracks:

import musdb
mus = musdb.DB(download=True)
mus[0].audio

To use the full dataset, set the dataset root directory:

mus = musdb.DB(root="/path/to/musdb")

where root is the path to the MUSDB18 dataset root folder. The root parameter can also be set through a system environment variable: just export MUSDB_PATH=/path/to/musdb inside your bash environment. In that case, no arguments need to be passed to DB().
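
For example, assuming the dataset lives at /path/to/musdb, set the variable in your shell:

export MUSDB_PATH=/path/to/musdb

and then initialize the dataset without arguments:

import musdb
mus = musdb.DB()  # root is read from MUSDB_PATH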

Iterate over MUSDB18 tracks

Iterating over musdb and thus accessing the audio data is simple. Assume we have a supervised training method train(x, y) that takes the mixture as input and the vocals as output; we can then simply write:

for track in mus:
    train(track.audio, track.targets['vocals'].audio)

Tracks properties

The Track objects make it easy to process the audio and metadata in a Pythonic way (see the example after this list):

  • Track.name, the track name, consisting of Track.artist and Track.title.
  • Track.path, the absolute path of the mixture which might be handy to process with external applications.
  • Track.audio, the stereo mixture as a numpy array of shape (nb_samples, 2).
  • Track.rate, the sample rate of the mixture.
  • Track.sources, a dictionary of sources used for this track.
  • Track.stems, a numpy tensor of all five stereo sources of shape (5, nb_samples, 2). The stems are always in the following order: ['mixture', 'drums', 'bass', 'other', 'vocals'].
  • Track.targets, a dictionary of targets provided for this track. Note that for MUSDB, the sources and targets differ only in the existence of the accompaniment, which is the sum of all sources except the vocals. MUSDB supports the following targets: ['mixture', 'drums', 'bass', 'other', 'vocals', 'accompaniment', 'linear_mixture']. Some targets (such as accompaniment) are mixed dynamically on the fly.
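
A minimal sketch of inspecting these properties (shapes assume the full 44.1 kHz dataset; printed values are illustrative only):

track = mus[0]
print(track.name)         # e.g. "Artist - Title"
print(track.rate)         # 44100
print(track.audio.shape)  # (nb_samples, 2)
print(track.stems.shape)  # (5, nb_samples, 2)
print(track.targets['accompaniment'].audio.shape)  # mixed on the fly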

Processing training and testing subsets separately

We provide train and test subsets for machine learning methods:

mus_train = musdb.DB(subsets="train")
mus_test = musdb.DB(subsets="test")
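
For the full MUSDB18 dataset, this yields 100 training and 50 test tracks:

print(len(mus_train.tracks), len(mus_test.tracks))  # 100 50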

Use train / validation split

If you want to access individual tracks, you can index the mus tracks list, e.g. mus[2:]. To foster reproducible research, we provide a fixed validation split of the training subset.

mus_train = musdb.DB(subsets="train", split='train')
mus_valid = musdb.DB(subsets="train", split='valid')

The list of validation tracks can be edited using the mus.setup['validation_tracks'] object.
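
For example, to inspect which tracks make up the fixed validation split (a sketch, using mus_train from above):

print(mus_train.setup['validation_tracks'])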

Training Deep Neural Networks with musdb

Writing an efficient dataset generator varies across different deep learning frameworks. A very simple, naïve generator that

  • draws random tracks with replacement
  • draws random chunks of fixed length with replacement

can easily be implemented using musdb's track.chunk_start and track.chunk_duration properties, which seek efficiently to the start position (given in seconds) without first loading the full audio into memory. A minimal sketch, wrapped in a generator function:

import random

def generate_random_chunks(mus, duration=5.0):
    # infinite generator: draw a random track, then a random fixed-length chunk
    while True:
        track = random.choice(mus.tracks)
        track.chunk_duration = duration
        track.chunk_start = random.uniform(0, track.duration - track.chunk_duration)
        x = track.audio.T                    # mixture, shape (2, nb_samples)
        y = track.targets['vocals'].audio.T  # vocals target
        yield x, y
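
The generator can then be used to draw training pairs (assuming mus_train from above):

gen = generate_random_chunks(mus_train)
x, y = next(gen)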

Evaluation

To evaluate a musdb track using the popular BSSEval metrics, you can use our museval package. After pip install museval, evaluating a single track can be done as follows:

import numpy as np
import museval

# provide estimates (here: random noise as a placeholder)
estimates = {
    'vocals': np.random.random(track.audio.shape),
    'accompaniment': np.random.random(track.audio.shape)
}
# evaluates using BSSEval v4 and writes results to `./eval`
print(museval.eval_mus_track(track, estimates, output_dir="./eval"))

Baselines

Oracles

For oracle methods, please check out our open-unmix oracle separation methods. They show how oracle performance is computed and indicate an upper bound for the quality of the separation.

Open-Unmix

We provide a state-of-the-art deep learning based separation method for PyTorch, TensorFlow, and NNabla at open.unmix.app.

Frequently Asked Questions

The mixture is not exactly the sum of its sources, is that intended?

This is not a bug. Since we adopted the STEMS format, the stems are AAC-compressed, and the residual coding noise of the mixture differs from the sum of the residual noises of the individual sources. This difference does not significantly affect separation performance. If you need an exact linear mixture of the sources, use the linear_mixture target:

track.targets['linear_mixture'].audio

Citations

If you use the MUSDB18 dataset for your research, cite the MUSDB18 dataset:

@misc{MUSDB18,
  author       = {Rafii, Zafar and
                  Liutkus, Antoine and
                  St{\"o}ter, Fabian-Robert and
                  Mimilakis, Stylianos Ioannis and
                  Bittner, Rachel},
  title        = {The {MUSDB18} corpus for music separation},
  month        = dec,
  year         = 2017,
  doi          = {10.5281/zenodo.1117372},
  url          = {https://doi.org/10.5281/zenodo.1117372}
}

If you compare your results with SiSEC 2018 participants, cite the SiSEC 2018 LVA/ICA paper:

@inproceedings{SiSEC18,
  author="St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title="The 2018 Signal Separation Evaluation Campaign",
  booktitle="Latent Variable Analysis and Signal Separation:
  14th International Conference, LVA/ICA 2018, Surrey, UK",
  year="2018",
  pages="293--305"
}

How to contribute

musdb is a community-focused project; we therefore encourage the community to submit bug fixes and requests for technical support through GitHub issues. For details on how to contribute, please follow our CONTRIBUTING.md.

License

MIT
