  • Stars: 302 (rank 137,202, top 3%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 4 years ago
  • Updated: about 2 months ago



Contrastive Learning of Musical Representations

PyTorch implementation of Contrastive Learning of Musical Representations by Janne Spijkervet and John Ashley Burgoyne.

Open In Colab · arXiv · Supplementary Material

You can run a pre-trained CLMR model directly in your browser using ONNX Runtime: here.

In this work, we introduce SimCLR to the music domain and contribute a large chain of audio data augmentations to form a simple framework for self-supervised learning on raw waveforms of music: CLMR. We evaluate the performance of the self-supervised learned representations on the task of music classification.

  • We achieve competitive results on the MagnaTagATune and Million Song Datasets relative to fully supervised training, despite only using a linear classifier on self-supervised learned representations, i.e., representations that were learned task-agnostically without any labels.
  • CLMR enables efficient classification: with only 1% of the labeled data, we achieve similar scores compared to using 100% of the labeled data.
  • CLMR is able to generalise to out-of-domain datasets: when training on entirely different music datasets, it is still able to perform competitively compared to fully supervised training on the target dataset.
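Concretely, a SimCLR-style framework contrasts two augmented views of the same clip against all other clips in the batch using the NT-Xent (normalized temperature-scaled cross-entropy) loss. A minimal NumPy sketch of that loss, for illustration only (not the repository's code):

```python
import numpy as np

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent loss over a batch: z_i, z_j are (batch, dim) embeddings
    of two augmented views of the same clips."""
    z = np.concatenate([z_i, z_j], axis=0)            # (2N, dim)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity space
    sim = z @ z.T / temperature                       # pairwise similarity logits
    n = z_i.shape[0]
    # the positive for row k is its other view: k+N for the first half, k-N after
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # cross-entropy with the positive pair as the target class
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
loss_hard = nt_xent_loss(a, rng.normal(size=(4, 8)))             # unrelated views
loss_easy = nt_xent_loss(a, a + 0.01 * rng.normal(size=(4, 8)))  # near-identical views
```

As expected, the loss is lower when the two views of each clip embed close together, which is exactly what pre-training encourages.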

This is the CLMR v2 implementation; for the original implementation, see the v1 branch.

[Figure: An illustration of the CLMR model.]

This repository relies on my SimCLR implementation, which can be found here and on my torchaudio-augmentations package, found here.
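The augmentation chain produces two randomized "views" of each waveform. A self-contained NumPy sketch of that idea, where the transform names are illustrative stand-ins rather than the torchaudio-augmentations API:

```python
import numpy as np

# Illustrative stand-ins for the kinds of waveform transforms an audio
# augmentation chain composes (polarity inversion, gain, additive noise, ...).
def random_polarity(x, rng):
    return x * rng.choice([-1.0, 1.0])

def random_gain(x, rng, low_db=-6.0, high_db=0.0):
    return x * 10 ** (rng.uniform(low_db, high_db) / 20)

def add_noise(x, rng, snr_db=30.0):
    noise = rng.normal(size=x.shape)
    scale = np.sqrt((x ** 2).mean() / (10 ** (snr_db / 10) * (noise ** 2).mean()))
    return x + scale * noise

def augment(x, rng):
    """Apply the chain; two calls yield two different 'views' of the same clip."""
    for transform in (random_polarity, random_gain, add_noise):
        x = transform(x, rng)
    return x

rng = np.random.default_rng(42)
clip = np.sin(2 * np.pi * 440 * np.arange(22050) / 22050)  # 1 s of A4 at 22,050 Hz
view_1, view_2 = augment(clip, rng), augment(clip, rng)
```

Each transform preserves the waveform's shape, so the two stochastic views stay aligned in time but differ in content, which is what the contrastive objective needs.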

Quickstart

git clone https://github.com/spijkervet/clmr.git && cd clmr

pip3 install -r requirements.txt
# or
python3 setup.py install

The following commands download MagnaTagATune, preprocess it, and start self-supervised pre-training on 1 GPU (with 8 parallel CPU workers), followed by linear evaluation:

python3 preprocess.py --dataset magnatagatune

# add --workers 8 to increase the number of parallel CPU threads to speed up online data augmentations + training.
python3 main.py --dataset magnatagatune --gpus 1 --workers 8

python3 linear_evaluation.py --gpus 1 --workers 8 --checkpoint_path [path to checkpoint.pt, usually in ./runs]

Pre-train on your own folder of audio files

Simply run the following command to pre-train the CLMR model on a folder containing .wav files (or .mp3 files, after editing src_ext_audio=".mp3" in clmr/datasets/audio.py). You may need to convert your audio files to the correct sample rate before feeding them to the encoder, which expects 22,050 Hz by default.

python preprocess.py --dataset audio --dataset_dir ./directory_containing_audio_files

python main.py --dataset audio --dataset_dir ./directory_containing_audio_files
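If your files are at a different rate, resample them to 22,050 Hz first, e.g. with `ffmpeg -i input.mp3 -ar 22050 output.wav` or a dedicated resampling library. For intuition, here is a naive linear-interpolation sketch in NumPy (fine as an illustration; proper resamplers also apply an anti-aliasing filter):

```python
import numpy as np

def resample_linear(x, sr_in, sr_out=22050):
    """Resample a 1-D waveform by linear interpolation (illustrative only)."""
    n_out = int(round(len(x) * sr_out / sr_in))
    t_out = np.linspace(0, len(x) - 1, n_out)  # fractional sample positions
    return np.interp(t_out, np.arange(len(x)), x)

x_44k = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # 1 s at 44.1 kHz
x_22k = resample_linear(x_44k, 44100)  # 22,050 samples for one second of audio
```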

Results

MagnaTagATune

| Encoder / Model | Batch-size / epochs | Fine-tune head | ROC-AUC | PR-AUC |
|---|---|---|---|---|
| SampleCNN / CLMR | 48 / 10000 | Linear Classifier | 88.7 | 35.6 |
| SampleCNN / CLMR | 48 / 10000 | MLP (1 extra hidden layer) | 89.3 | 36.0 |
| SampleCNN (fully supervised) | 48 / - | - | 88.6 | 34.4 |
| Pons et al. (fully supervised) | 48 / - | - | 89.1 | 34.92 |

Million Song Dataset

| Encoder / Model | Batch-size / epochs | Fine-tune head | ROC-AUC | PR-AUC |
|---|---|---|---|---|
| SampleCNN / CLMR | 48 / 1000 | Linear Classifier | 85.7 | 25.0 |
| SampleCNN (fully supervised) | 48 / - | - | 88.4 | - |
| Pons et al. (fully supervised) | 48 / - | - | 87.4 | 28.5 |

Pre-trained models

The links below download the corresponding pre-trained checkpoints.

| Encoder (batch-size, epochs) | Fine-tune head | Pre-train dataset | ROC-AUC | PR-AUC |
|---|---|---|---|---|
| SampleCNN (96, 10000) | Linear Classifier | MagnaTagATune | 88.7 (89.3) | 35.6 (36.0) |
| SampleCNN (48, 1550) | Linear Classifier | MagnaTagATune | 87.71 (88.47) | 34.27 (34.96) |

Training

1. Pre-training

Simply run the following command to pre-train the CLMR model on the MagnaTagATune dataset.

python main.py --dataset magnatagatune

2. Linear evaluation

To test a trained model, set the checkpoint_path variable in config/config.yaml, or specify it as an argument:

python linear_evaluation.py --checkpoint_path ./clmr_checkpoint_10000.pt
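Linear evaluation fits only a linear classifier on top of the frozen encoder's representations. A toy NumPy sketch of that protocol on synthetic features (all shapes and data here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 512, 32, 4                       # clips, feature dim, number of tags
features = rng.normal(size=(n, d))         # stand-in for frozen encoder outputs
true_w = rng.normal(size=(d, k))
labels = (features @ true_w > 0).astype(float)  # synthetic multi-label tags

# Fit one linear layer with plain gradient descent on the binary
# cross-entropy loss; the "encoder" (features) never changes.
w = np.zeros((d, k))
for _ in range(300):
    p = 1 / (1 + np.exp(-(features @ w)))  # per-tag sigmoid probabilities
    w -= 0.1 * features.T @ (p - labels) / n

acc = ((features @ w > 0) == labels.astype(bool)).mean()
print(f"probe accuracy on training features: {acc:.2f}")
```

If the frozen representations are good, even this single linear layer separates the tags well, which is the point the linear-evaluation results above are making.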

Configuration

The training configuration can be found in config/config.yaml. I personally prefer files over long strings of arguments when configuring a run. Every entry in the config file can be overridden with the corresponding flag (e.g. --max_epochs 500 if you would like to train for 500 epochs).
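The file-plus-flag-override pattern can be sketched with the standard library alone (the keys and loader below are illustrative, not the repo's actual code):

```python
import argparse

# Defaults as they would come out of a YAML config file (illustrative keys).
defaults = {"max_epochs": 10000, "batch_size": 48, "learning_rate": 3e-4}

# Every config entry becomes a CLI flag of the same name; a flag that is
# actually passed wins over the file value.
parser = argparse.ArgumentParser()
for key, value in defaults.items():
    parser.add_argument(f"--{key}", type=type(value), default=value)

args = parser.parse_args(["--max_epochs", "500"])  # user overrides one entry
config = vars(args)
print(config["max_epochs"], config["batch_size"])  # 500 48
```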

Logging and TensorBoard

To view results in TensorBoard, run:

tensorboard --logdir ./runs

More Repositories

1. SimCLR (Python, 742 stars): PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations by T. Chen et al.
2. torchaudio-augmentations (Python, 216 stars): Audio transformations library for PyTorch.
3. BYOL (Python, 128 stars): Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning.
4. eurovision-dataset (Python, 86 stars): The Eurovision Song Contest Dataset, a freely available dataset containing audio features, metadata, contest rankings, and voting data for 1,735 songs that competed in the Eurovision Song Contests between 1956 and 2023.
5. contrastive-predictive-coding (Python, 80 stars): PyTorch implementation of Representation Learning with Contrastive Predictive Coding by Van den Oord et al. (2018).
6. godfather (JavaScript, 30 stars): The Godfather resource for GTA:Network's online modification for GTA:V. The mod can be downloaded at https://gtanet.work.
7. Context-Aware-Sequential-Recommendation (Python, 11 stars): Code for the Context-Aware Sequential Recommendation project for the Information Retrieval 2 course at the University of Amsterdam.
8. crypto-data-scraper (Python, 10 stars): Crypto data scraper using WebSockets and MongoDB to receive real-time data from cryptocurrency exchanges and save it for historical analysis (machine learning, etc.).
9. gpt-2-lyrics (Python, 6 stars): Using GPT-2 to generate lyrics.
10. midi-controller (JavaScript, 5 stars): MIDI controller made with React and Flask, for use with Ableton or other DAWs.
11. atom-latex-online (JavaScript, 3 stars): Atom LaTeX Online package.
12. thesis (TeX, 3 stars): My Master's thesis.
13. sat_sudoku_solver (Jupyter Notebook, 2 stars): SAT solver for Sudokus, for the UvA MSc AI course Knowledge Representation.
14. flask-socketio-bootstrap4-boilerplate (JavaScript, 2 stars): Boilerplate for a Flask web server with SocketIO and Bootstrap 4 integrated.
15. global_food_prices (HTML, 2 stars): Data visualization project for UvA on the Global Food Prices dataset.
16. weebo (JavaScript, 2 stars): An intelligent personal assistant inspired by the Weebo robot from the 1997 movie Flubber.
17. search_engine (JavaScript, 2 stars): Search engine for arXiv submissions.
18. qualitative_reasoning (Python, 2 stars): Qualitative Reasoning assignment, VU.
19. personal-website (JavaScript, 1 star): My personal website, written with the Gatsby framework and a Ghost backend.
20. dutch_jurisdiction_elastic_search (Python, 1 star): Elasticsearch for the Dutch jurisdiction archive (rechtspraak.nl).
21. juce-simple-eq (C++, 1 star): Simple EQ made in JUCE 6.
22. SETUP-smartlappen (HTML, 1 star): SETUP x Smartlappen project.
23. homelab (Shell, 1 star): My homelab, built on Docker.
24. ai-music-presentation (Jupyter Notebook, 1 star): Presentation on Music and AI (Mon 22 January 2018).