torchaudio: an audio library for PyTorch
The aim of torchaudio is to apply PyTorch to the audio domain. By supporting PyTorch, torchaudio follows the same philosophy of providing strong GPU acceleration, having a focus on trainable features through the autograd system, and having consistent style (tensor names and dimension names). It is therefore primarily a machine learning library, not a general signal processing library. Because all computations are expressed through PyTorch operations, torchaudio is easy to use and feels like a natural extension of PyTorch.
- Support audio I/O (Load files, Save files)
  - Load a variety of audio formats, such as wav, mp3, ogg, flac, opus, sphere, into a torch Tensor using SoX
  - Kaldi (ark/scp)
- Dataloaders for common audio datasets
- Common audio transforms (see the sketch after this list)
- Compliance interfaces: Run code using PyTorch that aligns with other libraries
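As a rough sketch of how the dataset loaders and transforms fit together (assuming the YESNO dataset and the MelSpectrogram transform from the current API; check the API reference for the exact signatures in your version):

import torchaudio
import torchaudio.transforms as T

# Download a small public dataset and fetch one example.
dataset = torchaudio.datasets.YESNO(root='./data', download=True)
waveform, sample_rate, labels = dataset[0]

# Transforms are torch.nn.Module objects, so they compose with models,
# run on GPU, and support autograd.
mel = T.MelSpectrogram(sample_rate=sample_rate, n_mels=80)
features = mel(waveform)  # shape: (channel, n_mels, time)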
Installation
Please refer to https://pytorch.org/audio/main/installation.html for the installation and build process of TorchAudio.
Quick Usage
import torchaudio
waveform, sample_rate = torchaudio.load('foo.wav') # load tensor from file
torchaudio.save('foo_save.wav', waveform, sample_rate) # save tensor to file
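A small follow-on sketch (assuming the same 'foo.wav' as above): torchaudio.info reads file metadata without loading the whole waveform, and torchaudio.functional.resample changes the sample rate.

import torchaudio
import torchaudio.functional as F

metadata = torchaudio.info('foo.wav')              # sample_rate, num_frames, num_channels, ...
waveform, sample_rate = torchaudio.load('foo.wav')
resampled = F.resample(waveform, orig_freq=sample_rate, new_freq=16000)  # resample to 16 kHz
torchaudio.save('foo_16k.wav', resampled, 16000)   # save the resampled tensor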
Backend Dispatch
By default on macOS and Linux, torchaudio uses SoX as the backend to load and save files. The backend can be changed to SoundFile as follows (see SoundFile for installation instructions).
import torchaudio
torchaudio.set_audio_backend("soundfile") # switch backend
waveform, sample_rate = torchaudio.load('foo.wav') # load tensor from file, as usual
torchaudio.save('foo_save.wav', waveform, sample_rate) # save tensor to file, as usual
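To check which backends are available in your build before switching, a minimal sketch (assuming list_audio_backends and get_audio_backend are present in your torchaudio version):

import torchaudio

print(torchaudio.list_audio_backends())  # e.g. ['sox_io', 'soundfile'], depending on platform
print(torchaudio.get_audio_backend())    # the backend currently in use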
Note
- SoundFile currently does not support mp3.
- "soundfile" backend is not supported by TorchScript.
API Reference
API Reference is located here: http://pytorch.org/audio/main/
Contributing Guidelines
Please refer to CONTRIBUTING.md
Citation
If you find this package useful, please cite as:
@article{yang2021torchaudio,
title={TorchAudio: Building Blocks for Audio and Speech Processing},
author={Yao-Yuan Yang and Moto Hira and Zhaoheng Ni and Anjali Chourdia and Artyom Astafurov and Caroline Chen and Ching-Feng Yeh and Christian Puhrsch and David Pollack and Dmitriy Genzel and Donny Greenberg and Edward Z. Yang and Jason Lian and Jay Mahadeokar and Jeff Hwang and Ji Chen and Peter Goldsborough and Prabhat Roy and Sean Narenthiran and Shinji Watanabe and Soumith Chintala and Vincent Quenneville-Bélair and Yangyang Shi},
journal={arXiv preprint arXiv:2110.15018},
year={2021}
}
Disclaimer on Datasets
This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.
If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
Pre-trained Model License
The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.
For instance, the SquimSubjective model is released under the Creative Commons Attribution Non Commercial 4.0 International (CC-BY-NC 4.0) license. See the link for additional details.
Other pre-trained models that have different licenses are noted in the documentation. Please check out the documentation page.