PercepNet
Unofficial implementation of PercepNet, "A Perceptually-Motivated Approach for Low-Complexity, Real-Time Enhancement of Fullband Speech" (https://arxiv.org/abs/2008.04259).
Todo
- Pitch estimation
- Comb filter
- ERB band C++ implementation (a rough Python sketch of the band split follows this list)
- Feature (r, g, pitch, corr) generator (C++) for PyTorch
- DNN model (PyTorch)
- DNN model C++ implementation
- Pretrained model
- Postfiltering (done by @TeaPoly)
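As a point of reference for the ERB band item above, here is a minimal Python sketch of ERB-spaced triangular band energies. It is an assumption for illustration (34 bands equally spaced on an ERB-rate scale, following the paper's description), not the C++ ERBBand code in this repository.

```python
# Illustrative sketch only: triangular bands equally spaced on an ERB-rate scale,
# applied to a power spectrum. The repository's C++ ERBBand code is authoritative.
import numpy as np

def hz_to_erb_rate(f):
    # Glasberg & Moore ERB-rate approximation
    return 21.4 * np.log10(1.0 + 0.00437 * f)

def erb_rate_to_hz(e):
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

def erb_band_energies(power_spectrum, sample_rate=48000, n_bands=34):
    freqs = np.linspace(0.0, sample_rate / 2.0, len(power_spectrum))
    # Band edges equally spaced on the ERB-rate scale, converted back to Hz
    edges = erb_rate_to_hz(np.linspace(0.0, hz_to_erb_rate(sample_rate / 2.0), n_bands + 2))
    energies = np.zeros(n_bands)
    for b in range(n_bands):
        lo, mid, hi = edges[b], edges[b + 1], edges[b + 2]
        rise = np.clip((freqs - lo) / max(mid - lo, 1e-9), 0.0, 1.0)
        fall = np.clip((hi - freqs) / max(hi - mid, 1e-9), 0.0, 1.0)
        energies[b] = np.sum(np.minimum(rise, fall) * power_spectrum)
    return energies
```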
Requirements
- CMake
- SoX
- Python >= 3.6
- PyTorch
Prepare sampledata
- Download and synthesize the DNS-Challenge 2020 dataset before executing utils/run.sh for training.
git clone -b interspeech2020/master https://github.com/microsoft/DNS-Challenge.git
- Follow the usage instructions in the DNS-Challenge repo (https://github.com/microsoft/DNS-Challenge) on the interspeech2020/master branch. Modify the save directories in DNS-Challenge/noisyspeech_synthesizer.cfg so that speech and noise are written to sampledata/speech and sampledata/noise, respectively.
Build & Training
This repository has been tested on Ubuntu 20.04 (WSL2).
- Set up the CMake build environment
sudo apt-get install cmake
- Make the build directory and build
mkdir bin && cd bin
cmake ..
make -j
cd ..
- Generate features for training from the sample data
bin/src/percepNet sampledata/speech/speech.pcm sampledata/noise/noise.pcm 4000 test.output
- Convert the output binary to HDF5 (a rough sketch of this step is shown below)
python3 utils/bin2h5.py test.output training.h5
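A rough sketch of what this conversion amounts to, assuming the feature generator writes flat float32 frames. The per-frame width and the dataset name are placeholders; utils/bin2h5.py defines the actual layout.

```python
# Sketch only: pack a flat float32 binary into an HDF5 dataset.
# FRAME_WIDTH and the dataset name "data" are assumptions; see utils/bin2h5.py.
import numpy as np
import h5py

FRAME_WIDTH = 138  # placeholder per-frame float count (features + targets)

def bin_to_h5(bin_path, h5_path, frame_width=FRAME_WIDTH):
    data = np.fromfile(bin_path, dtype=np.float32)
    data = data[: (len(data) // frame_width) * frame_width].reshape(-1, frame_width)
    with h5py.File(h5_path, "w") as f:
        f.create_dataset("data", data=data)

# bin_to_h5("test.output", "training.h5")
```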
- Run training via utils/run.sh
cd utils
./run.sh
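For orientation, a heavily simplified version of what the training step does could look like the sketch below. The GRU model, the 70-input/68-target column split, and the plain MSE loss are illustrative assumptions; the actual network and loss used by run.sh live in this repository's Python code.

```python
# Simplified illustration of training on training.h5; not the repository's model.
import h5py
import torch
import torch.nn as nn

class TinyGRUModel(nn.Module):
    """Placeholder stand-in for the PercepNet DNN (conv + GRU layers in the paper)."""
    def __init__(self, in_dim=70, out_dim=68, hidden=256):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h, _ = self.gru(x)
        return torch.sigmoid(self.out(h))  # band gains/strengths lie in [0, 1]

with h5py.File("training.h5", "r") as f:
    data = torch.from_numpy(f["data"][:]).float()

features = data[:, :70].unsqueeze(0)   # assumed: first 70 columns are inputs
targets = data[:, 70:].unsqueeze(0)    # assumed: remaining columns are targets

model = TinyGRUModel(out_dim=targets.shape[-1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(features), targets)
    loss.backward()
    optimizer.step()
```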
- Dump the weights from PyTorch to a C++ header (sketched below)
python3 dump_percepnet.py model.pt
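A minimal sketch of such a dump, assuming model.pt holds either a pickled module or a plain state_dict whose entries are all tensors; the real conversion, including the exact header layout the C++ code expects, is handled by utils/dump_percepnet.py.

```python
# Sketch only: write each tensor of a saved model as a C float array in a header.
# The output header name and array naming are placeholders; see utils/dump_percepnet.py.
import torch

def dump_to_header(model_path, header_path):
    state = torch.load(model_path, map_location="cpu")
    if hasattr(state, "state_dict"):  # a pickled nn.Module rather than a dict
        state = state.state_dict()
    with open(header_path, "w") as f:
        f.write("/* auto-generated weights */\n\n")
        for name, tensor in state.items():
            values = tensor.detach().flatten().tolist()
            c_name = name.replace(".", "_")
            f.write(f"static const float {c_name}[{len(values)}] = {{\n")
            f.write(", ".join(f"{v:.8g}" for v in values))
            f.write("\n};\n\n")

# dump_to_header("model.pt", "nnet_data.h")  # output file name is hypothetical
```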
- Inference (rebuild so the dumped weights are compiled in, then run on a raw PCM file)
cd bin
cmake ..
make -j1
cd ..
bin/src/percepNet_run test_input.pcm percepnet_output.pcm
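test_input.pcm is assumed here to be headerless 16-bit mono PCM (the paper targets 48 kHz fullband speech, but check the source for the rate this binary expects). If your test audio is a WAV file, SoX or a small script along these lines can strip the header; the file names are illustrative.

```python
# Sketch only: turn a 16-bit mono WAV file into raw headerless PCM for the binary.
import wave

def wav_to_raw_pcm(wav_path, pcm_path):
    with wave.open(wav_path, "rb") as w:
        assert w.getnchannels() == 1 and w.getsampwidth() == 2, "expect 16-bit mono"
        frames = w.readframes(w.getnframes())
    with open(pcm_path, "wb") as f:
        f.write(frames)

# wav_to_raw_pcm("test_input.wav", "test_input.pcm")
```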
Acknowledgements
@jasdasdf, @sTarAnna, @cookcodes, @xyx361100238, @zhangyutf, @TeaPoly, @rameshkunasi, @OscarLiau, @YangangCao, Jaeyoung Yang
Reference
https://github.com/wil-j-wil/py_bank
https://github.com/dgaspari/pyrapt