

CDFI (Compression-Driven-Frame-Interpolation)

[Paper] [arXiv]

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021

News

An expanded version of the conference paper has been released on arXiv, which reveals technical details of the model compression based on layer-wise sparsity information obtained via optimization. That part of the code is available in the sub-directory extended_version.

Introduction

We propose a Compression-Driven network design for Frame Interpolation (CDFI) that leverages model compression to significantly reduce the model size (which also allows a better understanding of the current architecture) while making room for further improvements, ultimately achieving superior performance. Concretely, we first compress AdaCoF and show that a 10x compressed AdaCoF performs similarly to its original counterpart; we then improve upon this compressed model with simple modifications. Note that it is typically prohibitive to implement the same improvements on the original heavy model.

  • We achieve a significant performance gain with only a quarter of the size of the original AdaCoF

    Model               | Vimeo-90K (PSNR, SSIM, LPIPS) | Middlebury (PSNR, SSIM, LPIPS) | UCF101-DVF (PSNR, SSIM, LPIPS) | #Params
    AdaCoF              | 34.35, 0.956, 0.019           | 35.72, 0.959, 0.019            | 35.16, 0.950, 0.019            | 21.84M
    Compressed AdaCoF   | 34.10, 0.954, 0.020           | 35.43, 0.957, 0.018            | 35.10, 0.950, 0.019            | 2.45M
    AdaCoF+             | 34.56, 0.959, 0.018           | 36.09, 0.962, 0.017            | 35.16, 0.950, 0.019            | 22.93M
    Compressed AdaCoF+  | 34.44, 0.958, 0.019           | 35.73, 0.960, 0.018            | 35.13, 0.950, 0.019            | 2.56M
    Our Final Model     | 35.17, 0.964, 0.010           | 37.14, 0.966, 0.007            | 35.21, 0.950, 0.015            | 4.98M
  • Our final model also performs favorably against other state-of-the-art methods (see our paper for details)

  • The proposed framework is generic and can be easily transferred to other DNN-based frame interpolation methods

The GIF above is a demo of using our method to generate a slow-motion video, increasing the FPS from 5 to 160. We also provide a long video demonstration here (redirects to YouTube).

Environment

  • CUDA 11.0

  • python 3.8.3

  • torch 1.8.1+cu111

  • torchvision 0.9.1+cu111

  • cupy 7.7.0

  • scipy 1.7.3

  • numpy 1.21.5

  • Pillow 7.2.0

  • scikit-image 0.19.2

  • lpips

Installation

conda create -n cdfi python==3.8.3
conda activate cdfi
conda install -c conda-forge cupy==7.7.0
pip install torch==1.8.1+cu111 torchvision -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
pip install opencv-python lpips
conda install matplotlib scikit-image

Test Pre-trained Models

Download repository:

$ git clone https://github.com/tding1/CDFI.git
$ cd CDFI/

Testing data

For user convenience, we provide the Middlebury and UCF101-DVF test datasets in our repository; they can be found under the directory test_data/.

Evaluation metrics

We use the built-in functions in skimage.metrics to compute PSNR and SSIM, for which higher is better. We also use LPIPS to measure perceptual similarity, for which smaller is better.

Note: We use SqueezeNet when calculating LPIPS, while other works (SoftSplat, EDSC, etc.) might use different backbones in their original implementations, e.g., AlexNet. Although we manually test AdaCoF, EDSC, and CAIN under the same setting and report the results in the paper, there may be discrepancies with their original results; see also the discussion here.
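For reference, the snippet below is a minimal sketch of how these metrics can be computed for a single pair of frames, assuming both frames are loaded as HxWx3 uint8 NumPy arrays; the function and variable names are illustrative, not part of this repository:

import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# SqueezeNet backbone for LPIPS, matching the setting described above
lpips_model = lpips.LPIPS(net='squeeze')

def evaluate_pair(pred, gt):
    # PSNR and SSIM via skimage.metrics (higher is better)
    psnr = peak_signal_noise_ratio(gt, pred)
    ssim = structural_similarity(gt, pred, channel_axis=2)

    def to_tensor(img):
        # HxWx3 uint8 -> 1x3xHxW float tensor scaled to [-1, 1], as LPIPS expects
        return torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 127.5 - 1.0

    # LPIPS perceptual distance (lower is better)
    with torch.no_grad():
        lp = lpips_model(to_tensor(pred), to_tensor(gt)).item()
    return psnr, ssim, lp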

Test our pre-trained CDFI model

$ python mytest.py --gpu_id 0

By default, it will load our pre-trained model checkpoints/CDFI_adacof.pth. It will print the quantitative results on both Middlebury and UCF101-DVF, and the interpolated images will be saved under test_output/cdfi_adacof/.

Test the compressed AdaCoF

$ python mytest.py --gpu_id 0 --model compressed_adacof --kernel_size 5 --dilation 1

By default, it will load the compressed AdaCoF model checkpoints/compressed_adacof_F_5_D_1.pth. It will print the quantitative results on both Middlebury and UCF101-DVF, and the interpolated images will be saved under test_output/compressed_adacof_F_5_D_1/.

Test the compressed AdaCoF+

$ python mytest.py --gpu_id 0 --model compressed_adacof --kernel_size 11 --dilation 2

By default, it will load the compressed AdaCoF+ model checkpoints/compressed_adacof_F_11_D_2.pth. It will print the quantitative results on both Middlebury and UCF101-DVF, and the interpolated images will be saved under test_output/compressed_adacof_F_11_D_2/.

Interpolate two frames

$ python interpolate_twoframe.py --gpu_id 0 --first_frame imgs/0.png --second_frame imgs/1.png --output_frame ./output.png

By default, it will load our pre-trained model checkpoints/CDFI_adacof.pth, and generate the intermediate frame output.png given two consecutive frames in a sequence.

Interpolate video

$ python interpolate_video.py --gpu_id 0 --input_video imgs/img_seq/ --output_video ./interpolated_video

This script will interpolate a video sequence using our pre-trained model checkpoints/CDFI_adacof.pth, thus increasing the FPS by a factor of 2. You may want to repeat the procedure on the interpolated video if a higher FPS is desired.
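As a rough sketch of how such repetition could be scripted, assuming the directory written by --output_video contains an image sequence that can be fed back in as --input_video (the directory names and helper function below are illustrative):

import subprocess

def interpolate_repeatedly(input_dir, n_passes=2, gpu_id=0):
    # Each pass doubles the FPS, so n_passes gives a 2**n_passes increase in frame rate
    src = input_dir
    for i in range(n_passes):
        dst = f"./interpolated_video_x{2 ** (i + 1)}"
        subprocess.run([
            "python", "interpolate_video.py",
            "--gpu_id", str(gpu_id),
            "--input_video", src,
            "--output_video", dst,
        ], check=True)
        src = dst  # feed the interpolated frames into the next pass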

Train Our Model

Training data

We use the Vimeo-90K triplet dataset for the video frame interpolation task, which is relatively large (>32 GB).

$ wget http://data.csail.mit.edu/tofu/dataset/vimeo_triplet.zip
$ unzip vimeo_triplet.zip
$ rm vimeo_triplet.zip

Start training

$ python train.py --gpu_id 0 --data_dir path/to/vimeo_triplet/ --batch_size 8

It will generate a unique ID for each training run, and all intermediate results/records will be saved under model_weights/<training id>/ (or you can specify your own experiment ID using --uid). For a GPU with around 10 GB of memory, --batch_size can be at most 3; otherwise CUDA may run out of memory. Many other training options, e.g., --lr, --epochs, --loss and so on, can be found in train.py.

Apply CDFI to New Models

One nice thing about CDFI is that the framework can easily be applied to other (heavy) DNN models and potentially boost their performance. The key to CDFI is the optimization-based compression that compresses a model via fine-grained pruning. In particular, we use the efficient and easy-to-use sparsity-inducing optimizer OBPROXSG (see also the paper), and summarize the compression procedure for any other model in the list below (a minimal code sketch follows the list). For details, we recommend checking the long version of our paper on arXiv and the additional code in extended_version.

  • Copy the OBPROXSG optimizer, which is already implemented as a subclass of torch.optim.Optimizer, to your working directory
  • Starting from a pre-trained model, finetune its weights using the OBPROXSG optimizer, just as you would use any standard PyTorch built-in optimizer such as SGD or Adam
    • It is not necessary to use the full dataset for this finetuning process
  • The parameters for the OBPROXSG optimizer
    • lr: learning rate
    • lambda_: coefficient of the L1 regularization term
    • epochSize: number of batches in an epoch
    • Np: number of proximal steps, which is set to be 2 for pruning AdaCoF
    • No: number of orthant steps (key step to promote sparsity), for which we recommend using the default setting
    • eps: threshold for trimming zeros, which is set to be 0.0001 for pruning AdaCoF
  • After the optimization is done (either by reaching a maximum number of epochs or achieving a high sparsity), use the layer density as the compression ratio for that layer, as described in the paper
  • As an example of compressing AdaCoF with the above procedure, compare the architectures in models/adacof.py and models/compressed_adacof.py
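The snippet below is a minimal, non-authoritative sketch of the finetune-and-measure-density steps above. It assumes the OBPROXSG optimizer has been copied into the working directory as obproxsg.py and accepts the constructor arguments listed above; model, train_loader, and loss_fn are placeholders for your own setup:

from obproxsg import OBPROXSG  # assumption: optimizer file copied locally

def finetune_and_measure_density(model, train_loader, loss_fn,
                                 epochs=5, lr=1e-4, lambda_=1e-4, eps=1e-4):
    # Sparsity-inducing finetuning, used like any other PyTorch optimizer
    optimizer = OBPROXSG(model.parameters(), lr=lr, lambda_=lambda_,
                         epochSize=len(train_loader), Np=2)  # No left at its default
    for _ in range(epochs):
        for frame0, frame1, gt in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(frame0, frame1), gt)
            loss.backward()
            optimizer.step()
    # Layer density = fraction of weights whose magnitude exceeds the trimming
    # threshold eps; use it as the compression ratio when designing the compact model
    densities = {name: (param.abs() > eps).float().mean().item()
                 for name, param in model.named_parameters() if param.dim() > 1}
    return densities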

Now you are ready to make further improvements/modifications to the compressed model, based on an understanding of its flaws/drawbacks.

Citation

@article{ding2021cdfi,
  title={CDFI: Compression-Driven Network Design for Frame Interpolation},
  author={Ding, Tianyu and Liang, Luming and Zhu, Zhihui and Zharkov, Ilya},
  journal={arXiv preprint arXiv:2103.10559},
  year={2021}
}

or

@inproceedings{ding2021cdfi,
  title={CDFI: Compression-Driven Network Design for Frame Interpolation},
  author={Ding, Tianyu and Liang, Luming and Zhu, Zhihui and Zharkov, Ilya},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8001--8011},
  year={2021}
}

Acknowledgements

The code is largely based on HyeongminLEE/AdaCoF-pytorch and baowenbo/DAIN.