Video Frame Synthesis using Deep Voxel Flow
We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation) or subsequent to them (extrapolation). Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. Deep Voxel Flow (DVF) is efficient and can be applied at any video resolution. We demonstrate that our method produces results that improve upon the state-of-the-art, both quantitatively and qualitatively.
Note: please contact Dr. Xiaoxiao Li ([email protected]) for the pre-trained models of "Deep Voxel Flow".
Other Implementations
- pytorch-voxel-flow: a newly released PyTorch re-implementation, which we encourage you to check out.
Overview
Deep Voxel Flow (DVF) is the author's re-implementation of the video frame synthesizer described in:
"Video Frame Synthesis using Deep Voxel Flow"
Ziwei Liu, Raymond A. Yeh, Xiaoou Tang, Yiming Liu, Aseem Agarwala (CUHK & UIUC & Google Research)
in International Conference on Computer Vision (ICCV) 2017, Oral Presentation
For further information, please contact Ziwei Liu.
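For intuition about what the model computes: DVF predicts, for every output pixel, a voxel flow vector (a spatial displacement plus a temporal blend weight) and synthesizes the new frame by trilinear sampling of the space-time volume spanned by the two input frames. The NumPy sketch below restates only that sampling step; it is not the repository's TensorFlow layer, and the symmetric-offset convention, border clamping, and function names are our assumptions.

```python
# Illustrative NumPy sketch of voxel-flow frame synthesis (NOT the repo's
# TensorFlow op). Conventions and names here are assumptions for exposition.
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (H, W, C) at float coordinates with bilinear weights.
    Coordinates are crudely clamped near the image border for simplicity."""
    h, w = img.shape[:2]
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx = (x - x0)[..., None]
    dy = (y - y0)[..., None]
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def synthesize_frame(frame0, frame1, flow, t_weight):
    """Blend backward-warped copies of the two input frames.

    flow:     (H, W, 2) spatial components (u, v) of the voxel flow
    t_weight: (H, W) temporal component; 0 favors frame0, 1 favors frame1
    """
    h, w = frame0.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    u, v = flow[..., 0], flow[..., 1]
    # Sample the two frames at symmetric offsets along the flow, then blend
    # with the temporal weight -- together this amounts to trilinear
    # interpolation in the space-time volume of the input pair.
    warped0 = bilinear_sample(frame0, xs - u, ys - v)
    warped1 = bilinear_sample(frame1, xs + u, ys + v)
    wt = t_weight[..., None]
    return (1.0 - wt) * warped0 + wt * warped1
```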
Requirements
Data Preparation
- Train/Test Split
- Training Data: extract frame triplets with obvious motion from UCF101 (a sketch of this step follows the list).
- Testing Data
- Motion Masks
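The released train/test split above should be used to reproduce the paper's numbers. Purely as an illustration of the triplet-extraction idea, the sketch below keeps triplets whose endpoint frames differ noticeably; the threshold, stride, and function name are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch of extracting frame triplets with obvious motion
# (illustrative only; use the released train/test split for the paper's setup).
import cv2
import numpy as np

def extract_triplets(video_path, motion_thresh=10.0, stride=3):
    """Yield (f1, f2, f3) triplets whose endpoints differ by more than
    motion_thresh in mean absolute intensity, a crude motion proxy."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    for i in range(0, len(frames) - 2, stride):
        f1, f2, f3 = frames[i], frames[i + 1], frames[i + 2]
        motion = np.mean(np.abs(f3.astype(np.float32) - f1.astype(np.float32)))
        if motion > motion_thresh:  # keep only triplets with obvious motion
            yield f1, f2, f3
```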
Getting started
- Run the training script:
python voxel_flow_train.py --subset=train
- Run the testing script:
python voxel_flow_train.py --subset=test
- Run the evaluation script (in MATLAB):
eval_voxelflow.m
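eval_voxelflow.m is the authoritative evaluation script. For orientation, the sketch below shows a masked PSNR in Python, assuming, per the paper's protocol, that error is measured only inside the provided motion masks; the function name and interface are our own.

```python
# Illustrative masked PSNR (eval_voxelflow.m remains the reference
# implementation; this Python form and its interface are assumptions).
import numpy as np

def masked_psnr(pred, target, mask, max_val=255.0):
    """PSNR over pixels where the (H, W) motion mask is nonzero."""
    m = mask.astype(bool)
    if not m.any():
        return float("nan")  # no motion pixels to score
    diff = pred.astype(np.float64)[m] - target.astype(np.float64)[m]
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```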
License and Citation
The use of this software is RESTRICTED to non-commercial research and educational purposes.
@inproceedings{liu2017voxelflow,
author = {Ziwei Liu and Raymond A. Yeh and Xiaoou Tang and Yiming Liu and Aseem Agarwala},
title = {Video Frame Synthesis using Deep Voxel Flow},
booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
month = {October},
year = {2017}
}
Disclaimer
This is not an official Google product.