  • Stars: 296
  • Rank: 140,464 (Top 3%)
  • Language: Python
  • Created: over 4 years ago
  • Updated: almost 4 years ago


Repository Details

Enforcing temporal consistency in real-time per-frame semantic video segmentation

ECCV2020: Efficient Semantic Video Segmentation with Per-frame Inference

In semantic segmentation, most existing real-time deep models are trained on each frame independently and may produce inconsistent results across a video sequence. More advanced methods take the correlations within the video sequence into account, e.g., by propagating results to neighboring frames using optical flow or by extracting frame representations from other frames, which may lead to inaccurate results or unbalanced latency. In this work, we perform efficient semantic video segmentation in a per-frame fashion during inference.

Different from previous per-frame models, we explicitly treat the temporal consistency among frames as an extra constraint during training and embed it into the segmentation network. Therefore, at inference time we can process each frame independently with no latency, and improve temporal consistency with no extra computational cost or post-processing. We employ compact models for real-time execution. To narrow the performance gap between compact models and large models, new knowledge distillation methods are designed. Our method outperforms previous keyframe-based methods with a better trade-off between accuracy and inference speed on popular benchmarks, including Cityscapes and CamVid. Temporal consistency is also improved compared with the corresponding baselines trained on each frame independently.
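The training-time constraint can be illustrated with a small sketch (an illustration of the idea only, not the exact loss in this repository): the previous frame's soft prediction is warped to the current frame with optical flow, and the current prediction is encouraged to agree with it. The names warp_with_flow and temporal_consistency_loss, and the use of a KL term, are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(x, flow_t_to_prev):
    """Backward-warp a map from the previous frame into the current frame.

    x:               (N, C, H, W) soft prediction of the previous frame
    flow_t_to_prev:  (N, 2, H, W) flow mapping each current-frame pixel to
                     its location in the previous frame
    """
    n, _, h, w = x.shape
    ys = torch.arange(h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w).float()
    xs = torch.arange(w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w).float()
    grid = torch.cat((xs, ys), dim=1) + flow_t_to_prev   # sampling positions in pixels
    # Normalize to [-1, 1] as required by grid_sample (x first, then y).
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(x, torch.stack((grid_x, grid_y), dim=-1), align_corners=True)

def temporal_consistency_loss(prob_t, prob_prev, flow_t_to_prev):
    """Penalize disagreement between the current prediction and the warped previous one."""
    warped_prev = warp_with_flow(prob_prev, flow_t_to_prev).clamp(min=1e-8)
    return F.kl_div(prob_t.clamp(min=1e-8).log(), warped_prev, reduction="batchmean")
```

During training such a term would be added to the usual per-frame cross-entropy; in practice one would also mask out occluded regions where the flow is unreliable.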

This repository contains the demo evaluation code and the training scripts for the motion loss of our paper (ECCV 2020) Efficient Semantic Video Segmentation with Per-frame Inference.

Update

We generated new pseudo labels with test-time augmentation on the video sequences in data-coarse. In addition, we filter out regions with low confidence. Trained with the provided dataset, most models achieve better results in both mIoU and temporal consistency. The pseudo labels can be downloaded from this link.
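For illustration, filtering low-confidence regions from pseudo labels can be as simple as thresholding the per-pixel confidence and marking uncertain pixels with the ignore label; the threshold of 0.9 and the function name below are assumptions, not the settings used to build the released data.

```python
import torch

def make_filtered_pseudo_labels(prob, threshold=0.9, ignore_label=255):
    """Convert soft predictions (N, C, H, W) into hard pseudo labels,
    marking low-confidence pixels with the ignore label (255 in Cityscapes)."""
    conf, label = prob.max(dim=1)            # per-pixel confidence and predicted class
    label[conf < threshold] = ignore_label   # excluded from the training loss
    return label
```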

Sample results

Demo video for the PSPnet-18 on Cityscapes

ETC with mIoU 73.1 and temporal consistency 70.56 vs. the baseline with mIoU 69.79 and temporal consistency 68.50:

[demo images comparing ETC with the baseline]

Performance on the Cityscapes dataset

We employ the temporal loss and the temporal knowledge distillation methods to adapt single-frame image segmentation models to semantic video segmentation.

| Model                          | mIoU  | Temporal consistency |
|--------------------------------|-------|----------------------|
| baseline                       | 69.79 | 68.50                |
| + temporal loss                | 71.72 | 69.99                |
| + temporal loss + distillation | 73.06 | 70.56                |

Note: Other checkpoints can be obtained by email ([email protected]) if needed.

Requirement

Python 3.5

PyTorch > 1.0.0

We recommend using Anaconda.

We have tested our code on Ubuntu 16.04.

  • The FlowNet needs to be compiled following FlowNetV2.
  • You can first clone FlowNetV2 and compile it. Note that your CUDA version [nvcc -V] should be the same as your torch CUDA version [torch.version.cuda]; a quick check is sketched below this list.
  • Then copy the folders flownet2-pytorch/networks/resample2d_package, correlation_package, and channelnorm_package to OURS/flownet/.
  • Download the FlowNet weights and place them in OURS/pretrained_model/. To train the model, you also need to install apex.
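A minimal check of the version match mentioned above (assuming nvcc is on your PATH; this helper is not part of the repository):

```python
# Compare the system CUDA toolkit (used by nvcc when compiling the FlowNet
# extensions) with the CUDA version PyTorch was built against.
import re
import subprocess
import torch

nvcc_out = subprocess.run(["nvcc", "-V"], stdout=subprocess.PIPE,
                          universal_newlines=True).stdout
nvcc_release = re.search(r"release\s+([\d.]+)", nvcc_out).group(1)
print("nvcc CUDA release :", nvcc_release)
print("torch.version.cuda:", torch.version.cuda)
assert nvcc_release == torch.version.cuda, "CUDA versions differ; compilation may fail"
```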

Quick start to train the model

  1. Run python tool/train_with_flow.py

Quick start to test the model

  1. Download the Cityscapes dataset.
  2. Run python tool/demo.py

Evaluating the Temporal Consistency

To evaluate the temporal consistency, you need to install FlowNet first.

  1. Download the video data of Cityscapes: leftImg8bit_sequence_trainvaltest.zip
  2. The downloaded data should be placed in data/cityscapes/leftImg8bit/
  3. Generate the results for the sampled frames that need to be evaluated: python tool/gen_video.py
  4. Evaluate the temporal consistency based on the warping mIoU (a simplified sketch of the metric is given after the note below): python tool/eval_tc.py

Note that the first time you evaluate the TC, the code will save the flow automatically.
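Conceptually, the warping mIoU warps the previous frame's predicted labels to the current frame with optical flow and measures the mIoU between the warped and the current label maps. The sketch below reuses the warp_with_flow helper from the earlier sketch; it is not the exact code in tool/eval_tc.py, ignores occlusion handling, and assumes the label maps contain only valid class ids.

```python
import torch
import torch.nn.functional as F

def warping_miou(label_t, label_prev, flow_t_to_prev, num_classes=19):
    """Temporal consistency for one frame pair: warp the previous predicted
    label map to the current frame and compute mIoU against the current one.
    label_t, label_prev: (N, H, W) long tensors of class ids in [0, num_classes)."""
    prev_1hot = F.one_hot(label_prev, num_classes).permute(0, 3, 1, 2).float()
    warped_label = warp_with_flow(prev_1hot, flow_t_to_prev).argmax(dim=1)
    ious = []
    for c in range(num_classes):
        pred, prev = (label_t == c), (warped_label == c)
        union = (pred | prev).sum().item()
        if union > 0:
            ious.append((pred & prev).sum().item() / union)
    return sum(ious) / max(len(ious), 1)
```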

  • In our paper, we randomly sample ~20% of the validation set to test the TC of all models for efficiency (lists are in 'data/list/cityscapes/val_sam').
  • If you want to evaluate on all the validation video clips, you can replace 'data/list/cityscapes/val_video_img_sam.lst' with 'data/list/cityscapes/val_video_img.lst', and replace 'data/list/cityscapes/val_sam' with 'data/list/cityscapes/val'. The tendency of the TC is similar.

Please change the ckpt_path in the config to compare the results with the baseline models.

Train script

We only release the training code for the motion loss. If you are interested in the temporal knowledge distillation, you can refer to ./utils/crit.py; all the distillation losses are included. The distillation method is a plug-in method that can be applied to different teacher-student network pairs.
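As a rough illustration of what such a plug-in distillation term can look like (not the exact losses implemented in ./utils/crit.py; the function name and the temperature parameter are assumptions), a pixel-wise distillation loss matches the student's per-pixel class distribution to a frozen teacher's:

```python
import torch
import torch.nn.functional as F

def pixelwise_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between the teacher's and the student's per-pixel
    class distributions; both logit tensors are (N, C, H, W)."""
    s = F.log_softmax(student_logits / temperature, dim=1)
    t = F.softmax(teacher_logits.detach() / temperature, dim=1)  # teacher is frozen
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```

In the paper's setting the teacher would be a large single-frame model and the student the compact real-time model; because the term acts only on the outputs, it can be combined with different teacher-student pairs without changing the networks.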

Acknowledgments

The test code borrows from semseg.

If you find this code useful, please cite:

@article{liu2020efficient,
  title={Efficient Semantic Video Segmentation with Per-frame Inference},
  author={Liu, Yifan and Shen, Chunhua and Yu, Changqian and Wang, Jingdong},
  journal={ECCV},
  year={2020}
}

More Repositories

1. structure_knowledge_distillation: The official code for the paper 'Structured Knowledge Distillation for Semantic Segmentation' (CVPR 2019 Oral) and its extension to other tasks. Python, 699 stars.

2. CoupleGenerator: Generate your lover with your photo. Python, 459 stars.

3. TorchDistiller: Python, 192 stars.

4. Auto_painter: Recently, realistic image generation using deep neural networks has become a hot topic in machine learning and computer vision. Such images can be generated at the pixel level by learning from a large collection of images. Learning to generate colorful cartoon images from black-and-white sketches is not only an interesting research problem, but also a useful application in digital entertainment. In this paper, we investigate the sketch-to-image synthesis problem using conditional generative adversarial networks (cGAN). We propose a model called auto-painter which can automatically generate compatible colors for a given sketch. The Wasserstein distance is used in training the cGAN to overcome mode collapse and help the model converge better. The new model is not only capable of painting hand-drawn sketches with compatible colors, but also allows users to indicate preferred colors. Experimental results on different sketch datasets show that the auto-painter performs better than existing image-to-image methods. Python, 132 stars.

5. EMM-for-stock-prediction: We propose a model to analyze the sentiment of online stock forums and use the information to predict stock volatility in the Chinese market. By generating a sentiment dictionary, we analyze the sentimental tendency of each post as a sentiment indicator. This sentiment information is fused with market data for prediction based on Recurrent Neural Networks (RNNs). We manually labeled the sentiment of the forum posts and made the data publicly available for research. Empirical evidence shows that 8 of the 10 stocks perform better with sentiment indicators. Python, 62 stars.

6. Auto_painter_demo: The code for building a web demo for Auto_painter. JavaScript, 27 stars.

7. SSIW: The code of 'The devil is in the labels: Semantic segmentation from sentences'. Python, 13 stars.

8. inceptionV2_finetune: Fine-tuning of InceptionV2 on the CUB-200 Birds dataset in TensorFlow. Python, 9 stars.

9. stock_predict: This project predicts stock trends on the basis of online user comments and an LSTM. Python, 5 stars.

10. colorization: Reading notes. 3 stars.

11. horseSeg: Raw code. Python, 1 star.

12. dvn: DVN for semantic segmentation. 1 star.