

JejuNet

Real-Time Video Segmentation on Mobile Devices

Keywords

Video Segmentation, Mobile, TensorFlow Lite

Tutorials
  • Benchmarks: TensorFlow Lite on GPU
    • A post on Medium: Link
    • Detailed results: Link

Introduction

This project runs vision tasks such as object detection and segmentation in real time on mobile devices. Our goal is to run video segmentation at 24 fps or more on a Google Pixel 2. We use an efficient deep learning network specialized for mobile and embedded devices and exploit data redundancy between consecutive frames to reduce an otherwise unaffordable computational cost. Moreover, the network can be optimized with the 8-bit quantization provided by TensorFlow Lite (tf-lite).
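
As a concrete illustration (not the project's actual code), the sketch below runs a float segmentation .tflite model with the TensorFlow Lite Java Interpreter. The 512x512 input and 21 PASCAL VOC classes follow the benchmark table later in this README; the class name and model-loading details are placeholders.

```java
import org.tensorflow.lite.Interpreter;

import java.io.File;

/** Minimal sketch: run a float DeepLab-style .tflite model and argmax the logits. */
public class SegmentationRunner {
    // Geometry assumed from the benchmark table below: 512x512 input, 21 PASCAL VOC classes.
    private static final int SIZE = 512;
    private static final int NUM_CLASSES = 21;

    private final Interpreter interpreter;

    public SegmentationRunner(File modelFile) {
        // Any float DeepLab v3+ / MobileNet v2 .tflite export is loaded the same way.
        this.interpreter = new Interpreter(modelFile);
    }

    /** Runs one RGB frame (values normalized to [-1, 1]) and returns a per-pixel class map. */
    public int[][] segment(float[][][][] rgbFrame) {   // shape: [1][SIZE][SIZE][3]
        float[][][][] logits = new float[1][SIZE][SIZE][NUM_CLASSES];
        interpreter.run(rgbFrame, logits);

        // Argmax over the class dimension to get the label map.
        int[][] labels = new int[SIZE][SIZE];
        for (int y = 0; y < SIZE; y++) {
            for (int x = 0; x < SIZE; x++) {
                int best = 0;
                for (int c = 1; c < NUM_CLASSES; c++) {
                    if (logits[0][y][x][c] > logits[0][y][x][best]) {
                        best = c;
                    }
                }
                labels[y][x] = best;
            }
        }
        return labels;
    }
}
```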

Example: Real-Time Video Segmentation (Credit: Google AI)

Architecture

Video Segmentation

Optimization

Experiments

  • Video Segmentation on Google Pixel 2
  • Datasets
    • PASCAL VOC 2012

Plan @Deep Learning Camp Jeju 2018

July, 2018

  • DeepLabv3+ on tf-lite
  • Use data redundancy between consecutive frames (see the sketch after this list)
  • Optimization
    • Quantization
    • Reduce the number of layers, filters and input size
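
How the frame-to-frame redundancy is exploited is not spelled out in this README; the sketch below assumes a simple fixed keyframe interval, running the full network on every N-th frame and reusing the previous label map otherwise. It builds on the hypothetical SegmentationRunner shown in the introduction.

```java
/**
 * Minimal sketch of exploiting temporal redundancy: run the full network only on every
 * N-th frame and reuse the last label map in between. A fixed keyframe interval is an
 * assumption; JejuNet's actual scheduling policy is not documented here.
 */
public class KeyframeScheduler {
    private final SegmentationRunner runner;  // hypothetical wrapper from the sketch above
    private final int keyframeInterval;       // e.g. 4: full inference on every 4th frame
    private int frameCount = 0;
    private int[][] lastLabels;

    public KeyframeScheduler(SegmentationRunner runner, int keyframeInterval) {
        this.runner = runner;
        this.keyframeInterval = keyframeInterval;
    }

    public int[][] process(float[][][][] rgbFrame) {
        if (lastLabels == null || frameCount % keyframeInterval == 0) {
            lastLabels = runner.segment(rgbFrame);  // expensive path: full DeepLab v3+ pass
        }
        frameCount++;
        return lastLabels;                          // cheap path: reuse the previous mask
    }
}
```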

Results

More results are available at bit.ly/jejunet-output.

Demo

DeepLabv3+ on tf-lite

Video Segmentation on Google Pixel 2

Trade-off Between Speed(FPS) and Accuracy(mIoU)

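Accuracy in this section is reported as mean intersection-over-union (mIoU) on PASCAL VOC. For reference, the sketch below shows the standard way mIoU is computed from predicted and ground-truth label maps; it is not the project's evaluation code.

```java
/**
 * Minimal mIoU sketch: accumulate a confusion matrix over predicted/ground-truth
 * label maps, then average the per-class intersection-over-union.
 */
public class MeanIoU {
    private final long[][] confusion;   // confusion[groundTruth][prediction]
    private final int numClasses;

    public MeanIoU(int numClasses) {
        this.numClasses = numClasses;
        this.confusion = new long[numClasses][numClasses];
    }

    public void add(int[][] groundTruth, int[][] prediction) {
        for (int y = 0; y < groundTruth.length; y++) {
            for (int x = 0; x < groundTruth[y].length; x++) {
                confusion[groundTruth[y][x]][prediction[y][x]]++;
            }
        }
    }

    public double compute() {
        double sum = 0;
        int valid = 0;
        for (int c = 0; c < numClasses; c++) {
            long tp = confusion[c][c];
            long fn = 0, fp = 0;
            for (int k = 0; k < numClasses; k++) {
                if (k != c) {
                    fn += confusion[c][k];   // ground truth c, predicted k
                    fp += confusion[k][c];   // ground truth k, predicted c
                }
            }
            long union = tp + fp + fn;
            if (union > 0) {                 // skip classes absent from both maps
                sum += (double) tp / union;
                valid++;
            }
        }
        return valid > 0 ? sum / valid : 0.0;
    }
}
```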

Low Bits Quantization

| Network | Input | Stride | Quantization (w/a) | PASCAL mIoU | Runtime (.tflite) | File size (.tflite) |
|---|---|---|---|---|---|---|
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 32/32 | 79.9% | 862 ms | 8.5 MB |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 8/8 | 79.2% | 451 ms | 2.2 MB |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 6/6 | 70.7% | - | - |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 6/4 | 30.3% | - | - |

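In the table, Quantization (w/a) gives the bit widths used for weights and activations. TensorFlow Lite's quantization maps real values to integers with an affine transform, real ≈ scale × (q − zero_point); the sketch below illustrates that mapping. The scale and zero-point values are placeholders chosen by the converter per tensor, not values taken from the JejuNet models.

```java
/**
 * Illustrative affine quantization as used by low-bit schemes such as TensorFlow Lite's:
 * real ≈ scale * (q - zeroPoint). Scales and zero points are produced per tensor by the
 * converter; this class only demonstrates the arithmetic.
 */
public final class AffineQuantizer {
    private final float scale;
    private final int zeroPoint;
    private final int qMin, qMax;

    public AffineQuantizer(float scale, int zeroPoint, int bits) {
        this.scale = scale;
        this.zeroPoint = zeroPoint;
        this.qMin = 0;
        this.qMax = (1 << bits) - 1;   // e.g. 255 for 8 bits, 63 for 6 bits
    }

    public int quantize(float real) {
        int q = Math.round(real / scale) + zeroPoint;
        return Math.max(qMin, Math.min(qMax, q));   // clamp to the representable range
    }

    public float dequantize(int q) {
        return scale * (q - zeroPoint);
    }
}
```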

References

  1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
    Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam
    [link]. arXiv:1802.02611, 2018.

  2. Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
    Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
    [link]. arXiv:1801.04381, 2018.

Authors

  • Taekmin Kim

Acknowledgement

This work was partially supported by Deep Learning Camp Jeju and sponsors including Google and SK Telecom. Thank you for the generous support of TPUs and Google Pixel 2 devices, and thanks to Hyungsuk and all the mentees for the TensorFlow implementations and useful discussions.

License

© Taekmin Kim, 2018. Licensed under the MIT License.