  • Stars: 294
  • Rank: 141,303 (Top 3%)
  • Language: C++
  • License: MIT License
  • Created: over 4 years ago
  • Updated: over 2 years ago

Repository Details

Extracting optical flow and frames

Denseflow

Extract dense flow fields from a given video.

Features

  • Supports multiple optical flow algorithms, including NVIDIA hardware optical flow
  • Supports a single video (or a frame folder) or a list of videos (or a list of frame folders) as input
  • Supports multiple output types (image, HDF5); see the example after this list
  • About 40% faster, by parallelizing I/O and computation
  • Records progress when extracting a list of videos, and resumes by simply running the same command again (idempotent)
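
For example, to save flow as HDF5 instead of images (this assumes denseflow was built with HDF5 support, presumably -DUSE_HDF5=yes at build time; see the --saveType/--st flag in the help below):

denseflow videolist.txt -b=20 -a=tvl1 -s=1 --st=h5 -v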

Install

Dependencies:

  • CUDA (driver version > 400)
  • OpenCV (with CUDA support): opencv3 | opencv4
  • Boost
  • HDF5 (Optional)
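
Before building, it may help to confirm the driver and OpenCV requirements. The commands below are a sketch and assume the NVIDIA driver and pkg-config are installed; the OpenCV package name may differ on your system:

nvidia-smi                             # reports the installed driver version (should be > 400)
pkg-config --modversion opencv4        # reports the OpenCV version, if OpenCV registered a pkg-config file

Then clone and build: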
git clone https://github.com/open-mmlab/denseflow.git
cd denseflow && mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=$HOME/app -DUSE_HDF5=no -DUSE_NVFLOW=no ..
make -j
make install
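
With the example prefix above, the binary and libraries land under $HOME/app (assuming the default CMake install layout), so you may need to extend your environment:

export PATH=$HOME/app/bin:$PATH
export LD_LIBRARY_PATH=$HOME/app/lib:$LD_LIBRARY_PATH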

If you have trouble setting up the build environment, the scripts in INSTALL might be helpful.

Usage

Extract optical flow of a single video

denseflow test.avi -b=20 -a=tvl1 -s=1 -v
  • test.avi: input video
  • -b=20: bound set to 20
  • -a=tvl1: algorithm is tvl1
  • -s=1: step is 1, i.e. flow between adjacent frames
  • -v: verbose
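
The flags documented in the help below can be combined with the same command; for example, to resize frames and write results under a dedicated output directory (the resize values here are arbitrary):

denseflow test.avi -b=20 -a=tvl1 -s=1 -o=flows --nw=340 --nh=256 -v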

Extract optical flow of a list of videos

denseflow videolist.txt -b=20 -a=tvl1 -s=1 -v
  • videolist.txt: a list of video paths (see the example below)
  • -b=20: bound set to 20
  • -a=tvl1: algorithm is tvl1
  • -s=1: step is 1, i.e. flow between adjacent frames
  • -v: verbose
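
A minimal videolist.txt simply lists one input path per line; the paths below are illustrative:

/data/videos/clip_0001.mp4
/data/videos/clip_0002.mp4
/data/videos/clip_0003.mp4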

Extract optical flow of a list of videos, each video is under a class folder

denseflow videolist.txt -b=20 -a=tvl1 -s=1 -cf -v
  • videolist.txt: a list of video paths
  • -b=20: bound set to 20
  • -a=tvl1: algorithm is tvl1
  • -s=1: step is 1, i.e. flow between adjacent frames
  • -cf: treat the parent folder of each video as its class name (see the example below)
  • -v: verbose
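
For example, with an (illustrative) videolist.txt laid out as below, -cf makes "abseiling" and "archery" the class names, and outputs follow the outputDir/class/video/... pattern shown in the help:

/data/videos/abseiling/clip_0001.mp4
/data/videos/archery/clip_0002.mp4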

Extract optical flow of a folder of frame images

denseflow test -b=20 -a=tvl1 -s=1 -if -v
  • test: folder of frame images
  • -b=20: bound set to 20
  • -a=tvl1: algorithm is tvl1
  • -s=1: step is 1, i.e. flow between adjacent frames
  • -if: indicates that inputs are frames
  • -v: verbose
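
Frame folders can also be batched: per the feature list above, a text file listing one frame-folder path per line works the same way (folderlist.txt is a hypothetical name):

denseflow folderlist.txt -b=20 -a=tvl1 -s=1 -if -v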

Extract frames of a single video

denseflow test.avi -s=0 -v
  • test.avi: input video
  • -s=0: step 0 is reserved for extracting frames
  • -v: verbose
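
Frame extraction can be combined with the resize flags from the help below; for example, to rescale the short side to 256 pixels (an arbitrary value) and write frames to a dedicated directory:

denseflow test.avi -s=0 --ns=256 -o=frames -v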

Extract frames of a list of videos

denseflow videolist.txt -s=0 -v
  • videolist.txt: a list of video paths
  • -s=0: step 0 is reserved for extracting frames
  • -v: verbose

Documentation

$ denseflow -h
GPU optical flow extraction.
Usage: denseflow [params] input

        -a, --algorithm (value:tvl1)
                optical flow algorithm (nv/tvl1/farn/brox)
        -b, --bound (value:32)
                maximum of optical flow
        --cf, --classFolder
                outputDir/class/video/flow.jpg
        -f, --force
                regardless of the marked .done file
        -h, --help (value:true)
                print help message
        --if, --inputFrames
                inputs are frames
        --newHeight, --nh (value:0)
                new height
        --newShort, --ns (value:0)
                short side length
        --newWidth, --nw (value:0)
                new width
        -o, --outputDir (value:.)
                root dir of output
        -s, --step (value:0)
                right - left (0 for img, non-0 for flow)
        --saveType, --st (value:jpg)
                save format type (png/h5/jpg)
        -v, --verbose
                verbose

        input
                filename of video or folder of frames or a list.txt of those
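
Because progress is tracked with a .done marker and already-finished inputs are skipped on re-runs (see the resume feature above), pass -f/--force to redo extraction regardless of the marker:

denseflow videolist.txt -b=20 -a=tvl1 -s=1 -f -v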

Citation

If you use this tool in your research, please cite this project.

@misc{denseflow,
  author =       {Wang, Shiguang* and Li, Zhizhong* and Zhao, Yue and Xiong, Yuanjun and Wang, Limin and Lin, Dahua},
  title =        {{denseflow}},
  howpublished = {\url{https://github.com/open-mmlab/denseflow}},
  year =         {2020}
}

Acknowledgement

Rewritten based on yuanjun's fork of dense_flow.

More Repositories

1. mmdetection: OpenMMLab Detection Toolbox and Benchmark (Python, 29,487 stars)
2. mmsegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark (Python, 7,992 stars)
3. mmagic: OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox. Unlock the magic 🪄: Generative-AI (AIGC), easy-to-use APIs, awesome model zoo, diffusion models, for text-to-image generation, image/video restoration/enhancement, etc. (Jupyter Notebook, 6,909 stars)
4. mmcv: OpenMMLab Computer Vision Foundation (Python, 5,879 stars)
5. mmpose: OpenMMLab Pose Estimation Toolbox and Benchmark (Python, 5,625 stars)
6. Amphion: Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development. (Python, 5,482 stars)
7. mmdetection3d: OpenMMLab's next-generation platform for general 3D object detection (Python, 5,216 stars)
8. OpenPCDet: OpenPCDet Toolbox for LiDAR-based 3D Object Detection (Python, 4,658 stars)
9. mmocr: OpenMMLab Text Detection, Recognition and Understanding Toolbox (Python, 4,270 stars)
10. mmaction2: OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark (Python, 4,236 stars)
11. mmtracking: OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), and Video Instance Segmentation (VIS) with a unified framework. (Python, 3,538 stars)
12. mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark (Python, 3,383 stars)
13. mmselfsup: OpenMMLab Self-Supervised Learning Toolbox and Benchmark (Python, 3,182 stars)
14. mmyolo: OpenMMLab YOLO series toolbox and benchmark. Implements RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc. (Python, 2,967 stars)
15. mmskeleton: An OpenMMLab toolbox for human pose estimation, skeleton-based action recognition, and action synthesis (Python, 2,928 stars)
16. mmdeploy: OpenMMLab Model Deployment Framework (Python, 2,744 stars)
17. mmgeneration: A powerful toolkit for generative models, based on PyTorch and MMCV (Python, 1,881 stars)
18. mmaction: An open-source toolbox for action understanding based on PyTorch (Python, 1,853 stars)
19. mmrotate: OpenMMLab Rotated Object Detection Toolbox and Benchmark (Python, 1,843 stars)
20. mmrazor: OpenMMLab Model Compression Toolbox and Benchmark (Python, 1,470 stars)
21. Multimodal-GPT: Multimodal-GPT (Python, 1,461 stars)
22. mmfashion: Open-source toolbox for visual fashion analysis based on PyTorch (Python, 1,245 stars)
23. mmhuman3d: OpenMMLab 3D Human Parametric Model Toolbox and Benchmark (Python, 1,232 stars)
24. mmengine: OpenMMLab Foundational Library for Training Deep Learning Models (Python, 1,161 stars)
25. playground: A central hub for gathering and showcasing amazing projects that extend OpenMMLab with SAM and other exciting features (Python, 1,117 stars)
26. OpenMMLabCourse: OpenMMLab course index and materials (Jupyter Notebook, 1,000 stars)
27. mmflow: OpenMMLab optical flow toolbox and benchmark (Python, 942 stars)
28. PIA: [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with DreamBooth, achieving stunning videos. (Python, 867 stars)
29. mmfewshot: OpenMMLab FewShot Learning Toolbox and Benchmark (Python, 697 stars)
30. PowerPaint: [ECCV 2024] A versatile image inpainting model that supports text-guided object inpainting, object removal, image outpainting, and shape-guided object inpainting with only a single model (Python, 526 stars)
31. awesome-vit (400 stars)
32. OpenUnReID: PyTorch open-source toolbox for unsupervised or domain-adaptive object re-ID (Python, 393 stars)
33. labelbee-client: Out-of-the-box annotation toolbox (JavaScript, 380 stars)
34. FoleyCrafter: Bring silent videos to life with lifelike and synchronized sounds (Python, 379 stars)
35. mim: MIM Installs OpenMMLab Packages (Python, 346 stars)
36. mmeval: A unified evaluation library for multiple machine learning libraries (Python, 254 stars)
37. MMGEN-FaceStylor (Python, 249 stars)
38. labelbee: LabelBee is an annotation library (TypeScript, 244 stars)
39. Live2Diff: A pipeline that processes live video streams with a uni-directional video diffusion model (Python, 150 stars)
40. OpenMMLabCamp (Jupyter Notebook, 93 stars)
41. polynet: The GitHub repo for PolyNet (77 stars)
42. CLUE: C++ Lightweight Utility Extensions (C++, 70 stars)
43. AnyControl: [ECCV 2024] A multi-control image synthesis model that supports any combination of user-provided control signals (Python, 66 stars)
44. StyleShot: A snapshot on any style; transfers any style to any content and generates high-quality, personalized stylized images without per-image fine-tuning (Python, 59 stars)
45. mim-example (Python, 58 stars)
46. mmengine-template (Python, 49 stars)
47. ecosystem (37 stars)
48. mmstyles: LaTeX style file to facilitate writing of technical papers (TeX, 37 stars)
49. mmpose-webcam-demo (Python, 25 stars)
50. pre-commit-hooks (Python, 17 stars)
51. mdformat-openmmlab (Python, 6 stars)
52. .github (4 stars)