[CVPR 2022 Oral & TPAMI 2023] Learning Optical Flow and Scene Flow with Bidirectional Camera-LiDAR Fusion

CamLiFlow & CamLiRAFT

This is the official PyTorch implementation for our two papers:

  • CamLiFlow: Bidirectional Camera-LiDAR Fusion for Joint Optical Flow and Scene Flow Estimation (CVPR 2022 Oral), https://arxiv.org/abs/2111.10502
  • Learning Optical Flow and Scene Flow with Bidirectional Camera-LiDAR Fusion (TPAMI 2023), https://arxiv.org/abs/2303.12017

A Chinese-language explanation is available at https://zhuanlan.zhihu.com/p/616384758.

Changes to the Conference Paper

In this extended version, we instantiate a new member of the bidirectional fusion pipeline, CamLiRAFT, which is built on recurrent all-pairs field transforms (RAFT). CamLiRAFT obtains significant performance improvements over the original PWC-based CamLiFlow and sets a new state-of-the-art record on various datasets; a minimal sketch of the fusion idea follows the comparison list below.

  • Comparison with stereo scene flow methods: On FlyingThings3D, CamLiRAFT achieves 1.73 EPE2D and 0.049 EPE3D, 21% and 20% lower than CamLiFlow, respectively. On KITTI, even the non-rigid CamLiRAFT performs on par with the previous state-of-the-art method RigidMask (SF-all: 4.97% vs. 4.89%). By refining the background scene flow with rigid priors, CamLiRAFT further reduces the error to 4.26%, ranking first on the leaderboard.

  • Comparison with LiDAR-only scene flow methods: The LiDAR-only variant of our method, dubbed CamLiRAFT-L, also outperforms all previous LiDAR-only scene flow methods in terms of both accuracy and speed (see Tab. 5 in the paper). Thus, CamLiRAFT-L can also serve as a strong baseline for LiDAR-only scene flow estimation.

  • Comparison on MPI Sintel: Without finetuning on Sintel, CamLiRAFT achieves 2.38 AEPE on the final pass of the Sintel training set, reducing the error by 12% and 18% over RAFT and RAFT-3D respectively. This demonstrates that our method has good generalization performance and can handle non-rigid motion.

  • Training schedule: The original CamLiFlow requires a complicated training schedule of Things (L2 loss) -> Things (Robust loss) -> Driving -> KITTI and takes about 10 days to train. CamLiRAFT simplifies the schedule to Things -> KITTI, and training takes only about 3 days (tested on 4x RTX 3090 GPUs).
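
For intuition, here is a minimal PyTorch sketch of the bidirectional fusion idea: image features are bilinearly sampled at the 2D projections of the LiDAR points (2D -> 3D), and point features are scattered back onto the image plane (3D -> 2D). All module and tensor names below are illustrative assumptions, not the repository's actual API; the real fusion layers live under models/ and differ in detail.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiFusionSketch(nn.Module):
    # Illustrative bidirectional camera-LiDAR fusion block (not the actual module).
    def __init__(self, img_ch, pts_ch):
        super().__init__()
        self.to_img = nn.Conv2d(img_ch + pts_ch, img_ch, kernel_size=1)
        self.to_pts = nn.Linear(pts_ch + img_ch, pts_ch)

    def forward(self, img_feat, pts_feat, uv):
        # img_feat: [B, Ci, H, W]; pts_feat: [B, N, Cp]
        # uv: [B, N, 2] float pixel coordinates of each LiDAR point (from calibration)
        B, Ci, H, W = img_feat.shape
        N, Cp = pts_feat.shape[1:]

        # 2D -> 3D: bilinearly sample image features at each point's projection
        grid = uv.clone()
        grid[..., 0] = grid[..., 0] / (W - 1) * 2 - 1  # x to [-1, 1]
        grid[..., 1] = grid[..., 1] / (H - 1) * 2 - 1  # y to [-1, 1]
        sampled = F.grid_sample(img_feat, grid.unsqueeze(2), align_corners=True)
        sampled = sampled.squeeze(-1).transpose(1, 2)              # [B, N, Ci]
        pts_out = self.to_pts(torch.cat([pts_feat, sampled], dim=-1))

        # 3D -> 2D: scatter point features onto their nearest pixels
        dense = img_feat.new_zeros(B, Cp, H * W)
        idx = (uv[..., 1].round().clamp(0, H - 1) * W
               + uv[..., 0].round().clamp(0, W - 1)).long()        # [B, N]
        dense.scatter_add_(2, idx.unsqueeze(1).expand(B, Cp, N),
                           pts_feat.transpose(1, 2))
        img_out = self.to_img(torch.cat([img_feat, dense.view(B, Cp, H, W)], dim=1))
        return img_out, pts_out

Bidirectional connections of this flavor, applied at multiple stages of the network, are what the papers refer to as bidirectional camera-LiDAR fusion.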

News

  • 2023-09-20: We provide a demo for CamLiRAFT; see demo.py for more details.
  • 2023-03-22: We release CamLiRAFT, an extended version of CamLiFlow: https://arxiv.org/abs/2303.12017.
  • 2022-03-29: Our paper is selected for an oral presentation at CVPR 2022.
  • 2022-03-07: We release the code and the pretrained weights.
  • 2022-03-03: Our paper is accepted by CVPR 2022.
  • 2021-11-20: Our paper is available at https://arxiv.org/abs/2111.10502.
  • 2021-11-04: Our method ranked first on the leaderboard of KITTI Scene Flow.

Pretrained Weights

Model     | Training set                  | Weights                           | Comments
----------|-------------------------------|-----------------------------------|--------------------------------
CamLiRAFT | Things (80e)                  | camliraft_things80e.pt            | Best generalization performance
CamLiRAFT | Things (150e)                 | camliraft_things150e.pt           | Best performance on Things
CamLiRAFT | Things (150e) -> KITTI (800e) | camliraft_things150e_kitti800e.pt | Best performance on KITTI

Precomputed Results

Here we provide precomputed results for submission to the online KITTI Scene Flow benchmark. * denotes results where the background scene flow is refined with rigid priors.

Model       | D1-all | D2-all | Fl-all | SF-all | Link
------------|--------|--------|--------|--------|------------------------
CamLiFlow   | 1.81%  | 3.19%  | 4.05%  | 5.62%  | camliflow-wo-refine.zip
CamLiFlow * | 1.81%  | 2.95%  | 3.10%  | 4.43%  | camliflow.zip
CamLiRAFT   | 1.81%  | 3.02%  | 3.43%  | 4.97%  | camliraft-wo-refine.zip
CamLiRAFT * | 1.81%  | 2.94%  | 2.96%  | 4.26%  | camliraft.zip

Environment

Create a PyTorch environment using conda:

conda create -n camliraft python=3.7
conda activate camliraft
conda install pytorch==1.10.2 torchvision==0.11.3 cudatoolkit=11.3 -c pytorch

Install mmcv and mmdet:

pip install openmim
mim install mmcv-full==1.4.0
mim install mmdet==2.14.0

Install other dependencies:

pip install opencv-python open3d tensorboard hydra-core==1.1.0

Compile CUDA extensions for faster training and evaluation:

cd models/csrc
python setup.py build_ext --inplace

Download the ResNet-50 pretrained on ImageNet-1k:

wget https://download.pytorch.org/models/resnet50-11ad3fa6.pth
mkdir pretrain
mv resnet50-11ad3fa6.pth pretrain/

NG-RANSAC is also required if you want to evaluate on KITTI. Please follow https://github.com/vislearn/ngransac to install the library.

Demo

First, download one of the pretrained weights listed above (e.g. camliraft_things150e.pt). Then, run the following script to launch a demo that estimates optical flow and scene flow from a pair of images and point clouds:

python demo.py --model camliraft --weights /path/to/camliraft/checkpoint.pt

Note that CamLiRAFT is not very robust to distant objects, since the network has only been trained on data with depths below 35 m. If you are getting bad results on your own data, try scaling the point cloud so that depths fall in the range of about 5-35 m, as sketched below.
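
If you need to rescale your own data, a simple approach (a sketch under the assumption of a pinhole camera model, not a utility shipped with this repository) is to scale the whole cloud uniformly: this leaves the 2D projections unchanged, and the predicted scene flow can be mapped back by dividing by the same factor.

import numpy as np

def rescale_cloud(points, target_max=35.0):
    # points: [N, 3] array in camera coordinates, with z as depth.
    # Uniform scaling preserves the 2D projections under a pinhole model.
    s = target_max / points[:, 2].max()
    return points * s, s

# Illustrative usage: run the model on the scaled cloud, then undo the scale.
# points_scaled, s = rescale_cloud(points)
# scene_flow = predict(images, points_scaled) / s   # 'predict' is hypothetical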

Evaluation

FlyingThings3D

First, download and preprocess the dataset (see preprocess_flyingthings3d_subset.py for detailed instructions):

python preprocess_flyingthings3d_subset.py --input_dir /mnt/data/flyingthings3d_subset

Then, download the pretrained weights camliraft_things150e.pt and save it to checkpoints/camliraft_things150e.pt.

Now you can reproduce the results in Table 2 (see the extended paper):

python eval_things.py testset=flyingthings3d_subset model=camliraft ckpt.path=checkpoints/camliraft_things150e.pt

KITTI

First, download the KITTI Scene Flow dataset together with the additional precomputed inputs shown in the directory tree below (e.g. disp_ganet and semantic_ddr).

Unzip them and organize the directory as follows:

datasets/kitti_scene_flow
β”œβ”€β”€ testing
β”‚   β”œβ”€β”€ calib_cam_to_cam
β”‚   β”œβ”€β”€ calib_imu_to_velo
β”‚   β”œβ”€β”€ calib_velo_to_cam
β”‚   β”œβ”€β”€ disp_ganet
β”‚   β”œβ”€β”€ flow_occ
β”‚   β”œβ”€β”€ image_2
β”‚   β”œβ”€β”€ image_3
β”‚   └── semantic_ddr
└── training
    β”œβ”€β”€ calib_cam_to_cam
    β”œβ”€β”€ calib_imu_to_velo
    β”œβ”€β”€ calib_velo_to_cam
    β”œβ”€β”€ disp_ganet
    β”œβ”€β”€ disp_occ_0
    β”œβ”€β”€ disp_occ_1
    β”œβ”€β”€ flow_occ
    β”œβ”€β”€ image_2
    β”œβ”€β”€ image_3
    β”œβ”€β”€ obj_map
    └── semantic_ddr

Then, download the pretrained weights camliraft_things150e_kitti800e.pt and save it to checkpoints/camliraft_things150e_kitti800e.pt.

To reproduce the results without leveraging rigid-body assumptions (SF-all: 4.97%):

python kitti_submission.py testset=kitti model=camliraft ckpt.path=checkpoints/camliraft_things150e_kitti800e.pt

To reproduce the results with rigid background refinement (SF-all: 4.26%), you need to further refine the background scene flow:

python refine_background.py

Results are saved to submission/testing. The initial non-rigid estimates are indicated by the _initial suffix.
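
Conceptually, the refinement fits a single rigid transform to the background points and replaces their non-rigid flow with the rigid prediction. Below is a hedged NumPy sketch of that idea; the function names are illustrative, and the actual logic (including robust estimation) lives in refine_background.py.

import numpy as np

def kabsch(src, dst):
    # Least-squares rigid transform (R, t) mapping src -> dst; both [N, 3].
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def refine_background_flow(points, flow, bg_mask):
    # points: [N, 3]; flow: [N, 3] non-rigid estimate; bg_mask: [N] bool.
    R, t = kabsch(points[bg_mask], points[bg_mask] + flow[bg_mask])
    refined = flow.copy()
    refined[bg_mask] = points[bg_mask] @ R.T + t - points[bg_mask]
    return refined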

Sintel

First, download the flow dataset from http://sintel.is.tue.mpg.de and the depth dataset from https://sintel-depth.csail.mit.edu/landing.

Unzip them and organize the directory as follows:

datasets/sintel
β”œβ”€β”€ depth
β”‚   β”œβ”€β”€ README_depth.txt
β”‚   β”œβ”€β”€ sdk
β”‚   └── training
└── flow
    β”œβ”€β”€ bundler
    β”œβ”€β”€ flow_code
    β”œβ”€β”€ README.txt
    β”œβ”€β”€ test
    └── training

Then, download the pretrained weights camliraft_things80e.pt and save it to checkpoints/camliraft_things80e.pt.

Now you can reproduce the results in Table 4 (see the extended paper):

python eval_sintel.py testset=sintel model=camliraft ckpt.path=checkpoints/camliraft_things80e.pt

Training

FlyingThings3D

You need to preprocess the FlyingThings3D dataset before training (see preprocess_flyingthings3d_subset.py for detailed instructions).

Train CamLiRAFT on FlyingThings3D (150 epochs):

python train.py trainset=flyingthings3d_subset valset=flyingthings3d_subset model=camliraft

The entire training process takes about 3 days on 4x RTX 3090 GPUs.

KITTI

Finetune the model on KITTI using the weights trained on FlyingThings3D:

python train.py trainset=kitti valset=kitti model=camliraft ckpt.path=checkpoints/camliraft_things150e.pt

The entire training process takes about half a day on 4x RTX 3090 GPUs. We use the last checkpoint (epoch 800) to generate the submission.

Citation

If you find our papers useful in your research, please cite:

@article{liu2023learning,
  title   = {Learning Optical Flow and Scene Flow with Bidirectional Camera-LiDAR Fusion},
  author  = {Haisong Liu and Tao Lu and Yihui Xu and Jia Liu and Limin Wang},
  journal = {arXiv preprint arXiv:2303.12017},
  year    = {2023}
}

@inproceedings{liu2022camliflow,
  title     = {CamLiFlow: Bidirectional Camera-LiDAR Fusion for Joint Optical Flow and Scene Flow Estimation},
  author    = {Liu, Haisong and Lu, Tao and Xu, Yihui and Liu, Jia and Li, Wenjie and Chen, Lijun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {5791--5801},
  year      = {2022}
}
