Learning Guided Convolutional Network for Depth Completion.
Introduction
This is the PyTorch implementation of our paper.
Dependencies
PyTorch 1.4
PyTorch-Encoding v1.4.0
Setup
Compile the C++ and CUDA code:
cd exts
python setup.py install
Dataset
Please download the KITTI depth completion dataset. The expected structure of the data directory is:
datas
└── kitti
    ├── data_depth_annotated
    │   ├── train
    │   └── val
    ├── data_depth_velodyne
    │   ├── train
    │   └── val
    ├── raw
    │   ├── 2011_09_26
    │   ├── 2011_09_28
    │   ├── 2011_09_29
    │   ├── 2011_09_30
    │   └── 2011_10_03
    ├── test_depth_completion_anonymous
    │   ├── image
    │   ├── intrinsics
    │   └── velodyne_raw
    └── val_selection_cropped
        ├── groundtruth_depth
        ├── image
        ├── intrinsics
        └── velodyne_raw
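To catch layout mistakes early, a small check like the following can be run after downloading. This helper is not part of the repository; the directory list simply mirrors the tree above.

```python
import os

# Hypothetical helper (not part of the repository): verifies that the
# KITTI directory layout shown above exists under a given root.
EXPECTED_DIRS = [
    "kitti/data_depth_annotated/train",
    "kitti/data_depth_annotated/val",
    "kitti/data_depth_velodyne/train",
    "kitti/data_depth_velodyne/val",
    "kitti/raw",
    "kitti/test_depth_completion_anonymous/image",
    "kitti/test_depth_completion_anonymous/intrinsics",
    "kitti/test_depth_completion_anonymous/velodyne_raw",
    "kitti/val_selection_cropped/groundtruth_depth",
    "kitti/val_selection_cropped/image",
    "kitti/val_selection_cropped/intrinsics",
    "kitti/val_selection_cropped/velodyne_raw",
]

def missing_dirs(root):
    """Return the expected sub-directories that do not exist under root."""
    return [d for d in EXPECTED_DIRS if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    missing = missing_dirs("datas")
    if missing:
        print("Missing directories:")
        for d in missing:
            print("  " + d)
    else:
        print("Dataset layout looks complete.")
```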
Configs
Configs for the different settings:
- GN.yaml
- GNS.yaml
Compared to GN, GNS uses fewer parameters to generate the guided kernels, but achieves slightly better results.
Trained Models
You can download the trained model directly and place it in the checkpoints directory:
Train
You can also train the model yourself:
python train.py
Pay attention to the settings in the config file (e.g. the GPU id).
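As a sketch of how such a config might be consumed before training, the snippet below parses flat "key: value" lines and restricts the visible GPUs. This is an illustration only: the `gpu_id` key name is an assumption about the config schema, and the real configs are YAML files that would normally be loaded with a full parser such as PyYAML's `yaml.safe_load`.

```python
import os

def load_flat_config(path):
    """Minimal stand-in for a YAML loader: parses flat 'key: value'
    lines, ignoring comments; all values are kept as strings."""
    cfg = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # strip comments and blanks
            if ":" in line:
                key, value = line.split(":", 1)
                cfg[key.strip()] = value.strip()
    return cfg

def select_gpus(cfg):
    """Restrict visible CUDA devices before any CUDA context is created.
    The 'gpu_id' key is an assumed name, not taken from the repository."""
    if "gpu_id" in cfg:
        os.environ["CUDA_VISIBLE_DEVICES"] = cfg["gpu_id"]
```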
Test
With a trained model, you can run inference and save the predicted depth images:
python test.py
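The KITTI depth completion benchmark ranks methods primarily by RMSE, computed only over pixels with valid ground truth. A minimal version of that metric, independent of the repository code, looks like:

```python
import math

def rmse(pred, gt):
    """Root-mean-square error over pixels with valid ground truth.
    pred, gt: flat sequences of depth values; gt == 0 marks invalid pixels,
    which are excluded from the average (as in the KITTI benchmark)."""
    errs = [(p - g) ** 2 for p, g in zip(pred, gt) if g > 0]
    if not errs:
        raise ValueError("no valid ground-truth pixels")
    return math.sqrt(sum(errs) / len(errs))
```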
Citation
If you find this work useful in your research, please consider citing:
@article{guidenet,
title={Learning guided convolutional network for depth completion},
author={Tang, Jie and Tian, Fei-Peng and Feng, Wei and Li, Jian and Tan, Ping},
journal={IEEE Transactions on Image Processing},
volume={30},
pages={1116--1129},
year={2020},
publisher={IEEE}
}