# ODMD Dataset
ODMD is the first dataset for learning Object Depth via Motion and Detection. ODMD training data are configurable and extensible, with each training example consisting of a series of object detection bounding boxes, camera movement distances, and ground truth object depth. As a benchmark evaluation, we provide four ODMD validation and test sets with 21,600 examples in multiple domains, and we also convert 15,650 examples from the ODMS benchmark for detection. In our paper, we use a single ODMD-trained network with object detection or segmentation to achieve state-of-the-art results on existing driving and robotics benchmarks and estimate object depth from a camera phone, demonstrating how ODMD is a viable tool for monocular depth estimation in a variety of mobile applications.
Contact: Brent Griffin (griffb at umich dot edu)
*Depth results using a camera phone.*
## Using ODMD

Run `./demo/demo_datagen.py` to generate random ODMD data to train or test your model. Example data-generation and camera configurations are provided in the `./config/` folder. `demo_datagen.py` has the option to save data into a static dataset for repeated use. [native Python]
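For context, the sketch below illustrates the kind of example ODMD describes: bounding boxes from a pinhole camera moving straight toward an object, paired with camera-movement distances and ground-truth depth. The focal length, image size, and sampling ranges here are hypothetical placeholders, not the repo's configuration or API.

```python
import numpy as np

def generate_example(n_obs=10, seed=None):
    """Hypothetical ODMD-style example (illustration only, not the repo's API):
    project a fixed-size object through a pinhole camera moving toward it."""
    rng = np.random.default_rng(seed)
    f = 500.0                                      # assumed focal length (pixels)
    obj_w, obj_h = rng.uniform(0.1, 0.5, size=2)   # object extent (meters)
    z_final = rng.uniform(0.5, 3.0)                # ground-truth depth at final pose
    # Camera-movement distances from each pose to the final pose (final pose = 0).
    d = np.append(np.sort(rng.uniform(0.0, 0.5, n_obs - 1))[::-1], 0.0)
    z = z_final + d                                # object depth at every pose
    w_px, h_px = f * obj_w / z, f * obj_h / z      # projected box size (pixels)
    cx = cy = 320.0                                # boxes centered for simplicity
    boxes = np.stack([cx - w_px / 2, cy - h_px / 2,
                      cx + w_px / 2, cy + h_px / 2], axis=1)  # (x1, y1, x2, y2)
    return boxes, d, z_final

boxes, movements, depth = generate_example(seed=0)
print(boxes.shape, movements.shape, round(depth, 3))  # (10, 4) (10,) ...
```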
Run `./demo/demo_dataset_eval.py` to evaluate your model on the ODMD validation and test sets. `demo_dataset_eval.py` has an example evaluation for the BoxLS baseline and instructions for using our detection-based version of ODMS. Results are saved in the `./results/` folder. [native Python]
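As a reference point for the BoxLS baseline mentioned above, here is a minimal least-squares sketch of the same idea, assuming pinhole projection and camera motion along the optical axis with the final observation at distance zero. This is our own illustration, not the repo's implementation.

```python
import numpy as np

def box_ls_depth(boxes, movements):
    """BoxLS-style closed-form depth (our sketch, not the repo's code).
    Pinhole model: h_i * (z + d_i) = c for all observations, where h_i is box
    height, d_i is the distance from pose i to the final pose, and z is the
    depth at the final pose. Solve the linear system for x = [z, c]."""
    h = boxes[:, 3] - boxes[:, 1]                  # box heights (pixels)
    d = np.asarray(movements, dtype=float)
    A = np.stack([h, -np.ones_like(h)], axis=1)    # h_i * z - c = -h_i * d_i
    b = -h * d
    (z, _c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Noise-free sanity check: h_i = 1000 / (2.0 + d_i) pixels for true depth 2.0 m.
d = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
h = 1000.0 / (2.0 + d)
boxes = np.stack([np.zeros_like(h), np.zeros_like(h), np.ones_like(h), h], axis=1)
print(box_ls_depth(boxes, d))  # ~2.0
```

On noise-free data this recovers depth exactly, which is consistent with a 0.00 "Normal" error for BoxLS in the benchmark table below and with its sensitivity to perturbed inputs.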
## Benchmark

Depth-estimation error on the ODMD test sets (lower is better):

| Method | Normal | Perturb Camera | Perturb Detect | Robot | All |
|---|---|---|---|---|---|
| DBox | 1.73 | 2.45 | 2.54 | 11.17 | 4.47 |
| DBoxAbs | 1.11 | 2.05 | 1.75 | 13.29 | 4.55 |
| BoxLS | 0.00 | 4.47 | 21.60 | 21.23 | 11.83 |
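Our reading of the numbers above (an assumption, following the convention of the companion ODMS benchmark) is mean absolute relative depth error expressed in percent:

```python
import numpy as np

def mean_percent_error(z_pred, z_true):
    """Assumed metric: mean absolute relative depth error, in percent."""
    z_pred, z_true = np.asarray(z_pred), np.asarray(z_true)
    return 100.0 * np.mean(np.abs(z_pred - z_true) / z_true)

print(mean_percent_error([1.9, 2.1], [2.0, 2.0]))  # 5.0
```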
Is your technique missing even though it's published and its code is public? Let us know and we'll add it.
## Using the DBox Method

Run `./demo/demo_dataset_DBox_train.py` to train your own DBox model using ODMD. Run `./demo/demo_dataset_DBox_eval.py` after training to evaluate your DBox model. Example training and DBox-model configurations are provided in the `./config/` folder. Models are saved in the `./results/model/` folder. `demo_dataset_DBox_eval.py` also has instructions for using our pretrained DBox model. [native Python, has PyTorch dependency]
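For readers who want a starting point before opening the configs, here is an illustrative PyTorch stand-in for a DBox-style network: a regressor from a sequence of bounding boxes and camera movements to object depth. The flat MLP design, layer sizes, and training step are our assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DBoxLike(nn.Module):
    """Illustrative DBox-style regressor (our sketch, not the paper's model):
    maps a sequence of bounding boxes and camera movements to object depth."""
    def __init__(self, n_obs=10, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs * 4 + n_obs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, boxes, movements):
        # boxes: (B, n_obs, 4) corners; movements: (B, n_obs) camera distances.
        x = torch.cat([boxes.flatten(1), movements], dim=1)
        return self.net(x).squeeze(1)              # (B,) predicted depths

# One hypothetical training step on random stand-in data.
model = DBoxLike()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
boxes, movements = torch.rand(8, 10, 4), torch.rand(8, 10)
loss = nn.functional.l1_loss(model(boxes, movements), torch.rand(8))
loss.backward()
optim.step()
```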
## Publication
Please cite our paper if you find it useful for your research.
```bibtex
@inproceedings{GrCoCVPR21,
  author = {Griffin, Brent A. and Corso, Jason J.},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  title = {Depth from Camera Motion and Object Detection},
  year = {2021}
}
```
CVPR 2021 presentation video: https://youtu.be/38Qqh6yYdVY
## Use
This code is available for non-commercial research purposes only.