PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes
Created by Yu Xiang at the RSE-Lab, University of Washington, and NVIDIA Research.
Introduction
We introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. arXiv, Project
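As a rough illustration of the outputs described above (a sketch, not PoseCNN's actual code), the 6D pose can be assembled from a predicted object center (cx, cy) in pixels, a predicted distance Tz from the camera, and a regressed unit quaternion, given the camera intrinsics K:

```python
# Sketch: turn PoseCNN-style outputs into a 6D pose.
# Assumes the network predicts the object center (cx, cy) in pixels, its
# distance Tz from the camera, and a quaternion q = (w, x, y, z); K is the
# 3x3 camera intrinsic matrix.
import numpy as np

def pose_from_predictions(cx, cy, Tz, q, K):
    # 3D translation: back-project the predicted image center at depth Tz.
    center = np.array([cx, cy, 1.0])
    T = Tz * np.linalg.inv(K).dot(center)

    # 3D rotation: convert the regressed quaternion to a rotation matrix.
    q = np.asarray(q, dtype=float)
    w, x, y, z = q / np.linalg.norm(q)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    return R, T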
License
PoseCNN is released under the MIT License (refer to the LICENSE file for details).
Citation
If you find PoseCNN useful in your research, please consider citing:
@inproceedings{xiang2018posecnn,
Author = {Xiang, Yu and Schmidt, Tanner and Narayanan, Venkatraman and Fox, Dieter},
Title = {PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes},
Booktitle = {Robotics: Science and Systems (RSS)},
Year = {2018}
}
Installation
- Install TensorFlow. I usually compile TensorFlow from source.
- Compile the new layers under $ROOT/lib that we introduce in PoseCNN.
```
cd $ROOT/lib
sh make.sh
```
- Download the VGG16 weights from here (528M). Put the weight file vgg16.npy into $ROOT/data/imagenet_models (a quick sanity check for the weights file is sketched after this list).
- Compile lib/synthesize with cmake (optional). This package contains a few useful tools, such as generating synthetic images for training and running ICP.
Install dependencies:
We use the Boost.Python library to link TensorFlow with the C++ code. Make sure your Boost installation includes Boost.Python. The tested Boost version is 1.66.0.
Change the hard-coded paths in CMakeLists.txt.
The Pangolin commit I use: c2a6ef524401945b493f14f8b5b8aa76cc7d71a9
```
cd $ROOT/lib/synthesize
mkdir build
cd build
cmake ..
make
```
Add the path of the built library libsynthesizer.so to the Python path:
export PYTHONPATH=$PYTHONPATH:$ROOT/lib/synthesize/build
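Once the VGG16 weights from the step above are in place, one way to confirm the download is to inspect the file from Python. This is only a sketch, not part of the repository; it assumes vgg16.npy stores a pickled dict mapping layer names to [weights, biases] pairs, which is the usual layout of this 528M file.

```python
# Sanity-check the downloaded VGG16 weights (sketch; assumes a pickled dict of
# layer name -> [weights, biases], the usual layout of the 528M vgg16.npy).
import numpy as np

vgg = np.load('data/imagenet_models/vgg16.npy',
              encoding='latin1', allow_pickle=True).item()
for name, params in sorted(vgg.items()):
    print(name, [p.shape for p in params])
```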
Required environment
- Ubuntu 16.04
- Tensorflow >= 1.2.0
- CUDA >= 8.0
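A minimal check (not part of the repository) that the installed TensorFlow matches these requirements and can see a CUDA-enabled GPU:

```python
# Quick environment check (sketch): print the TensorFlow version and whether a
# CUDA-enabled GPU is visible. Expect TensorFlow >= 1.2.0 built against CUDA >= 8.0.
import tensorflow as tf

print('TensorFlow', tf.__version__)
print('Built with CUDA:', tf.test.is_built_with_cuda())
print('GPU available:', tf.test.is_gpu_available())
```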
Running the demo
- Download our trained model on the YCB-Video dataset from here, and save it to $ROOT/data/demo_models.
- Run the following script, where $GPU_ID is the GPU to run on (e.g., 0):
./experiments/scripts/demo.sh $GPU_ID
Running on the YCB-Video dataset
- Download the YCB-Video dataset from here.
- Create a symlink for the YCB-Video dataset (the name LOV is legacy, short for Learning Objects from Videos); a quick check of the resulting layout is sketched after this list.
```
cd $ROOT/data/LOV
ln -s $ycb_data data
ln -s $ycb_models models
```
- Training and testing on the YCB-Video dataset
```
cd $ROOT

# training
./experiments/scripts/lov_color_2d_train.sh $GPU_ID

# testing
./experiments/scripts/lov_color_2d_test.sh $GPU_ID
```
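Before launching training, it can help to confirm that the symlinks created above resolve. The snippet below is only a sketch, not part of the repository, and assumes it is run from $ROOT.

```python
# Verify the YCB-Video symlinks created above (sketch; run from $ROOT).
import os

for link in ('data/LOV/data', 'data/LOV/models'):
    target = os.path.realpath(link)
    print(link, '->', target, '| exists:', os.path.isdir(link))
```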