

CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation

License: MIT

This repository is the PyTorch implementation of our paper:

CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation
Muhammad Zubair Irshad, Thomas Kollar, Michael Laskey, Kevin Stone, Zsolt Kira
International Conference on Robotics and Automation (ICRA), 2022

[Project Page] [arXiv] [PDF] [Video] [Poster]

Explore CenterSnap in Colab

Follow-up ECCV'22 work:

ShAPO: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization
Muhammad Zubair Irshad, Sergey Zakharov, Rares Ambrus, Thomas Kollar, Zsolt Kira, Adrien Gaidon
European Conference on Computer Vision (ECCV), 2022

[Project Page] [arXiv] [PDF] [Video] [Poster]

Citation

If you find this repository useful, please consider citing:

@inproceedings{irshad2022centersnap,
  title={CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation},
  author={Muhammad Zubair Irshad and Thomas Kollar and Michael Laskey and Kevin Stone and Zsolt Kira},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2022},
  url={https://arxiv.org/abs/2203.01929},
}

@inproceedings{irshad2022shapo,
  title={ShAPO: Implicit Representations for Multi Object Shape Appearance and Pose Optimization},
  author={Muhammad Zubair Irshad and Sergey Zakharov and Rares Ambrus and Thomas Kollar and Zsolt Kira and Adrien Gaidon},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022},
  url={https://arxiv.org/abs/2207.13691},
}

Contents

💻 Environment

Create a Python 3.8 virtual environment and install the requirements:

cd $CenterSnap_Repo
conda create -y --prefix ./env python=3.8
conda activate ./env/
./env/bin/python -m pip install --upgrade pip
./env/bin/python -m pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html

The code was built and tested with CUDA 10.2.
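To sanity-check the setup before training, a quick script like the following (plain PyTorch introspection calls, nothing repo-specific) confirms that the installed build can see a GPU:

# check_env.py: quick sanity check for the PyTorch setup (illustrative)
import torch

print("torch:", torch.__version__)            # requirements.txt pins 1.7.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))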

📊 Dataset

New update: please check out the distributed data-collection script of our new ECCV'22 work ShAPO if you'd like to collect your own data from scratch in a couple of hours. That distributed script collects data in the same format required by CenterSnap, although with a few minor modifications as described in that repo.

  1. Download the pre-processed dataset

We recommend downloading the preprocessed dataset to train and evaluate the CenterSnap model. Download and untar the Synthetic (868GB) and Real (70GB) datasets. These archives contain all the training and validation data you need to replicate our results (a quick layout check is sketched after this list).

cd $CenterSnap_Repo/data
wget https://tri-robotics-public.s3.amazonaws.com/centersnap/CAMERA.tar.gz
tar -xzvf CAMERA.tar.gz

wget https://tri-robotics-public.s3.amazonaws.com/centersnap/Real.tar.gz
tar -xzvf Real.tar.gz

The data directory structure should follow:

data
├── CAMERA
│   ├── train
│   └── val_subset
├── Real
│   ├── train
│   └── test
  2. To prepare your own dataset, we provide additional scripts under prepare_data.
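Before launching training, a small check like the following (a hypothetical verify_data.py, assuming exactly the layout shown above) can catch a misplaced archive early:

# verify_data.py: confirm the dataset layout shown above (illustrative sketch)
from pathlib import Path

expected = [
    "data/CAMERA/train",
    "data/CAMERA/val_subset",
    "data/Real/train",
    "data/Real/test",
]
for d in expected:
    print(d, "ok" if Path(d).is_dir() else "MISSING")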

✨ Training and Inference

  1. Train on NOCS Synthetic (requires 13GB GPU memory):
./runner.sh net_train.py @configs/net_config.txt

Note that runner.sh is equivalent to using python to run the script; additionally, it sets up the PYTHONPATH and the CenterSnap environment path automatically.

  2. Finetune on NOCS Real Train (note that good results can be obtained after finetuning on the Real train set for only a few epochs, i.e. 1-5):
./runner.sh net_train.py @configs/net_config_real_resume.txt --checkpoint /path/to/best/checkpoint
  3. Inference on a NOCS Real Test Subset

Download a small NOCS Real subset from [here]

./runner.sh inference/inference_real.py @configs/net_config.txt --data_dir path_to_nocs_test_subset --checkpoint checkpoint_path_here

You should see the visualizations saved in results/CenterSnap. Change --output_path in *config.txt to save them to a different folder.

  4. Optional (Shape Auto-Encoder Pre-training)

We provide a pretrained model for the shape auto-encoder to be used for data collection and inference. Although our codebase doesn't require training the shape auto-encoder separately, if you would like to do so, we provide additional scripts under external/shape_pretraining (a rough sketch of the idea follows below).
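The scripts under external/shape_pretraining define the actual model; purely as an illustration of the general idea (the class name, layer sizes, and latent dimension below are assumptions, not the repository's architecture), a point-cloud auto-encoder compresses each shape to a latent code and decodes it back to points:

# Illustrative point-cloud auto-encoder, NOT the architecture used in
# external/shape_pretraining; layer sizes and latent_dim are assumptions.
import torch
import torch.nn as nn

class ShapeAutoEncoder(nn.Module):
    def __init__(self, num_points=1024, latent_dim=128):
        super().__init__()
        self.num_points = num_points
        # PointNet-style encoder: shared per-point MLP followed by a max-pool
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # decoder: latent code -> flat point set
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        feat = self.encoder(pts.transpose(1, 2))  # (B, latent_dim, N)
        code = feat.max(dim=2).values             # global shape code (B, latent_dim)
        recon = self.decoder(code).view(-1, self.num_points, 3)
        return code, recon

Training such a model would minimize a reconstruction loss (e.g. Chamfer distance) between pts and recon; CenterSnap then regresses latent shape codes of this kind at detected object centers.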

๐Ÿ“ FAQ

1. I am not getting good performance on my custom camera images, i.e. RealSense, OAK-D, or others.

2. How do I generate good zero-shot results on the HSR robot camera?

  • Ans: Please see the answer to FAQ 1 above for best results. An alternative solution we employed for a quick demo on the HSR robot is to warp the RGB-D observations coming from the HSR camera (or any other custom camera) so that they match the intrinsics of the NOCS Real camera, which we finetune our model on. This way one can get decent results while finetuning only on the NOCS Real dataset; a sketch of the idea follows below. Please see this answer and the corresponding gist for the code.
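A minimal sketch of that warping idea, assuming a pinhole model: changing intrinsics for the same viewpoint is a pure 2D homography K_nocs @ inv(K_src), so both RGB and depth can be remapped with OpenCV. Here warp_to_nocs is a hypothetical helper, K_src is a placeholder for your camera's calibration, and the NOCS Real intrinsics should be verified against your copy of the dataset:

# Warp RGB-D observations to the NOCS Real camera intrinsics (illustrative).
# K_src is a placeholder; replace it with your camera's calibration.
import cv2
import numpy as np

K_nocs = np.array([[591.0125, 0.0, 322.525],     # NOCS Real intrinsics
                   [0.0, 590.16775, 244.11084],  # (verify against the dataset)
                   [0.0, 0.0, 1.0]])
K_src = np.array([[615.0, 0.0, 320.0],           # placeholder custom camera
                  [0.0, 615.0, 240.0],
                  [0.0, 0.0, 1.0]])

def warp_to_nocs(rgb, depth):
    # Same viewpoint, different intrinsics: a pure 2D homography.
    H = K_nocs @ np.linalg.inv(K_src)
    size = (640, 480)  # NOCS Real image resolution (width, height)
    rgb_w = cv2.warpPerspective(rgb, H, size)
    # Nearest-neighbor keeps depth values from blending across object edges.
    depth_w = cv2.warpPerspective(depth, H, size, flags=cv2.INTER_NEAREST)
    return rgb_w, depth_w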

3. I am getting no CUDA GPUs available while running Colab.

  • Ans: Make sure that you have enabled the GPU in Colab under Runtime -> Change runtime type.

4. I am getting RuntimeError: received 0 items of ancdata

  • Ans: Increase the open-file limit to 2048 or 8096 via ulimit -n 2048. The same fix can also be applied from Python, as sketched below.
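Both calls below are standard library / PyTorch APIs; the sharing-strategy switch is a common alternative workaround for this DataLoader error:

# Two common Python-side workarounds for the "ancdata" DataLoader error.
import resource
import torch.multiprocessing

# Raise the open-file limit for this process (equivalent to `ulimit -n 2048`).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(2048, soft), hard))

# Or avoid passing file descriptors between DataLoader workers entirely.
torch.multiprocessing.set_sharing_strategy('file_system')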

5. I am getting RuntimeError: CUDA error: no kernel image is available for execution on the device or You requested GPUs: [0] But your machine only has: []

  • Ans: Check that your PyTorch installation matches your CUDA installation (a quick diagnostic is sketched after these lines). Try the following:
  1. Install CUDA 10.2 and re-run the install command from requirements.txt.

  2. Or install the PyTorch build for your CUDA version, i.e. change these lines in requirements.txt:

torch==1.7.1
torchvision==0.8.2
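To see which CUDA version the installed wheel was built for, and what the GPU actually supports, standard PyTorch introspection calls help pinpoint the mismatch:

# Diagnose a PyTorch/CUDA mismatch (standard PyTorch introspection calls).
import torch

print("torch build:", torch.__version__, "| built for CUDA:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0),
          "| compute capability:", torch.cuda.get_device_capability(0))
else:
    print("no usable GPU: the wheel's CUDA version likely mismatches the driver")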

6. I am seeing zero val metrics in wandb

  • Ans: Make sure you threshold the metric plots. PyTorch Lightning's initial validation sanity check logs a very high metric value, which makes all subsequent points look like zero on the chart. Manually threshold (or hide) that outlier point in wandb to see the actual metrics; a sketch for avoiding the outlier entirely follows below.
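If you prefer to remove the outlier at the source, PyTorch Lightning's Trainer accepts num_sanity_val_steps (a standard Lightning argument); where this repo constructs its Trainer is an assumption here, so treat this as a sketch:

# Sketch: skip Lightning's pre-training validation sanity check so the first
# logged validation metric is a real one. num_sanity_val_steps is a standard
# pytorch_lightning.Trainer argument; adapt to wherever this repo builds it.
import pytorch_lightning as pl

trainer = pl.Trainer(num_sanity_val_steps=0)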

Follow-up works

Acknowledgments

  • This code is built upon the implementation from SimNet.

Licenses

More Repositories

1. Awesome-Implicit-NeRF-Robotics: A comprehensive list of Implicit Representations and NeRF papers relating to the Robotics/RL domain, including papers, code, and related websites (1,229 stars)
2. Awesome-Robotics-3D: A curated list of 3D Vision papers relating to the Robotics domain in the era of large models, i.e. LLMs/VLMs, inspired by awesome-computer-vision, including papers, code, and related websites (470 stars)
3. NeO-360: PyTorch code for the ICCV'23 paper "NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes" (Python, 225 stars)
4. shapo: PyTorch code for the ECCV'22 paper "ShAPO: Implicit Representations for Multi-Object Shape, Appearance and Pose Optimization" (Python, 179 stars)
5. NeRF-MAE: PyTorch code for the ECCV'24 paper "NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields" (Python, 75 stars)
6. articulated-object-nerf: Experimental repo for modelling Articulated Object Neural Radiance Fields (Python, 47 stars)
7. manipulator_parameter_identification: Dynamic parameter identification of a 7-DOF robotic manipulator (C++, 10 stars)
8. pointgoal_navigation_benchmarks: Supervised learning benchmarks for point-goal navigation in indoor cluttered environments in Habitat-API (Python, 5 stars)
9. computer_vision_gatech: Computer Vision (CS 6476) Georgia Tech assignment/project solutions (Jupyter Notebook, 4 stars)
10. environement_perception_stack_for_self_driving_cars: Drivable surface estimation, lane estimation, and object and obstacle detection stack for self-driving cars (Jupyter Notebook, 2 stars)
11. zubair-irshad (2 stars)
12. 7-DOF_Arm_Regression: Identifying the dynamic parameters of a 7-DOF robot arm for inverse dynamic control (MATLAB, 1 star)
13. complex_maze_navigation: ROS-based TurtleBot3 mobile robot navigator using sign recognition based on image classification (Python, 1 star)
14. udacity_deep_rl: My solutions (with explanations) to the Udacity Deep Reinforcement Learning Nanodegree assignments, mini-projects, and projects (Jupyter Notebook, 1 star)
15. imitation_learning (Python, 1 star)
16. zubair-irshad.github.io (HTML, 1 star)
17. sign_recognition_imageclassifier: Sign recognition using an SVM classifier (Python, 1 star)
18. mobilerobot_dynamic_obstacle_avoidance: Autonomous TurtleBot3 navigation using given waypoints and uncertain obstacle avoidance (Python, 1 star)
19. classical_controllers_for_self_driving_car (Python, 1 star)