  • Stars: 204
  • Rank: 192,063 (top 4%)
  • Language: Python
  • Created: over 6 years ago
  • Updated: over 4 years ago

Repository Details

An implementation of our RA-L work 'Real-world Multi-object, Multi-grasp Detection'

grasp_multiObject_multiGrasp

This is the implementation of our RA-L work 'Real-world Multi-object, Multi-grasp Detection'. The detector takes an RGB-D image as input and, in a single shot, predicts multiple grasp candidates for a single object or multiple objects. The original arXiv paper can be found here. The final version will be updated after the publication process.
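The demo script name (demo_graspRGD.py) suggests the RG-D input convention described in the paper, where the depth map is substituted for the blue channel so a network pretrained on RGB images can consume RGB-D data. The snippet below is only a minimal sketch of that substitution; the file names and the depth normalization are placeholder assumptions, not the repository's actual preprocessing code.

import cv2
import numpy as np

# Placeholder inputs: an 8-bit color image and a raw depth map (any bit depth).
rgb = cv2.imread("example_rgb.png")                                  # H x W x 3, BGR order
depth = cv2.imread("example_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Scale depth to 0-255 so it matches the 8-bit color channels.
depth_norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Replace the blue channel (index 0 in OpenCV's BGR layout) with depth.
rgd = rgb.copy()
rgd[:, :, 0] = depth_norm
cv2.imwrite("example_rgd.png", rgd)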


If you find it helpful for your research, please consider citing:

@article{chu2018deep,
  title = {Real-World Multiobject, Multigrasp Detection},
  author = {F. Chu and R. Xu and P. A. Vela},
  journal = {IEEE Robotics and Automation Letters},
  year = {2018},
  volume = {3},
  number = {4},
  pages = {3355-3362},
  DOI = {10.1109/LRA.2018.2852777},
  ISSN = {2377-3766},
  month = {Oct}
}

If you have any questions, please contact me at fujenchu[at]gatech[dot]edu

Demo

  1. Clone this repository
git clone https://github.com/ivalab/grasp_multiObject_multiGrasp.git
cd grasp_multiObject_multiGrasp
  2. Build Cython modules
cd lib
make clean
make
cd ..
  3. Install the Python COCO API
cd data
git clone https://github.com/pdollar/coco.git
cd coco/PythonAPI
make
cd ../../..
  4. Download pretrained models
  • Download the trained grasp model from the Dropbox drive
  • Put it under output/res50/train/default/
  5. Run the demo
./tools/demo_graspRGD.py --net res50 --dataset grasp

You should see the result images pop up.
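Each detection drawn in the pop-up images is an oriented grasp rectangle. As a rough illustration of how such a rectangle can be visualized (this is not the repository's own plotting code, and the grasp parameters below are made up), a (center, size, angle) rectangle can be drawn with OpenCV:

import cv2
import numpy as np

img = cv2.imread("example_rgb.png")

# Hypothetical grasp: center (x, y) and size (w, h) in pixels, angle in degrees.
grasp = ((320.0, 240.0), (80.0, 30.0), 35.0)

corners = cv2.boxPoints(grasp).astype(np.int32)   # four corners of the oriented rectangle
cv2.polylines(img, [corners], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite("grasp_vis.png", img)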

Train

  1. Generate data
    1-1. Download the Cornell Grasping Dataset
    1-2. Run dataPreprocessingTest_fasterrcnn_split.m (modify the paths to match your directory structure)
    1-3. Follow the 'Format Your Dataset' section here to check that your data follows the VOC format (a minimal layout check is sketched after this section)

  2. Train

./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50
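
Regarding step 1-3, "VOC format" refers to the standard Pascal VOC layout (JPEGImages/, Annotations/, ImageSets/Main/, with one XML annotation per image). The sketch below is a generic sanity check of that layout, not this repository's own validation code; the dataset root and split file name are placeholders.

import os

# Placeholder path to the generated dataset root; adjust to your structure.
root = "data/graspRGB"

# The three standard VOC subdirectories should exist.
for sub in ["JPEGImages", "Annotations", "ImageSets/Main"]:
    assert os.path.isdir(os.path.join(root, sub)), "missing directory: " + sub

# Every sample listed in the split file should have a matching annotation file.
with open(os.path.join(root, "ImageSets/Main/train.txt")) as f:
    ids = [line.strip() for line in f if line.strip()]

missing = [i for i in ids if not os.path.isfile(os.path.join(root, "Annotations", i + ".xml"))]
print("%d samples listed, %d missing annotations" % (len(ids), len(missing)))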

ROS version?

Yes! Please find it HERE

Acknowledgment

This repo borrows tons of code from

More Repositories

  1. grasp_multiObject (MATLAB, 104 stars): Robotic grasp dataset for multi-object, multi-grasp evaluation with RGB-D data. The dataset is annotated using the same protocol as the Cornell Dataset and can be used as a multi-object extension of it.
  2. GraspKpNet (Python, 35 stars)
  3. simData (Python, 34 stars): The dataset of our RA-L work 'Learning Affordance Segmentation for Real-world Robotic Manipulation via Synthetic Images'
  4. KGN (Python, 33 stars): [ICRA 2023 & IROS 2023] Code release for Keypoint-GraspNet (KGN) and Keypoint-GraspNet-V2 (KGNv2)
  5. FullResults_GoodFeature (27 stars): Figures of full evaluation results for the TRO submission "Good Feature Matching: Towards Accurate, Robust VO/VSLAM with Low Latency"
  6. grasp_annotation_tool (MATLAB, 27 stars): A simple MATLAB GUI for annotating rotated grasp bounding boxes
  7. AffKpNet (Python, 23 stars): The implementation and supplementary material for our RA-L work "An Affordance Keypoint Detection Network for Robot Manipulation"
  8. grasp_primitiveShape (C++, 16 stars): Implementation code for the paper "Using Synthetic Data and Deep Networks to Recognize Primitive Shapes for Object Grasping"
  9. meta_ClosedLoopBench (10 stars): Meta package for closed-loop SLAM benchmarking in Gazebo/ROS
  10. NavBench (8 stars): Benchmark worlds for testing autonomous navigation algorithms
  11. simData_imgSaver (Python, 7 stars): Tools to save images of simulated objects in Gazebo for affordance segmentation
  12. Benchmarking_SLAM (MATLAB, 7 stars): Review and summary of SLAM benchmarks
  13. ORB_Data (Python, 6 stars): A collection of settings files (*.yaml) used for GF-ORB-SLAM evaluation
  14. ROS_image_puslisher_from_socket (C++, 5 stars): Receives RGB-D images over a socket (from Windows) and publishes them to a ROS topic (on Linux)
  15. WDiscOOD (Python, 5 stars): [ICCV 2023] Official implementation of "WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis"
  16. FullResults_GoodGraph (4 stars): Figures of full evaluation results for the TRO submission "Good Graph to Optimize: Budget-Aware, Cost-Effective Bundle Adjustment in Visual SLAM"
  17. aruco_tag_saver_ros (Python, 4 stars): ROS node version of the ArUco tag saver (by Anina Mu)
  18. affordanceNet_Novel (Python, 4 stars): An implementation of our RA-L work 'Toward Affordance Detection and Ranking on Novel Objects for Real-world Robotic Manipulation'
  19. affordanceNet_DA (Jupyter Notebook, 4 stars): An implementation of our RA-L work 'Learning Affordance Segmentation for Real-world Robotic Manipulation via Synthetic Images'
  20. aruco_tag_saver (Python, 3 stars): Saves ground truth with ArUco tags (Kinect, Meta ports)
  21. me_sgl (Python, 3 stars): Manipulation experiments for Symbolic Goal Learning in a hybrid, modular framework for human instruction following
  22. affordanceNet_Context (Python, 3 stars)
  23. MultiAffordanceNet (2 stars): Implementation of the "Toward Affordance Detection and Ranking on Novel Objects for Real-world Robotic Manipulation" submission
  24. SGL_SGP_data_generator (Python, 2 stars): Data generator based on AI2THOR for symbolic goal learning and scene graph parsing
  25. dbrt_for_handy (C++, 1 star): Package for running Probabilistic Articulated Real-Time Tracking on the IVALab Handy robotic manipulator
  26. gazebo_turtlebot_simulator (Python, 1 star): Closed-loop SLAM benchmarking simulator based on Gazebo/ROS
  27. Scene_Graph_Parsing (Python, 1 star): Pretraining scene graph parsing tasks for symbolic goal learning of robotic manipulation