    git clone https://github.com/liruiw/OMG-Planner.git --recursive

- Setup: Ubuntu 16.04 or above, CUDA 10.0 or above.
- Install Anaconda and create a virtual environment for Python 2 or 3:

      conda create --name omg python=3.6.9   # or python=2.7.15
      conda activate omg
      pip install -r requirements.txt
- Install ycb_render:

      cd ycb_render
      python setup.py develop
- Install the submodule Sophus. Check that the submodule is correctly downloaded:

      cd Sophus
      mkdir build
      cd build
      cmake .. -Wno-error=deprecated-declarations -Wno-deprecated-declarations
      make -j8
      sudo make install
- Install Eigen from the GitHub source code here.
- Compile the new layers we introduce under layers:

      cd layers
      python setup.py install
- Install the submodule PyKDL:

      cd orocos_kinematics_dynamics
      cd sip-4.19.3
      python configure.py
      make -j8; sudo make install

      export ROS_PYTHON_VERSION=3
      cd ../orocos_kdl
      mkdir build; cd build; cmake ..
      make -j8; sudo make install

      cd ../../python_orocos_kdl
      mkdir build; cd build
      cmake .. -DPYTHON_VERSION=3.6.9 -DPYTHON_EXECUTABLE=~/anaconda2/envs/omg/bin/python3.6
      make -j8
      cp PyKDL.so ~/anaconda2/envs/omg/lib/python3.6/site-packages/
- Install Docker and NVIDIA Docker. Modify docker_build.py and docker_run.py to your needs.
- Build the image:

      $ python docker/docker_build.py

- For local machines:

      $ python docker/docker_run.py
- Run ./download_data.sh to download the data (around 600 MB).
- Run the planner to grasp objects:

      python -m omg.core -v -f demo_scene_0
      python -m omg.core -v -f demo_scene_1
- Run the planner from point cloud inputs:

      python -m omg.core -v -f demo_scene_0 -p
      python -m omg.core -v -f demo_scene_1 -p
- Run the planner in kitchen scenes with interfaces:

      python -m real_world.trial -s script.txt -v -f kitchen0
      python -m real_world.trial -s script2.txt -v -f kitchen1
- Run the planner in kitchen scenes with mouse clicks:

      python -m real_world.trial_mouse -v -f kitchen0
      python -m real_world.trial_mouse -v -f kitchen1
- Loop through the 100 generated scenes and write videos:

      python -m omg.core -exp -w
- Install PyBullet (build with eglRender for faster rendering):

      pip install pybullet gym

- Run planning in the PyBullet simulator:
      python -m bullet.panda_scene -v -f demo_scene_2
      python -m bullet.panda_scene -v -f demo_scene_3
- Run planning in the PyBullet simulator for the kitchen scene:

      python -m bullet.panda_kitchen_scene -v -f kitchen0
      python -m bullet.panda_kitchen_scene -v -f kitchen1 -s script2.txt
- Loop through the 100 generated scenes and write videos:

      python -m bullet.panda_scene -exp -w
- Generate demonstration data in data/demonstrations:

      python -m bullet.gen_data -w
- Visualize saved data:

      python -m bullet.vis_data -o img
- Generate the related files for your own mesh (.obj file in data/objects/):

      python -m real_world.process_shape -a -f box_box000
- GraspIt can be used to generate grasps with this ros_package and the Panda gripper; the resulting poses can then be saved as numpy or json files to be used in the OMG Planner (a sketch is shown below). Alternatively, one can use DexNet or direct physics simulation in Bullet.
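For reference, here is a minimal sketch of dumping externally generated grasp poses as numpy and json files. The file names, the data/grasps/ location, and the N x 4 x 4 pose convention are illustrative assumptions; check the grasp files shipped under data/grasps for the exact format the planner expects.

```python
# Hypothetical sketch: save grasp poses produced by GraspIt / DexNet / simulation
# so they can later be loaded by the planner. The array layout (N x 4 x 4
# homogeneous gripper poses in the object frame) and the file names are
# illustrative assumptions, not the planner's documented format.
import json
import os

import numpy as np

os.makedirs("data/grasps", exist_ok=True)
grasp_poses = np.tile(np.eye(4), (10, 1, 1))  # placeholder: 10 identity poses

np.save("data/grasps/my_object.npy", grasp_poses)        # numpy format
with open("data/grasps/my_object.json", "w") as f:       # json format
    json.dump({"grasp_poses": grasp_poses.tolist()}, f, indent=2)
```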
    ├── ...
    ├── OMG
    │   ├── data
    │   │   ├── grasps          # grasps of the objects
    │   │   ├── objects         # object meshes, sdf, urdf, etc
    │   │   ├── robots          # robot meshes, urdf, etc
    │   │   ├── demonstrations  # saved images of trajectory
    │   │   └── scenes          # table-top planning scenes
    │   ├── bullet
    │   │   ├── panda_scene          # tabletop grasping environment
    │   │   ├── panda_kitchen_scene  # pick-and-place environment for the cabinet scene
    │   │   ├── panda_gripper        # bullet franka panda model with gripper
    │   │   ├── gen_data             # generate and save trajectories
    │   │   └── vis_data             # visualize saved data
    │   ├── layers              # faster SDF queries with CUDA
    │   ├── omg                 # core algorithm code
    │   │   ├── core            # planning scene and object/env definitions
    │   │   ├── config          # config for planning scenes and planner
    │   │   ├── planner         # OMG planner at a high level
    │   │   ├── cost            # different cost functions and gradients
    │   │   ├── online_learner  # goal selection mechanism
    │   │   ├── optimizer       # chomp and chomp-project update
    │   │   └── ...
    │   ├── real_world          # real-world related code
    │   │   ├── trial           # cabinet environment with an interface
    │   │   ├── process_shape   # generate required file from obj
    │   │   └── ...
    │   ├── ycb_render          # rendering code
    │   │   ├── robotPose       # panda-specific robot kinematics
    │   │   └── ...
    │   └── ...
    └── ...
- The environment can process either known object poses for database target grasps or predicted grasps from other grasp detectors. It can use known object SDFs for obstacle scenes or perceived point clouds as approximations.
- Parameter tuning can be done in omg/config.py (see the sketch after this list).
- Example manipulation pipelines for kitchen scenes are provided.
- Please use the GitHub issue tracker to report bugs. For other questions please contact Lirui Wang.
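As a starting point for parameter tuning, below is a hypothetical sketch of overriding a couple of planner settings in code. It assumes omg/config.py exposes a global cfg object; the attribute names used here are illustrative placeholders, so check omg/config.py for the actual parameters and their defaults.

```python
# Hypothetical sketch of tweaking planner parameters in code rather than
# editing omg/config.py directly. It assumes config.py exposes a global `cfg`
# object; the attribute names below are illustrative placeholders.
from omg.config import cfg

cfg.timesteps = 30     # placeholder: trajectory length
cfg.optim_steps = 100  # placeholder: number of optimization iterations
print(cfg)             # inspect the full set of parameters before planning
```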
If you find OMG-Planner useful in your research, please consider citing:
@inproceedings{wang2020manipulation,
title={Manipulation Trajectory Optimization with Online Grasp Synthesis and Selection},
author={Lirui Wang and Yu Xiang and Dieter Fox},
booktitle={Robotics: Science and Systems (RSS)},
year={2020}
}
The OMG Planner is licensed under the MIT License.