Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation
Repository address: https://github.com/Skylark0924/Rofunc
The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide an Isaac Gym-based robot simulator for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the process of demonstration data collection, processing, learning, and deployment on robots.
Installation
Install from PyPI (stable version)
The installation is very easy:

```bash
pip install rofunc

# [Option] Install with baseline RL frameworks (SKRL, RLlib, Stable Baselines3) and Envs (gymnasium[all], mujoco_py)
pip install rofunc[baselines]
```

and, as you'll find later, it's easy to use as well!

```python
import rofunc as rf
```

Thus, have fun in the robotics world!
Note: Several requirements need to be installed before using the package. Please refer to the installation guide for more details.
Install from Source (nightly version, recommended)
```bash
git clone https://github.com/Skylark0924/Rofunc.git
cd Rofunc

# Create a conda environment
# Python 3.8 is strongly recommended
conda create -n rofunc python=3.8
conda activate rofunc

# For Linux users
sh ./scripts/install.sh
# [Option] Install with baseline RL frameworks (SKRL, RLlib, Stable Baselines3)
sh ./scripts/install_w_baselines.sh
# [Option] For MacOS users (brew is required; the Isaac Gym-based simulator is not supported on MacOS)
sh ./scripts/mac_install.sh
```
Note: If you want to use functions related to the ZED camera, you need to install the ZED SDK manually. (We have tried to package it as a `.whl` file and add it to `requirements.txt`; unfortunately, the ZED SDK is not very packaging-friendly and doesn't support direct installation.)
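For reference, a typical manual setup on Linux looks like the sketch below. The installer filename is a placeholder and the install path is an assumption that depends on your SDK version; see the Stereolabs documentation for the exact steps.

```bash
# Download the ZED SDK installer from https://www.stereolabs.com/developers/
# (the filename below is a placeholder; it depends on your Ubuntu/CUDA/SDK version)
chmod +x ZED_SDK_Ubuntu20_cuda11.x_vX.X.X.run
./ZED_SDK_Ubuntu20_cuda11.x_vX.X.X.run

# The SDK ships a helper script that installs the pyzed Python API into the
# active environment (default install path shown; adjust to your setup)
python /usr/local/zed/get_python_api.py
```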
Documentation
To give you a quick overview of the pipeline of rofunc
, we provide an interesting example of learning to play Taichi
from human demonstration. You can find it in the Quick start
section of the documentation.
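To make the learning step concrete, here is a minimal, self-contained sketch of Gaussian Mixture Regression (the GMR entry in the Learning column below), written against scikit-learn rather than Rofunc's own API: a joint GMM is fitted over (time, position) pairs, then conditioned on time to reproduce a trajectory.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

np.random.seed(0)

# Toy "demonstrations": three noisy 1-D sine trajectories over time
t = np.tile(np.linspace(0, 1, 100), 3)
x = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)
data = np.column_stack([t, x])

# Fit a joint GMM over (t, x)
gmm = GaussianMixture(n_components=5, covariance_type='full').fit(data)

def gmr(t_query):
    """Condition each Gaussian on t and blend the conditional means."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for t_query (input marginal;
    # the constant 1/sqrt(2*pi) cancels in the normalization below)
    h = np.array([w * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    # Conditional mean of x given t for each component
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
                     for m, c in zip(means, covs)])
    return float(h @ cond)

print(gmr(0.25))  # roughly sin(2*pi*0.25) = 1.0
```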
The available functions and development plans are listed in the table below.
Note: ✅ Achieved | 🔃 Reformatting | ⛔ TODO
| Data | Learning | P&C | Tools | Simulator |
|---|---|---|---|---|
| xsens.record | DMP | LQT | Config | Franka |
| xsens.export | GMR | LQTBi | robolab.coord | CURI |
| xsens.visual | TPGMM | LQTFb | robolab.fk | CURIMini |
| opti.record | TPGMMBi | LQTCP | robolab.ik | CURISoftHand |
| opti.export | TPGMM_RPCtl | LQTCPDMP | robolab.fd | Walker |
| opti.visual | TPGMM_RPRepr | LQR | robolab.id | Gluon |
| zed.record | TPGMR | PoGLQRBi | visualab.dist | Baxter |
| zed.export | TPGMRBi | iLQR | visualab.ellip | Sawyer |
| zed.visual | TPHSMM | iLQRBi | visualab.traj | Multi-Robot |
| emg.record | RLBaseLine(SKRL) | iLQRFb | | |
| emg.export | RLBaseLine(RLlib) | iLQRCP | | |
| emg.visual | RLBaseLine(ElegRL) | iLQRDyna | | |
| mmodal.record | BCO(RofuncIL) | iLQRObs | | |
| mmodal.export | BC-Z(RofuncIL) | MPC | | |
| | STrans(RofuncIL) | RMP | | |
| | RT-1(RofuncIL) | | | |
| | A2C(RofuncRL) | | | |
| | PPO(RofuncRL) | | | |
| | SAC(RofuncRL) | | | |
| | TD3(RofuncRL) | | | |
| | CQL(RofuncRL) | | | |
| | TD3BC(RofuncRL) | | | |
| | DTrans(RofuncRL) | | | |
| | EDAC(RofuncRL) | | | |
| | AMP(RofuncRL) | | | |
| | ASE(RofuncRL) | | | |
| | ODTrans(RofuncRL) | | | |
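To illustrate what the LQT family in the P&C column computes, here is a minimal, self-contained NumPy sketch of batch linear quadratic tracking through via-points; this shows the underlying technique only and is not Rofunc's own API:

```python
import numpy as np

# Minimal batch LQT: a double integrator tracks two via-points in 1-D.
dt, T = 0.01, 100
A = np.array([[1.0, dt], [0.0, 1.0]])  # state: [position, velocity]
B = np.array([[0.0], [dt]])
nx, nu = 2, 1

# Stack the dynamics over the horizon: x_[1..T] = Sx @ x0 + Su @ u_[0..T-1]
powers = [np.eye(nx)]
for _ in range(T):
    powers.append(A @ powers[-1])          # powers[k] = A^k
Sx = np.zeros((T * nx, nx))
Su = np.zeros((T * nx, T * nu))
for k in range(1, T + 1):
    Sx[(k - 1) * nx:k * nx, :] = powers[k]
    for j in range(k):
        Su[(k - 1) * nx:k * nx, j * nu:(j + 1) * nu] = powers[k - 1 - j] @ B

# Quadratic cost: precision only at the two via-points, small control penalty
mu = np.zeros(T * nx)
Q = np.zeros((T * nx, T * nx))
for t_via, pos in [(49, 1.0), (99, -1.0)]:
    mu[t_via * nx] = pos                   # desired position at the via-point
    Q[t_via * nx, t_via * nx] = 1e3        # weight position only; velocity is free
R = 1e-2 * np.eye(T * nu)

# Closed-form batch solution: u* = argmin (x - mu)' Q (x - mu) + u' R u
x0 = np.zeros(nx)
u = np.linalg.solve(Su.T @ Q @ Su + R, Su.T @ Q @ (mu - Sx @ x0))
x = (Sx @ x0 + Su @ u).reshape(T, nx)
print(x[49, 0], x[99, 0])                  # both close to the via-point targets
```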
Star History
Citation
If you use rofunc in a scientific publication, we would appreciate citations to the following paper:
```bibtex
@misc{Rofunc2022,
  author = {Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
  title = {Rofunc: The full process python package for robot learning from demonstration and robot manipulation},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Skylark0924/Rofunc}},
}
```
Related Papers
- Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
```bibtex
@article{liu2022robot,
  title = {Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
  author = {Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
  journal = {IEEE Robotics and Automation Letters},
  volume = {7},
  number = {2},
  pages = {5159--5166},
  year = {2022},
  publisher = {IEEE}
}
```
- SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
```bibtex
@article{liu2023softgpt,
  title = {SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer},
  author = {Liu, Junjia and Li, Zhihao and Calinon, Sylvain and Chen, Fei},
  journal = {arXiv preprint arXiv:2306.12677},
  year = {2023}
}
```
- BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
```bibtex
@article{liu2023birp,
  title = {BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration},
  author = {Liu, Junjia and Sim, Hengyi and Li, Chenzui and Chen, Fei},
  journal = {arXiv preprint arXiv:2307.05933},
  year = {2023}
}
```
The Team
Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.
Acknowledgements
We would like to acknowledge the following projects: