  • Stars: 437
  • Rank: 99,659 (Top 2%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 2 years ago
  • Updated: 4 months ago


Repository Details


Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides convenient Python functions covering demonstration collection, data pre-processing, LfD algorithms, planning, and control methods, and it also includes an Isaac Gym-based robot simulator for evaluation. The package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes demonstration data collection, processing, learning, and deployment on robots.

Installation

Install from PyPI (stable version)

The installation is very easy,

pip install rofunc

# [Option] Install with baseline RL frameworks (SKRL, RLlib, Stable Baselines3) and Envs (gymnasium[all], mujoco_py)
pip install rofunc[baselines]

and as you'll find later, it's easy to use as well!

import rofunc as rf

Thus, have fun in the robotics world!
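
As a quick sanity check after installation, the snippet below imports the package, prints its version, and lists the top-level submodules the installed build exposes. This is only a sketch: the __version__ attribute is an assumption and the code falls back gracefully if it is absent; the submodule listing uses the standard library only.

import pkgutil
import rofunc as rf

# Print the installed version if the package exposes one (assumption: a standard __version__ attribute).
print("rofunc version:", getattr(rf, "__version__", "unknown"))

# List the top-level submodules shipped with this build (data, learning, planning, simulator modules, etc.).
print(sorted(module.name for module in pkgutil.iter_modules(rf.__path__)))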

Note: Several requirements need to be installed before using the package. Please refer to the installation guide for more details.

Install from Source (nightly version, recommended)

git clone https://github.com/Skylark0924/Rofunc.git
cd Rofunc

# Create a conda environment
# Python 3.8 is strongly recommended
conda create -n rofunc python=3.8

# For Linux users
sh ./scripts/install.sh
# [Option] Install with baseline RL frameworks (SKRL, RLlib, Stable Baselines3)
sh ./scripts/install_w_baselines.sh
# [Option] For macOS users (brew is required; the Isaac Gym-based simulator is not supported on macOS)
sh ./scripts/mac_install.sh
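
Because the conda environment above targets Python 3.8, a small standard-library check like the following (a sketch, not part of the install scripts) can confirm that the activated interpreter matches the recommendation before running anything else:

import sys

# The conda environment created above targets Python 3.8 (strongly recommended by Rofunc).
if sys.version_info[:2] == (3, 8):
    print("Python 3.8 detected: matches the recommended environment.")
else:
    print(f"Warning: running Python {sys.version_info.major}.{sys.version_info.minor}; "
          "Python 3.8 is the recommended version for the source install.")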

Note: If you want to use the ZED-camera-related functions, you need to install the ZED SDK manually. (We tried to package it as a .whl file and add it to requirements.txt; unfortunately, the ZED SDK does not support direct installation this way.)
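
If you are unsure whether the ZED SDK is already set up in the current environment, a lightweight check for its Python bindings can help. This assumes the SDK's Python API is installed under the usual pyzed package name; it is a sketch, not an official check.

import importlib.util

# The ZED SDK ships its Python API as the "pyzed" package (installed separately, as noted above).
if importlib.util.find_spec("pyzed") is None:
    print("pyzed not found: install the ZED SDK manually before using the zed.* functions.")
else:
    print("pyzed found: the ZED-related recording/export/visualization functions should be usable.")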

Documentation

Documentation | Example Gallery

To give you a quick overview of the rofunc pipeline, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.

The available and planned functions are listed below.

Note: βœ… = achieved, πŸ”ƒ = reformatting, β›” = TODO

Data:
xsens.record βœ…, xsens.export βœ…, xsens.visual βœ…, opti.record βœ…, opti.export βœ…, opti.visual βœ…, zed.record βœ…, zed.export βœ…, zed.visual βœ…, emg.record βœ…, emg.export βœ…, emg.visual βœ…, mmodal.record β›”, mmodal.export βœ…

Learning:
DMP β›”, GMR βœ…, TPGMM βœ…, TPGMMBi βœ…, TPGMM_RPCtl βœ…, TPGMM_RPRepr βœ…, TPGMR βœ…, TPGMRBi βœ…, TPHSMM βœ…, RLBaseLine(SKRL) βœ…, RLBaseLine(RLlib) βœ…, RLBaseLine(ElegRL) βœ…, BCO(RofuncIL) πŸ”ƒ, BC-Z(RofuncIL) β›”, STrans(RofuncIL) β›”, RT-1(RofuncIL) β›”, A2C(RofuncRL) βœ…, PPO(RofuncRL) βœ…, SAC(RofuncRL) βœ…, TD3(RofuncRL) βœ…, CQL(RofuncRL) β›”, TD3BC(RofuncRL) β›”, DTrans(RofuncRL) πŸ”ƒ, EDAC(RofuncRL) β›”, AMP(RofuncRL) βœ…, ASE(RofuncRL) βœ…, ODTrans(RofuncRL) β›”

Planning & Control (P&C):
LQT βœ…, LQTBi βœ…, LQTFb βœ…, LQTCP βœ…, LQTCPDMP βœ…, LQR βœ…, PoGLQRBi βœ…, iLQR πŸ”ƒ, iLQRBi πŸ”ƒ, iLQRFb πŸ”ƒ, iLQRCP πŸ”ƒ, iLQRDyna πŸ”ƒ, iLQRObs πŸ”ƒ, MPC β›”, RMP β›”

Tools:
Config βœ…, robolab.coord βœ…, robolab.fk βœ…, robolab.ik βœ…, robolab.fd β›”, robolab.id β›”, visualab.dist βœ…, visualab.ellip βœ…, visualab.traj βœ…

Simulator:
Franka βœ…, CURI βœ…, CURIMini πŸ”ƒ, CURISoftHand βœ…, Walker βœ…, Gluon πŸ”ƒ, Baxter πŸ”ƒ, Sawyer πŸ”ƒ, Multi-Robot βœ…

Star History

[Star history chart]

Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@misc{Rofunc2022,
      author = {Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
      title = {Rofunc: The full process python package for robot learning from demonstration and robot manipulation},
      year = {2022},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/Skylark0924/Rofunc}},
}

Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
         title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
         author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
         journal={IEEE Robotics and Automation Letters},
         volume={7},
         number={2},
         pages={5159--5166},
         year={2022},
         publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@article{liu2023softgpt,
  title={SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer},
  author={Liu, Junjia and Li, Zhihao and Calinon, Sylvain and Chen, Fei},
  journal={arXiv preprint arXiv:2306.12677},
  year={2023}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@article{liu2023birp,
  title={BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration},
  author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Chen, Fei},
  journal={arXiv preprint arXiv:2307.05933},
  year={2023}
}

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL

Planning and Control

  1. Robotics codes from scratch (RCFS)

More Repositories

  1. Machine-Learning-is-ALL-You-Need (Python, 369 stars): πŸ”₯πŸŒŸγ€ŠMachine Learning 格物志》: ML + DL + RL basic codes and notes by sklearn, PyTorch, TensorFlow, Keras & the most important, from scratch!πŸ’ͺ This repository is ALL You Need!
  2. Reinforcement-Learning-in-Robotics (HTML, 247 stars): A private learning repository for reinforcement learning techniques used in robotics.
  3. awesome-bimanual-manipulation (116 stars): Robot bimanual manipulation / dual-arm manipulation.
  4. System_Identification (MATLAB, 42 stars): The usage of the MATLAB System Identification Toolbox and PID parameter adjustment.
  5. awesome-target-driven-navigation (40 stars): Papers and summaries about state-of-the-art robot target-driven navigation.
  6. Rofunc-ros (Python, 22 stars): A ROS package for human-centered interactive intelligent humanoid robots.
  7. Gamma_Reward (Python, 13 stars)
  8. advanced_robotics_project (Python, 13 stars): Final project for ENGG 5402 Advanced Robotics at CUHK.
  9. awesome-offlineRL-in-robotics (8 stars)
  10. awesome-long-horizon-manipulation (6 stars)
  11. TendonTrack_PILCO (Python, 5 stars)
  12. To_be_a_Roboticist (TeX, 5 stars)
  13. HuggingFace-finetune-tutorial (Python, 5 stars)
  14. DDPG-pytorch-tf-cpp (Makefile, 4 stars): DDPG libtorch version + TensorFlow C++ version, for real-time accelerated reinforcement learning.
  15. RM_Infantry-1 (C, 3 stars): RoboMaster 2019 SJTU JDragon.
  16. awesome-model-based-reinforcement-learning (3 stars)
  17. TendonTrack (C, 3 stars): Active vision tracking of a tendon-driven continuum robot using efficient model-based reinforcement learning.
  18. Solution_of_quantum_books (TeX, 2 stars): A gathering of unofficial solutions to the exercises in Quantum Computation and Quantum Information and Quantum Information Theory.
  19. Mathematics-for-engineering (2 stars): My learning notes about mathematics.
  20. awesome-data-efficient-reinforcement-learning (1 star)