Awesome-Robot-Learning
This repo contains a curated list of robot learning (mainly for manipulation) resources, inspired by Awesome-Implicit-NeRF-Robotics.
Motivation: Robot learning, especially the learning of robot manipulation skills, is receiving more and more attention. Because the field has numerous subdivisions and a dazzling array of approaches, this repo lists some of the researchers active in the field and the simulation environments used to test their algorithms, saving researchers search time so they can focus on their own work. Related research papers are beyond the scope of this repo.
Please feel free to send me pull requests or email to add items!
If you find this repo useful, please consider STARing this list and feel free to share this list with others!
Overview
Related Awesome Lists
- Awesome Robotics (Kiloreux)
- Awesome Robotics (ahundt)
- Awesome Robotic Tooling
- Awesome Robotics Libraries
- Awesome Reinforcement Learning
- Awesome Robot Descriptions
- Awesome NVIDIA Isaac Gym
- Awesome RL Envs
Laboratories
- Berkeley Robot Learning Lab
- Berkeley Robotic AI & Learning Lab
- CMU Robotics Institute
- Stanford Vision and Learning Lab (SVL)
- UT Austin Robot Perception and Learning Lab [Github]
- TU Darmstadt Intelligent Autonomous Systems: Machine Learning for Intelligent Autonomous Robots
- ETH Robotic Systems Lab
Active Researchers
| Name | Institution | Name | Institution |
|---|---|---|---|
| Pieter Abbeel | UC Berkeley | Sergey Levine | UC Berkeley |
| Jan Peters | TU Darmstadt | Sethu Vijayakumar | University of Edinburgh |
| Huaping Liu | Tsinghua University | Andy Zeng [Github] | Google Brain |
| Yuke Zhu [Github] | UT Austin | Cewu Lu | Shanghai Jiao Tong University |
| Huazhe Xu | Tsinghua University | Edward Johns | Imperial College London |
| Hao Dong | Peking University | Yunzhu Li [Github] | UIUC |
| Yang Gao | Tsinghua University | Xiaolong Wang | UC San Diego |
| Nicklas Hansen | UC San Diego | Wenyu Liang | A*STAR |
| Abhinav Valada | University of Freiburg | Dorsa Sadigh | Stanford |
Benchmarks
MuJoCo-based
- dm_control: DeepMind Infrastructure for Physics-Based Simulation
- dm_robotics: Libraries, tools, and tasks created and used for Robotics research at DeepMind
- robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
- Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
- Gymnasium-Robotics
- RoboPianist: A Benchmark for High-Dimensional Robot Control
- IKEA Furniture Assembly Environment
- Divide-and-Conquer Reinforcement Learning Catching and Lobbing
- DoorGym
- RoboHive
PyBullet-based
- PyBullet Gymperium
- panda-gym: Set of robotic environments based on PyBullet physics engine and gymnasium
- MiniTouch: a benchmark for evaluating models on manipulation tasks that can leverage cross-modal learning
- Calvin: A Benchmark for Language-conditioned Policy Learning for Long-horizon Robot Manipulation Tasks
- PyBullet implementation of the multi-goal robotics environments originally from OpenAI Gym
- Toolkit for Vision-Guided Quadrupedal Locomotion Research
Isaac-based
- Omniverse Isaac Gym Reinforcement Learning Environments for Isaac Sim
- Omniverse Isaac Orbit (Recommended)
- Isaac-ManipulaRL
Others
- RLBench: Robot Learning Benchmark
- Thrower and Goalie Robot Arms
- SoftGym
- VIMA-Bench: Benchmark for Multimodal Robot Learning
Datasets
- D4RL: Datasets for Deep Data-Driven Reinforcement Learning (for offline RL)