I will pick this project up again very soon; the plan is to use state-of-the-art RL algorithms and to move to the new Gazebo and the latest ROS version. Please let me know if you want to collaborate.
This repository covers: first, how to simulate a 6DoF robotic arm from scratch using Gazebo and ROS2; second, a custom Reinforcement Learning environment where you can test the robotic arm with your own RL algorithms; finally, a reacher task that validates the simulation and environment by using RL to drive the 6DoF robotic arm toward a visual target point.
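As a preview of how such an environment is typically structured, the sketch below shows a minimal Gym-style reacher environment for a 6DoF arm. The class name, observation layout, and reward shaping here are illustrative assumptions, not the repository's actual implementation; the real environment exchanges commands and states with Gazebo through ROS2 instead of updating local variables.

```python
import numpy as np
import gym
from gym import spaces


class ReacherEnvSketch(gym.Env):
    """Illustrative 6DoF reacher environment skeleton (not the repo's actual class)."""

    def __init__(self):
        # Actions: a normalized position delta for each of the 6 joints.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(6,), dtype=np.float32)
        # Observations: 6 joint positions + 3D target point.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(9,), dtype=np.float32)
        self._joint_pos = np.zeros(6, dtype=np.float32)
        self._target = np.zeros(3, dtype=np.float32)

    def reset(self):
        self._joint_pos = np.zeros(6, dtype=np.float32)
        self._target = np.random.uniform(-0.5, 0.5, size=3).astype(np.float32)
        return self._get_obs()

    def step(self, action):
        # In the real environment this command would be sent to the Gazebo
        # simulation through ros2_control; here we only update local state.
        self._joint_pos = np.clip(self._joint_pos + 0.05 * np.asarray(action), -np.pi, np.pi)
        distance = float(np.linalg.norm(self._end_effector() - self._target))
        reward = -distance            # closer to the target is better
        done = distance < 0.05        # episode ends when the target is reached
        return self._get_obs(), reward, done, {}

    def _get_obs(self):
        return np.concatenate([self._joint_pos, self._target]).astype(np.float32)

    def _end_effector(self):
        # Placeholder for forward kinematics; the real environment reads the
        # end-effector pose back from the simulation.
        return self._joint_pos[:3] * 0.1
```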
| Library | Version (tested) |
|---|---|
| Ubuntu | 20.04 |
| ROS2 | Foxy (link) |
| ros2_control | (link) |
| gazebo_ros2_control | (link) |
The following links provide step-by-step instructions to run this repository and simulate the robotic arm:
- Simulation in Gazebo and ROS2 --> Tutorial-link
  - Configure and spawn the robotic arm in Gazebo.
  - Move the robot with a simple position controller (see the sketch after this list).
- Custom RL Environment --> Tutorial-link
  - A complete Reinforcement Learning environment simulation.
- Reacher task with RL --> Coming soon
  - Robot reacher task.
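For the "simple position controller" step, the following is a minimal rclpy sketch that publishes one joint position command. The topic name `/joint_position_controller/commands` and the `Float64MultiArray` message type are assumptions; they must match the controllers defined in your ros2_control YAML configuration.

```python
# Minimal rclpy sketch: send a position command to a forward position controller.
# The command topic below is hypothetical; adjust it to your controller setup.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float64MultiArray


class SimplePositionCommander(Node):
    def __init__(self):
        super().__init__('simple_position_commander')
        self.publisher = self.create_publisher(
            Float64MultiArray, '/joint_position_controller/commands', 10)
        self.timer = self.create_timer(1.0, self.send_command)

    def send_command(self):
        msg = Float64MultiArray()
        # Target positions (radians) for the six joints of the arm.
        msg.data = [0.0, -0.5, 0.5, 0.0, 0.5, 0.0]
        self.publisher.publish(msg)
        self.get_logger().info(f'Published joint command: {msg.data}')


def main():
    rclpy.init()
    node = SimplePositionCommander()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```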
If you use the code, the data, or the steps from the tutorial blog in your paper or project, please kindly star this repo and cite our webpage.
I want to thank Doosan Robotics for their repositories and packages, from which part of this code was taken:
- https://github.com/doosan-robotics/doosan-robot2
- https://github.com/doosan-robotics/doosan-robot
- https://www.doosanrobotics.com/en/Index
Also, thanks to the authors of these repositories and their tutorials, from which I took some ideas:
- https://github.com/noshluk2/ROS2-Ultimate-learners-Repository/tree/main/bazu
- https://github.com/TomasMerva/ROS_KUKA_env
Please feel free to contact me or open an issue if you have questions or need additional explanations.