  • Stars: 8,960
  • Rank: 4,048 (Top 0.08%)
  • Language: Python
  • License: MIT License
  • Created: over 4 years ago
  • Updated: 3 months ago

Repository Details

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

Stable Baselines3

Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines.

You can read a detailed presentation of Stable Baselines3 in the v1.0 blog post or our JMLR paper.

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

Note: Despite its simplicity of use, Stable Baselines3 (SB3) assumes you have some knowledge about Reinforcement Learning (RL). You should not use this library without some practice. To that end, we provide good resources in the documentation to get started with RL.

Main Features

The performance of each algorithm was tested (see the Results section on each algorithm's respective page); you can take a look at issues #48 and #49 for more details.

Features                          Stable-Baselines3
State of the art RL methods       ✔️
Documentation                     ✔️
Custom environments               ✔️
Custom policies                   ✔️
Common interface                  ✔️
Dict observation space support    ✔️
Ipython / Notebook friendly       ✔️
Tensorboard support               ✔️
PEP8 code style                   ✔️
Custom callback                   ✔️
High code coverage                ✔️
Type hints                        ✔️
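
As a concrete illustration of the custom callback feature, here is a minimal sketch; ProgressCallback and its print interval are hypothetical names introduced for this example, not part of the library:

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import BaseCallback

class ProgressCallback(BaseCallback):
    """Hypothetical example callback: prints progress every print_every steps."""

    def __init__(self, print_every: int = 1_000, verbose: int = 0):
        super().__init__(verbose)
        self.print_every = print_every

    def _on_step(self) -> bool:
        # self.num_timesteps is maintained by SB3 and updated at every environment step
        if self.num_timesteps % self.print_every == 0:
            print(f"{self.num_timesteps} timesteps elapsed")
        return True  # returning False would stop training early

model = PPO("MlpPolicy", "CartPole-v1")
model.learn(total_timesteps=5_000, callback=ProgressCallback())

Similarly, the Dict observation space support listed above maps to the built-in "MultiInputPolicy" (e.g. PPO("MultiInputPolicy", env)).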

Planned features

Please take a look at the Roadmap and Milestones.

Migration guide: from Stable-Baselines (SB2) to Stable-Baselines3 (SB3)

A migration guide from SB2 to SB3 can be found in the documentation.

Documentation

Documentation is available online: https://stable-baselines3.readthedocs.io/

Integrations

Stable-Baselines3 has some integration with other libraries/services like Weights & Biases for experiment tracking or Hugging Face for storing/sharing trained models. You can find out more in the dedicated section of the documentation.
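
As an illustration, loading a trained model from the Hugging Face Hub could look like the sketch below. It assumes the optional huggingface_sb3 helper package is installed; the repo_id and filename are placeholders, not a real checkpoint:

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# load_from_hub downloads a checkpoint file and returns its local path.
# "user/model-repo" and "model.zip" are placeholders for this sketch.
checkpoint = load_from_hub(repo_id="user/model-repo", filename="model.zip")
model = PPO.load(checkpoint)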

RL Baselines3 Zoo: A Training Framework for Stable Baselines3 Reinforcement Learning Agents

RL Baselines3 Zoo is a training framework for Reinforcement Learning (RL).

It provides scripts for training and evaluating agents, tuning hyperparameters, plotting results, and recording videos (basic usage is sketched after the list below).

In addition, it includes a collection of tuned hyperparameters for common environments and RL algorithms, and agents trained with those settings.

Goals of this repository:

  1. Provide a simple interface to train and enjoy RL agents
  2. Benchmark the different Reinforcement Learning algorithms
  3. Provide tuned hyperparameters for each environment and RL algorithm
  4. Have fun with the trained agents!
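
For reference, basic usage of the zoo follows a command-line pattern roughly like the following (run from a checkout of rl-baselines3-zoo; the algorithm and environment IDs here are examples):

python train.py --algo ppo --env CartPole-v1
python enjoy.py --algo ppo --env CartPole-v1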

Github repo: https://github.com/DLR-RM/rl-baselines3-zoo

Documentation: https://rl-baselines3-zoo.readthedocs.io/en/master/

SB3-Contrib: Experimental RL Features

We implement experimental features in a separate contrib repository: SB3-Contrib

This allows SB3 to maintain a stable and compact core, while still providing the latest features, like Recurrent PPO (PPO LSTM), Truncated Quantile Critics (TQC), Quantile Regression DQN (QR-DQN) or PPO with invalid action masking (Maskable PPO).

Documentation is available online: https://sb3-contrib.readthedocs.io/
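
Contrib algorithms live in the separate sb3_contrib package but follow the same API as the core library; a minimal sketch, assuming pip install sb3-contrib:

from sb3_contrib import TQC

# Contrib algorithms use the same sklearn-like interface as SB3 itself.
model = TQC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=5_000)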

Stable-Baselines Jax (SBX)

Stable Baselines Jax (SBX) is a proof-of-concept version of Stable-Baselines3 in Jax.

It provides a minimal set of features compared to SB3 but can be much faster (up to 20x!): https://twitter.com/araffin2/status/1590714558628253698
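
As a rough sketch, SBX mirrors the SB3 API under an sbx package; the package name and import path below are assumptions based on the SBX repository, not part of SB3 itself:

# Assumption: SBX installed separately (the project suggests pip install sbx-rl).
from sbx import PPO

model = PPO("MlpPolicy", "CartPole-v1")
model.learn(total_timesteps=10_000)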

Installation

Note: Stable-Baselines3 supports PyTorch >= 1.13

Prerequisites

Stable Baselines3 requires Python 3.8+.

Windows 10

To install Stable-Baselines3 on Windows, please look at the documentation.

Install using pip

Install the Stable Baselines3 package:

pip install stable-baselines3[extra]

Note: Some shells such as Zsh require quotation marks around brackets, i.e. pip install 'stable-baselines3[extra]' (More Info).

This includes optional dependencies like Tensorboard, OpenCV, or atari-py to train on Atari games. If you do not need those, you can use:

pip install stable-baselines3

Please read the documentation for more details and alternatives (from source, using docker).

Example

Most of the code in the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms.

Here is a quick example of how to train and run PPO on a cartpole environment:

import gymnasium as gym

from stable_baselines3 import PPO

env = gym.make("CartPole-v1", render_mode="human")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

vec_env = model.get_env()
obs = vec_env.reset()
for i in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = vec_env.step(action)
    vec_env.render()
    # VecEnv resets automatically
    # if done:
    #   obs = vec_env.reset()

env.close()

Or just train a model with a one-liner if the environment is registered in Gymnasium and the policy is registered:

from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
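
The sklearn-like interface also covers persistence; saving and reloading a trained model is two calls (the file name here is arbitrary):

from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
model.save("ppo_cartpole")        # writes ppo_cartpole.zip
model = PPO.load("ppo_cartpole")  # restores the trained policy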

Please read the documentation for more examples.

Try it online with Colab Notebooks!

The example notebooks can be executed online using Google Colab.

Implemented Algorithms

Name            Recurrent  Box  Discrete  MultiDiscrete  MultiBinary  Multi Processing
ARS¹            ❌          ✔️    ✔️         ❌              ❌            ✔️
A2C             ❌          ✔️    ✔️         ✔️              ✔️            ✔️
DDPG            ❌          ✔️    ❌         ❌              ❌            ✔️
DQN             ❌          ❌    ✔️         ❌              ❌            ✔️
HER             ❌          ✔️    ✔️         ❌              ❌            ✔️
PPO             ❌          ✔️    ✔️         ✔️              ✔️            ✔️
QR-DQN¹         ❌          ❌    ✔️         ❌              ❌            ✔️
RecurrentPPO¹   ✔️          ✔️    ✔️         ✔️              ✔️            ✔️
SAC             ❌          ✔️    ❌         ❌              ❌            ✔️
TD3             ❌          ✔️    ❌         ❌              ❌            ✔️
TQC¹            ❌          ✔️    ❌         ❌              ❌            ✔️
TRPO¹           ❌          ✔️    ✔️         ✔️              ✔️            ✔️
Maskable PPO¹   ❌          ❌    ✔️         ✔️              ✔️            ✔️

¹ Implemented in the SB3-Contrib GitHub repository.

Action spaces (gym.spaces):

  • Box: An N-dimensional box that contains every point in the action space.
  • Discrete: A list of possible actions, where only one action can be used at each timestep.
  • MultiDiscrete: A list of possible actions, where only one action from each discrete set can be used at each timestep.
  • MultiBinary: A list of possible actions, where any combination of actions can be used at each timestep.
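
These space types come from Gymnasium; constructing and sampling them directly makes the distinctions concrete (the shapes and sizes below are arbitrary):

import gymnasium as gym
import numpy as np

box = gym.spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)  # continuous 3-D actions
discrete = gym.spaces.Discrete(4)                  # one of 4 actions per timestep
multi_discrete = gym.spaces.MultiDiscrete([3, 2])  # one choice from each set per timestep
multi_binary = gym.spaces.MultiBinary(4)           # any combination of 4 on/off actions

for space in (box, discrete, multi_discrete, multi_binary):
    print(space, space.sample())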

Testing the installation

Install dependencies

pip install -e .[docs,tests,extra]

Run tests

All unit tests in Stable Baselines3 can be run using the pytest runner:

make pytest

To run a single test file:

python3 -m pytest -v tests/test_env_checker.py

To run a single test:

python3 -m pytest -v -k 'test_check_env_dict_action'

You can also do a static type check using pytype and mypy:

pip install pytype mypy
make type

Codestyle check with ruff:

pip install ruff
make lint

Projects Using Stable-Baselines3

We try to maintain a list of projects using stable-baselines3 in the documentation. Please tell us if you want your project to appear on this page ;)

Citing the Project

To cite this repository in publications:

@article{stable-baselines3,
  author  = {Antonin Raffin and Ashley Hill and Adam Gleave and Anssi Kanervisto and Maximilian Ernestus and Noah Dormann},
  title   = {Stable-Baselines3: Reliable Reinforcement Learning Implementations},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {268},
  pages   = {1-8},
  url     = {http://jmlr.org/papers/v22/20-1364.html}
}

Maintainers

Stable-Baselines3 is currently maintained by Ashley Hill (aka @hill-a), Antonin Raffin (aka @araffin), Maximilian Ernestus (aka @ernestum), Adam Gleave (@AdamGleave), Anssi Kanervisto (@Miffyli) and Quentin Gallouédec (@qgallouedec).

Important Note: We do not provide technical support or consulting, and we do not answer personal questions via email. Please post your question on the RL Discord, Reddit, or Stack Overflow instead.

How To Contribute

For anyone interested in making the baselines better: there is still some documentation that needs to be done. If you want to contribute, please read the CONTRIBUTING.md guide first.

Acknowledgments

The initial work to develop Stable Baselines3 was partially funded by the project Reduced Complexity Models from the Helmholtz-Gemeinschaft Deutscher Forschungszentren, and by the EU's Horizon 2020 Research and Innovation Programme under grant number 951992 (VeriDream).

The original version, Stable Baselines, was created in the robotics lab U2IS (INRIA Flowers team) at ENSTA ParisTech.

Logo credits: L.M. Tenkes

More Repositories

  1. BlenderProc: A procedural Blender pipeline for photorealistic training image generation. (Python, 2,083 stars)
  2. rl-baselines3-zoo: A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. (Python, 2,034 stars)
  3. 3DObjectTracking: Algorithms and publications on 3D object tracking. (C++, 710 stars)
  4. AugmentedAutoencoder: Official code for "Implicit 3D Orientation Learning for 6D Object Detection from RGB Images". (Python, 318 stars)
  5. SingleViewReconstruction: Official code for "3D Scene Reconstruction from a Single Viewport". (Python, 258 stars)
  6. RAFCON: RAFCON (RMC advanced flow control) uses hierarchical state machines, featuring concurrent state execution, to represent robot programs. It ships with a graphical user interface supporting the creation of state machines and contains IDE-like debugging mechanisms. Alternatively, state machines can be generated programmatically using RAFCON's API. (Python, 180 stars)
  7. rl-trained-agents: A collection of pre-trained RL agents using Stable Baselines3. (Python, 102 stars)
  8. oaisys (Python, 52 stars)
  9. granite (C++, 49 stars)
  10. instr: Code of the paper "Unknown Object Segmentation from Stereo Images", IROS 2021. (Python, 43 stars)
  11. curvature: Official code for "Estimating Model Uncertainty of Neural Networks in Sparse Information Form", ICML 2020. (Python, 23 stars)
  12. UMF (Python, 17 stars)
  13. amp: Point-to-point motion planning library for articulated robots. (C++, 13 stars)
  14. DistinctNet: "What's This?" Learning to Segment Unknown Objects from Manipulation Sequences. (Python, 11 stars)
  15. GRACE: Graph assembly processing networks for robotic assembly sequence planning and feasibility learning. (Python, 10 stars)
  16. rosmc: ROS Mission Control (ROSMC), a high-level mission designing and monitoring tool with intuitive graphical interfaces. (Python, 9 stars)
  17. python-jsonconversion: Convert arbitrary Python objects into JSON strings and back. (Python, 8 stars)
  18. moegplib: Official code for "Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes". (Python, 8 stars)
  19. ExReNet: Learning to Localize in New Environments from Synthetic Training Data. (Python, 7 stars)
  20. RAFCON-ros-state-machines: RAFCON state machine examples using the ROS middleware. (Python, 5 stars)
  21. python-yaml-configuration (Python, 4 stars)
  22. BayesSim2Real: Source code for the IROS 2022 paper "Bayesian Active Learning for Sim-to-Real Robotic Perception". (Python, 4 stars)
  23. TendonDrivenContinuum (4 stars)
  24. multicam_dataset_reader (C++, 2 stars)
  25. stios-utils: Utility functions for the Stereo Instance on Surfaces (STIOS) dataset. (Python, 2 stars)
  26. SemanticSingleViewReconstruction (C++, 1 star)
  27. rafcon-task-planner-plugin: A plugin for RAFCON to interface arbitrary PDDL planners. (Python, 1 star)
  28. RECALL: Code and image database for the IROS 2022 paper "RECALL: Rehearsal-free Continual Learning for Object Classification", an algorithm to learn new object categories on the fly without forgetting old ones and without the need to save previous images. (Python, 1 star)