A clean and robust PyTorch implementation of PPO on continuous action spaces:
(Demos: Pendulum | LunarLanderContinuous)
Other RL algorithms implemented in PyTorch can be found here.
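For context, the heart of any PPO implementation is the clipped surrogate objective from the PPO paper referenced at the end of this README. Below is a minimal PyTorch sketch of that loss; the tensor names and the default clip range are illustrative, not this repo's exact code:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, adv, clip_rate=0.2):
    """Clipped surrogate objective from 'Proximal Policy Optimization Algorithms'.

    logp_new: log pi_theta(a|s) under the current policy, shape (batch,)
    logp_old: log pi_theta_old(a|s) recorded at rollout time, shape (batch,)
    adv:      advantage estimates (e.g., from GAE), shape (batch,)
    """
    ratio = torch.exp(logp_new - logp_old)                          # pi_new / pi_old
    surr1 = ratio * adv                                             # unclipped objective
    surr2 = torch.clamp(ratio, 1 - clip_rate, 1 + clip_rate) * adv  # clipped objective
    return -torch.min(surr1, surr2).mean()                          # negated for gradient descent
```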
Dependencies:
```
gymnasium==0.29.1
numpy==1.26.1
pytorch==2.1.0
python==3.11.5
```
To train, run
```
python main.py
```
where the default environment is 'Pendulum-v1'.
To play with a trained model, run
```
python main.py --EnvIdex 0 --render True --Loadmodel True --ModelIdex 100
```
which will render 'Pendulum-v1'.
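For reference, rendering a trained agent in gymnasium boils down to creating the environment with `render_mode='human'` and stepping a deterministic policy. A minimal sketch, assuming a hypothetical `Actor` network (the repo's real network and checkpoint paths live in 'main.py'):

```python
import gymnasium as gym
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Hypothetical deterministic actor; the repo's real network may differ."""
    def __init__(self, state_dim, action_dim, width=150):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, width), nn.Tanh(),
                                 nn.Linear(width, action_dim), nn.Tanh())
    def forward(self, s):
        return self.net(s)  # action in [-1, 1]

env = gym.make('Pendulum-v1', render_mode='human')  # EnvIdex 0
actor = Actor(env.observation_space.shape[0], env.action_space.shape[0])
# actor.load_state_dict(torch.load(...))  # checkpoint path depends on --ModelIdex; see main.py

s, _ = env.reset()
done = False
while not done:
    with torch.no_grad():
        a = actor(torch.as_tensor(s, dtype=torch.float32)).numpy()
    s, r, terminated, truncated, _ = env.step(a * env.action_space.high)  # rescale to env bounds
    done = terminated or truncated
env.close()
```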
If you want to train on different environments, just run
```
python main.py --EnvIdex 1
```
The --EnvIdex can be set from 0 to 5 (a sketch of the implied index-to-environment mapping follows the list), where

- '--EnvIdex 0' for 'Pendulum-v1'
- '--EnvIdex 1' for 'LunarLanderContinuous-v2'
- '--EnvIdex 2' for 'Humanoid-v4'
- '--EnvIdex 3' for 'HalfCheetah-v4'
- '--EnvIdex 4' for 'BipedalWalker-v3'
- '--EnvIdex 5' for 'BipedalWalkerHardcore-v3'
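Internally, the index presumably just selects one of these gymnasium environment names. A minimal sketch of such a mapping, assuming the list order above (the repo's actual variable and function names may differ):

```python
import gymnasium as gym

# Index-to-name mapping implied by the list above.
ENV_NAMES = ['Pendulum-v1', 'LunarLanderContinuous-v2', 'Humanoid-v4',
             'HalfCheetah-v4', 'BipedalWalker-v3', 'BipedalWalkerHardcore-v3']

def make_env(env_index: int, render: bool = False):
    return gym.make(ENV_NAMES[env_index],
                    render_mode='human' if render else None)

env = make_env(0)  # equivalent to --EnvIdex 0
```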
Note: if you want to train on BipedalWalker, BipedalWalkerHardcore, or LunarLanderContinuous, you need to install box2d-py first. You can install box2d-py via:
```
pip install gymnasium[box2d]
```
If you want to train on Humanoid or HalfCheetah, you need to install MuJoCo first. You can install MuJoCo via:
```
pip install mujoco
pip install gymnasium[mujoco]
```
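A quick way to check that these extra dependencies installed correctly is to construct one environment from each family, for example:

```python
import gymnasium as gym

# Each call raises a descriptive error if the backing dependency is missing.
gym.make('LunarLanderContinuous-v2').close()  # needs gymnasium[box2d]
gym.make('HalfCheetah-v4').close()            # needs mujoco + gymnasium[mujoco]
```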
You can use TensorBoard to record and visualize the training curves.
- Installation (please make sure PyTorch is installed already):
```
pip install tensorboard
pip install packaging
```
- Record (the training curves will be saved at 'runs'):
```
python main.py --write True
```
- Visualization:
```
tensorboard --logdir runs
```
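Under the hood, recording is just PyTorch's `SummaryWriter` writing scalars beneath 'runs'. A minimal sketch of the kind of logging `--write True` enables (the tag name, run directory, and dummy reward values here are illustrative):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/Pendulum-v1')  # illustrative run directory
for step, ep_reward in enumerate([-1500.0, -900.0, -300.0]):  # dummy episode rewards
    writer.add_scalar('ep_reward', ep_reward, global_step=step * 1000)
writer.close()
# Then inspect with: tensorboard --logdir runs
```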
For more details of the hyperparameter settings, please check 'main.py'.
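For orientation, the command-line flags used throughout this README would typically be declared with argparse along these lines; the defaults and the `str2bool` helper here are assumptions, and the authoritative definitions are in 'main.py':

```python
import argparse

def str2bool(v):
    # argparse's type=bool does not parse 'True'/'False' strings correctly,
    # so a small converter is common; this helper is an assumption, not repo code.
    return str(v).lower() in ('true', '1', 'yes')

parser = argparse.ArgumentParser()
parser.add_argument('--EnvIdex', type=int, default=0, help='0~5, see the list above')
parser.add_argument('--render', type=str2bool, default=False, help='render the environment')
parser.add_argument('--Loadmodel', type=str2bool, default=False, help='load a saved checkpoint')
parser.add_argument('--ModelIdex', type=int, default=100, help='which checkpoint to load')
parser.add_argument('--write', type=str2bool, default=False, help='log training curves to TensorBoard')
opt = parser.parse_args()
```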
References:
- Schulman et al. (2017). Proximal Policy Optimization Algorithms. arXiv:1707.06347.
- Heess et al. (2017). Emergence of Locomotion Behaviours in Rich Environments. arXiv:1707.02286.
All the experiments are trained with the same hyperparameters (see 'main.py').