  • Stars: 1,076
  • Rank: 42,988 (Top 0.9%)
  • Language: Python
  • License: MIT License
  • Created: about 5 years ago
  • Updated: over 3 years ago


Repository Details

Proximal Policy Optimization (PPO) algorithm for Super Mario Bros

[PYTORCH] Proximal Policy Optimization (PPO) for playing Super Mario Bros

Introduction

Here is my Python source code for training an agent to play Super Mario Bros, using the Proximal Policy Optimization (PPO) algorithm introduced in the paper Proximal Policy Optimization Algorithms.

In terms of performance, my PPO-trained agent could complete 31/32 levels, which is much better than what I expected at the beginning.

For your information, PPO is the algorithm proposed by OpenAI and used for training OpenAI Five, the first AI to beat the world champions in an esports game. Specifically, in August 2018, OpenAI Five dispatched a team of casters and ex-pros with MMR rankings in the 99.95th percentile of Dota 2 players.
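To make PPO's core idea concrete, here is a minimal sketch of the clipped surrogate objective from the PPO paper. This is my own illustration, not code from this repository; the function name and the default epsilon of 0.2 are illustrative assumptions:

```python
import math

def ppo_clipped_objective(new_log_prob, old_log_prob, advantage, epsilon=0.2):
    """Per-sample PPO clipped surrogate objective (to be maximized)."""
    # Probability ratio between the new and old policy for the taken action
    ratio = math.exp(new_log_prob - old_log_prob)
    # Clip the ratio into [1 - epsilon, 1 + epsilon]
    clipped_ratio = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon)
    # Pessimistic bound: take the smaller of the two surrogate terms,
    # which limits how far a single update can move the policy
    return min(ratio * advantage, clipped_ratio * advantage)
```

In practice this objective is averaged over a batch of transitions and its negative is minimized with gradient descent.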

Sample results

[Demo GIFs of the trained agent completing levels]

Motivation

It has been a while since I released my A3C implementation (A3C code) for training an agent to play Super Mario Bros. Although the trained agent could complete levels quite fast and quite well (at least faster and better than I play 😅), it still did not totally satisfy me. The main reason is that an agent trained with A3C could only complete 19/32 levels, no matter how much I fine-tuned and tested. That motivated me to look for a new approach.

Before I settled on PPO as my next complete implementation, I had partially implemented a couple of other algorithms, including A2C and Rainbow. The former did not show a big jump in performance, while the latter is better suited to more randomized environments/games, like Pong or Space Invaders.

How to use my code

With my code, you can:

  • Train your model by running python train.py. For example: python train.py --world 5 --stage 2 --lr 1e-4
  • Test your trained model by running python test.py. For example: python test.py --world 5 --stage 2

Note: If you get stuck at any level, try training again with a different learning rate. You can conquer 31/32 levels, as I did, by changing only the learning rate. I normally set the learning rate to 1e-3, 1e-4, or 1e-5. However, some levels are difficult, including level 1-3, which I finally trained successfully with a learning rate of 7e-5 after failing 70 times.
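The retry advice above can also be scripted. The sketch below is my own illustration (the helper name and the sweep values are assumptions, not part of the repository); it simply builds the train.py command line for each candidate learning rate:

```python
def train_command(world, stage, lr):
    """Build the train.py invocation described above (illustrative helper)."""
    return ["python", "train.py",
            "--world", str(world), "--stage", str(stage), "--lr", str(lr)]

# Hypothetical sweep: retry a stuck level (here 1-3) with different rates
for lr in ["1e-3", "1e-4", "7e-5", "1e-5"]:
    cmd = train_command(1, 3, lr)
    # subprocess.run(cmd)  # uncomment to actually launch each run
    print(" ".join(cmd))
```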

Docker

For convenience, I provide a Dockerfile which can be used for both the training and test phases.

Assume that the docker image's name is ppo, that you only want to use the first GPU, and that you have already cloned this repository and cd'd into it.

Build:

sudo docker build --network=host -t ppo .

Run:

docker run --runtime=nvidia -it --rm --volume="$PWD"/../Super-mario-bros-PPO-pytorch:/Super-mario-bros-PPO-pytorch --gpus device=0 ppo

Then, inside the docker container, you can simply run the train.py or test.py scripts as described above.

Note: There is a rendering bug when using docker. Therefore, when you train or test using docker, please comment out the line env.render() in src/process.py for training, or in test.py for testing. You will no longer see the pop-up window for visualization, but this is not a big problem: the training process will still run, and the test process will still produce an output mp4 file for visualization.
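Instead of commenting the call out by hand, one alternative (my own sketch, not code from this repository) is to guard env.render() behind a display check, so the same script runs both locally and headless in docker:

```python
import os

def maybe_render(env):
    """Call env.render() only when a display is available (e.g. not in a
    headless docker container, where DISPLAY is typically unset)."""
    if os.environ.get("DISPLAY"):
        env.render()
```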

Why is level 8-4 still missing?

In worlds 4-4, 7-4, and 8-4, the map consists of puzzles where the player must choose the correct path in order to move forward. If you choose a wrong path, you have to go through previously visited sections again. With some hardcoded environment settings, the first two of these levels are solved, but the last one has not been solved yet.
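To make the "hardcoded environment settings" idea concrete, here is a purely hypothetical reward-shaping sketch (the penalty value, slack, and function are my assumptions, not the repository's actual solution): penalize the agent whenever a wrong path warps it back behind ground it had already covered.

```python
def shaped_reward(reward, x_pos, max_x_seen, backtrack_slack=50, penalty=10):
    """Penalize large backward jumps in x position, which in puzzle levels
    typically mean the agent took a wrong path and was warped back."""
    if x_pos < max_x_seen - backtrack_slack:
        reward -= penalty
    # Track the furthest x position reached so far
    return reward, max(max_x_seen, x_pos)
```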

More Repositories

 1. ASCII-generator: ASCII generator (image to text, image to image, video to video) (Python, 1,522 stars)
 2. Super-mario-bros-A3C-pytorch: Asynchronous Advantage Actor-Critic (A3C) algorithm for Super Mario Bros (Python, 1,035 stars)
 3. QuickDraw: Implementation of QuickDraw, an online game developed by Google (Python, 891 stars)
 4. Flappy-bird-deep-Q-learning-pytorch: Deep Q-learning for playing the Flappy Bird game (Python, 501 stars)
 5. Tetris-deep-Q-learning-pytorch: Deep Q-learning for playing Tetris (Python, 465 stars)
 6. AirGesture: Play games without touching the keyboard (Python, 398 stars)
 7. Hierarchical-attention-networks-pytorch: Hierarchical Attention Networks for document classification (Python, 381 stars)
 8. Yolo-v2-pytorch: YOLO for object detection tasks (Python, 369 stars)
 9. Photomosaic-generator: Photomosaic generator (image to image, video to video) (Python, 180 stars)
10. SSD-pytorch: SSD: Single Shot MultiBox Detector pytorch implementation focusing on simplicity (Python, 163 stars)
11. Street-fighter-A3C-ICM-pytorch: Curiosity-driven Exploration by Self-supervised Prediction for Street Fighter III Third Strike (Python, 160 stars)
12. Contra-PPO-pytorch: Proximal Policy Optimization (PPO) algorithm for Contra (Python, 132 stars)
13. Lego-generator (Python, 98 stars)
14. QuickDraw-AirGesture-tensorflow: Implementation of QuickDraw, an online game developed by Google, combined with AirGesture, a simple gesture-recognition application (Python, 93 stars)
15. Chrome-dino-deep-Q-learning-pytorch: Deep Q-learning for playing the Chrome dino game (Python, 70 stars)
16. Deeplab-pytorch: Deeplab for semantic segmentation tasks (Python, 61 stars)
17. Character-level-cnn-pytorch: Character-level CNN for text classification (Python, 55 stars)
18. Very-deep-cnn-pytorch: Very deep CNN for text classification (Python, 37 stars)
19. Character-level-cnn-tensorflow: Character-level CNN for text classification (Python, 29 stars)
20. Sonic-PPO-pytorch: Proximal Policy Optimization (PPO) algorithm for Sonic the Hedgehog (Python, 26 stars)
21. uvipen (22 stars)
22. Very-deep-cnn-tensorflow: Very deep CNN for text classification (Python, 21 stars)
23. Color-lines-deep-Q-learning-pytorch (Python, 10 stars)
24. MathFun (Python, 9 stars)
25. The-beauty-of-Math (Python, 7 stars)
26. Detectors (Python, 5 stars)
27. Vietnam-time-use-visualization (Python, 4 stars)