

Retro games for Reinforcement Learning


Stable-Retro

A fork of gym-retro (which "lets you turn classic video games into Gymnasium environments for reinforcement learning") with additional games, emulators and supported platforms. Since gym-retro is now in maintenance mode and no longer accepts new games, platforms or bug fixes, you can instead submit PRs with new games or features here in stable-retro.
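
Environments follow the Gymnasium API. A minimal usage sketch (assuming the package is installed and using the bundled Airstriker-Genesis ROM; the five-value step return reflects the Gymnasium-style API used by recent stable-retro versions):

import retro

env = retro.make(game="Airstriker-Genesis")
obs, info = env.reset()
for _ in range(100):
    # actions are MultiBinary arrays of controller buttons; sample a random one
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
env.close()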

Currently added games on top of gym-retro:

  • Super Mario Bros 2 Japan (Lost Levels) - NES
  • Hang On - SMS
  • Punch Out - NES
  • WWF Wrestlemania the Arcade Game - Genesis
  • NHL 94 - Genesis
  • NHL 94 (1 on 1 rom hack) - Genesis
  • Super Hang On - Genesis
  • Tetris - GameBoy
  • Virtua Fighter - 32x
  • Virtua Fighter 2 - Genesis
  • Virtua Fighter 2 - Saturn
  • Mortal Kombat 1 - Sega CD

PvP games that support two models fighting each other:

  • Samurai Showdown - Genesis
  • WWF Wrestlemania the Arcade Game - Genesis
  • Mortal Kombat II - Genesis
  • NHL 94 - Genesis

Additional states have also been added to games that were already integrated.
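
For the PvP games listed above, the environment can be created with two players so that two agents act at the same time. A rough sketch, assuming the gym-retro multiplayer convention where the action space covers both controllers and the reward comes back as one value per player (the game name below is illustrative; check retro.data.list_games() for the exact id):

import retro

# game id is illustrative; see retro.data.list_games() for the real name
env = retro.make(game="SamuraiShodown-Genesis", players=2)
obs, info = env.reset()
# the sampled action concatenates both players' button arrays
obs, rewards, terminated, truncated, info = env.step(env.action_space.sample())
print(rewards)  # one reward per player
env.close()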

Emulated Systems

  • Atari
    • Atari2600 (via Stella)
  • NEC
    • TurboGrafx-16/PC Engine (via Mednafen/Beetle PCE Fast)
  • Nintendo
    • Game Boy/Game Boy Color (via gambatte)
    • Game Boy Advance (via mGBA)
    • Nintendo Entertainment System (via FCEUmm)
    • Super Nintendo Entertainment System (via Snes9x)
  • Sega
    • GameGear (via Genesis Plus GX)
    • Genesis/Mega Drive (via Genesis Plus GX)
    • Master System (via Genesis Plus GX)
    • 32x (via Picodrive)
    • Saturn (via Beetle Saturn)
    • Sega CD (via Genesis Plus GX)

Experimental (accessible in the fbneo branch)

  • Arcade Machines:
    • Neo Geo (MVS hardware: 1990–2004)
    • Sega System 1 (1983–1987)
    • Sega System 16 (and similar, 1985–1994)
    • Sega System 18 (1989–1992)
    • Sega System 24 (1988–1994)
    • Capcom CPS1 (1988–1995)
    • Capcom CPS2 (1993–2003)
    • Capcom CPS3 (1996–1999)

Full list of supported Arcade machines here

Installation

pip3 install stable-retro

or, if the above doesn't work for your platform:

pip3 install git+https://github.com/Farama-Foundation/stable-retro.git

If you plan to integrate new ROMs, states or emulator cores or plan to edit an existing env:

git clone https://github.com/Farama-Foundation/stable-retro.git
cd stable-retro
pip3 install -e .

Apple Silicon Installation (tested on Python 3.10)

  • NOTE: The Game Boy (gambatte) emulator is not supported on Apple Silicon

Build from source

  1. pip install cmake wheel
  2. brew install pkg-config [email protected] libzip qt5 capnp
  3. echo 'export PATH="/opt/homebrew/opt/qt@5/bin:$PATH"' >> ~/.zshrc
  4. export SDKROOT=$(xcrun --sdk macosx --show-sdk-path)
  5. pip install -e .

Build Integration UI

  1. Build the package from source (steps above)
  2. cmake . -DCMAKE_PREFIX_PATH=/usr/local/opt/qt -DBUILD_UI=ON -UPYLIB_DIRECTORY
  3. make -j$(sysctl hw.ncpu | cut -d: -f2)
  4. open "Gym Retro Integration.app"

Docker image for M1 Macs: https://github.com/arvganesh/stable-retro-docker

Example

A 'Nature CNN' model trained with PPO on the Airstriker-Genesis environment (the ROM is already included in the repo).

Tested on Ubuntu 20.04 and Windows 11 WSL2 (Ubuntu 20.04 VM)

sudo apt-get update
sudo apt-get install python3 python3-pip git zlib1g-dev libopenmpi-dev ffmpeg

You need to install a Stable-Baselines3 version that supports Gymnasium:

pip3 install git+https://github.com/Farama-Foundation/stable-retro.git
pip3 install stable_baselines3[extra]

Start training:

cd retro/examples
python3 ppo.py --game='Airstriker-Genesis'
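
For reference, here is a heavily condensed sketch of the same idea using Stable-Baselines3 directly. This is not the repository's ppo.py; the missing wrappers (frame skipping, reward clipping, etc.) and the hyperparameters are simplifications:

import retro
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

def make_env():
    return retro.make(game="Airstriker-Genesis")

# vectorize the env and stack frames so the CNN gets some temporal context
venv = VecFrameStack(DummyVecEnv([make_env]), n_stack=4)
model = PPO("CnnPolicy", venv, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_airstriker")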

More advanced examples: https://github.com/MatPoliquin/stable-retro-scripts

Citation

@misc{stable-retro,
  author = {Poliquin, Mathieu},
  title = {Stable Retro, a maintained fork of OpenAI's gym-retro},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Farama-Foundation/stable-retro}},
}

Tutorials

Windows WSL2 + Ubuntu 22.04 setup guide: https://www.youtube.com/watch?v=vPnJiUR21Og

Game Integration tool: https://www.youtube.com/playlist?list=PLmwlWbdWpZVvWqzOxu0jVBy-CaRpYha0t

Discord channel

Join here: https://discord.gg/dXuBSg3B4D

Contributing

See CONTRIBUTING.md

There is an ongoing effort to bring this project up to the Farama Foundation Project Standards. This work is being coordinated in the stable-retro channel of the Farama Foundation's Discord (see the invite link above).

Supported specs:

Platforms:

  • Windows 10, 11 (via WSL2)
  • macOS 10.13 (High Sierra), 10.14 (Mojave)
  • Linux (manylinux1). Ubuntu 22.04 is recommended

CPU with SSE3 or better

Supported Python versions: 3.7 to 3.12

Documentation

Documentation is available at https://stable-retro.farama.org/ (work in progress)

See LICENSES.md for information on the licenses of the individual cores.

ROMs

Each game integration has files listing memory locations for in-game variables, reward functions based on those variables, episode end conditions, savestates at the beginning of levels and a file containing hashes of ROMs that work with these files.
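
These per-game variables are surfaced at runtime through the info dict returned by the environment, which is what the reward and episode-end conditions are computed from. A small sketch (the exact keys depend on the game's integration files):

import retro

env = retro.make(game="Airstriker-Genesis")
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
# keys such as score or lives mirror the variables declared in the game's data file
print(info)
env.close()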

Please note that ROMs are not included and you must obtain them yourself. Most ROM hashes are sourced from their respective No-Intro SHA-1 sums.

Run the following script from the folder containing the ROMs you want to import. Any ROM whose checksum matches a known hash will be copied into the corresponding game folder in stable-retro.

python3 -m retro.import .
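
A quick way to check whether a particular ROM was picked up is to try creating its environment; in the gym-retro convention a missing ROM raises FileNotFoundError. The game id below is illustrative:

import retro

try:
    retro.make(game="NHL94-Genesis").close()
    print("ROM imported and integration loaded")
except FileNotFoundError:
    print("ROM not found; re-run the import from the folder that contains it")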

The following non-commercial ROMs are included with Stable Retro for testing purposes:

More Repositories

  1. Gymnasium (Python, 6,383 stars): An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
  2. PettingZoo (Python, 2,553 stars): An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities
  3. HighwayEnv (Python, 2,506 stars): A minimalist environment for decision-making in autonomous driving
  4. Arcade-Learning-Environment (C++, 2,106 stars): The Arcade Learning Environment (ALE), a platform for AI research
  5. Minigrid (Python, 2,051 stars): Simple and easily configurable grid world environments for reinforcement learning
  6. ViZDoom (C++, 1,723 stars): Reinforcement learning environments based on the 1993 game Doom
  7. chatarena (Python, 1,344 stars): ChatArena (or Chat Arena) is a multi-agent language game environment for LLMs, aimed at developing the communication and collaboration capabilities of AIs
  8. D4RL (Python, 1,256 stars): A collection of reference environments for offline reinforcement learning
  9. Metaworld (Python, 1,178 stars): Collections of robotics environments geared towards benchmarking multi-task and meta reinforcement learning
  10. Miniworld (Python, 683 stars): Simple and easily configurable 3D FPS-game-like environments for reinforcement learning
  11. Gymnasium-Robotics (Python, 489 stars): A collection of robotics simulation environments for reinforcement learning
  12. SuperSuit (Python, 449 stars): A collection of wrappers for Gymnasium and PettingZoo environments (being merged into gymnasium.wrappers and pettingzoo.wrappers)
  13. MO-Gymnasium (Python, 282 stars): Multi-objective Gymnasium environments for reinforcement learning
  14. miniwob-plusplus (HTML, 276 stars): MiniWoB++, a web interaction benchmark for reinforcement learning
  15. MicroRTS (Java, 271 stars): A simple and highly efficient RTS-game-inspired environment for reinforcement learning
  16. Minari (Python, 268 stars): A standard format for offline reinforcement learning datasets, with popular reference datasets and related utilities
  17. MicroRTS-Py (Python, 219 stars): A simple and highly efficient RTS-game-inspired environment for reinforcement learning (formerly Gym-MicroRTS)
  18. MAgent2 (C++, 202 stars): An engine for high-performance multi-agent environments with very large numbers of agents, along with a set of reference environments
  19. D4RL-Evaluations (Python, 187 stars)
  20. Shimmy (Python, 129 stars): An API conversion tool for popular external reinforcement learning environments
  21. AutoROM (Python, 75 stars): A tool to automate installing Atari ROMs for the Arcade Learning Environment
  22. gym-examples (Python, 68 stars): Example code for the Gym documentation
  23. momaland (Python, 58 stars): Benchmarks for multi-objective multi-agent decision making
  24. Jumpy (Python, 45 stars): On-the-fly conversions between Jax and NumPy tensors
  25. gym-docs (41 stars): Code for the Gym documentation website
  26. Procgen2 (C++, 27 stars): Fast and procedurally generated side-scroller-game-like graphical environments (formerly Procgen)
  27. CrowdPlay (Jupyter Notebook, 26 stars): A web-based platform for collecting human actions in reinforcement learning environments
  28. TinyScaler (C, 19 stars): A small and fast image rescaling library with SIMD support
  29. minari-dataset-generation-scripts (Python, 15 stars): Scripts to recreate the D4RL datasets with Minari
  30. rlay (Rust, 8 stars): A relay between Gymnasium and any software
  31. gymnasium-env-template (Jinja, 7 stars): A template Gymnasium environment for users to build upon
  32. A2Perf (Python, 4 stars): A benchmark for evaluating agents on sequential decision problems relevant to the real world; contains code for running and evaluating participants' submissions on the benchmark platform
  33. farama.org (HTML, 2 stars)
  34. gym-notices (Python, 1 star)
  35. Celshast (Sass, 1 star)
  36. MPE2 (Python, 1 star): A set of communication-oriented environments
  37. Farama-Notifications (Python, 1 star): Allows for providing notifications on import to all Farama packages
  38. a2perf-circuit-training (Python, 1 star)
  39. a2perf-benchmark-submission (Python, 1 star)
  40. a2perf-web-nav (HTML, 1 star)
  41. a2perf-quadruped-locomotion (Python, 1 star)
  42. a2perf-reliability-metrics (Python, 1 star)
  43. a2perf-code-carbon (Jupyter Notebook, 1 star)