  • Stars: 2,106
  • Rank: 21,914 (Top 0.5%)
  • Language: C++
  • License: GNU General Public License
  • Created: about 12 years ago
  • Updated: 4 months ago

Repository Details

The Arcade Learning Environment (ALE) -- a platform for AI research.

The Arcade Learning Environment (ALE) is a simple framework that allows researchers and hobbyists to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator Stella and separates the details of emulation from agent design. This video depicts over 50 games currently supported in the ALE.

For an overview of our goals for the ALE, read The Arcade Learning Environment: An Evaluation Platform for General Agents. If you use the ALE in your research, we ask that you please cite this paper in reference to the environment. See the Citing section for BibTeX entries.

Features

  • Object-oriented framework with support for adding agents and games.
  • Emulation core uncoupled from rendering and sound generation modules for fast emulation with minimal library dependencies.
  • Automatic extraction of game score and end-of-game signal for more than 100 Atari 2600 games.
  • Multi-platform code (compiled and tested under macOS, Windows, and several Linux distributions).
  • Python bindings through pybind11.
  • Native support for OpenAI Gym.
  • Visualization tools.

Quick Start

The ALE currently supports three different interfaces: C++, Python, and OpenAI Gym.

Python

You simply need to install the ale-py package distributed via PyPI:

pip install ale-py

Note: Make sure you're using an up-to-date version of pip or the install may fail.

You can now import the ALE in your Python projects with

from ale_py import ALEInterface

ale = ALEInterface()
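
Before loading a ROM you can tweak emulator settings through the interface's typed setters. Continuing the snippet above (a minimal sketch; random_seed and repeat_action_probability are standard ALE settings):

# Fix the emulator's random seed for reproducibility
ale.setInt('random_seed', 123)

# Enable sticky actions, as recommended by Machado et al. (2018)
ale.setFloat('repeat_action_probability', 0.25)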

ROM Management

The ALE doesn't distribute ROMs, but we do provide a couple of tools for managing your ROMs. The first is the command-line tool ale-import-roms. Simply specify a directory as the first argument and the tool will import every ROM supported by the ALE.

ale-import-roms roms/

[SUPPORTED]       breakout   roms/breakout.bin
[SUPPORTED]       freeway    roms/freeway.bin

[NOT SUPPORTED]              roms/custom.bin

Imported 2/3 ROMs

Furthermore, Python packages can expose ROMs for discovery using the special ale-py.roms entry point. For more details check out the example python-rom-package.
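
As a rough illustration, such a package could register the directory containing its ROMs via a setup.py entry point along these lines (a hypothetical my_rom_package; see the python-rom-package example for the authoritative layout):

from setuptools import setup

setup(
    name='my-rom-package',  # hypothetical package name
    version='0.1.0',
    packages=['my_rom_package'],
    # Ship the ROM files inside the package
    package_data={'my_rom_package': ['*.bin']},
    # Register under the special ale-py.roms entry point so that
    # ale-py can discover the ROMs this package provides.
    entry_points={'ale-py.roms': ['my-rom-package = my_rom_package']},
)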

Once you've imported a supported ROM, you can import its path from the ale_py.roms package and load it in the ALE:

from ale_py.roms import Breakout

ale.loadROM(Breakout)
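
From here, a minimal random agent needs only a handful of interface calls. A sketch using the act, game_over, and reset_game methods:

import random

from ale_py import ALEInterface
from ale_py.roms import Breakout

ale = ALEInterface()
ale.loadROM(Breakout)

# Play one episode, sampling uniformly from the legal action set.
legal_actions = ale.getLegalActionSet()
total_reward = 0
while not ale.game_over():
    total_reward += ale.act(random.choice(legal_actions))
print(f'Episode ended with score {total_reward}')
ale.reset_game()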

OpenAI Gym

Gym support is included in ale-py. Simply install the Python package using the instructions above. You can also install gym[atari], which installs ale-py alongside Gym.

From Gym v0.20 onwards, all Atari environments are provided via ale-py. We do recommend using the new v5 environments in the ALE namespace:

import gym

env = gym.make('ALE/Breakout-v5')

The v5 environments follow the latest methodology set out in Revisiting the Arcade Learning Environment by Machado et al.

The only major difference from Gym's AtariEnv is that we recommend supplying the render_mode keyword argument during environment initialization rather than calling the env.render() method. The human render mode gives you frame-perfect rendering, audio support, and proper resolution scaling. For more information, check out docs/gym-interface.md.
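
For example (a sketch against the classic Gym step API; the exact reset/step signatures depend on your Gym version):

import gym

# Ask for human rendering up front instead of calling env.render()
env = gym.make('ALE/Breakout-v5', render_mode='human')

obs = env.reset()
done = False
while not done:
    # Random policy; substitute your agent's action selection here
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()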

For more information on changes to the Atari environments in OpenAI Gym please check out the following blog post.

C++

The following instructions will assume you have a valid C++17 compiler and vcpkg installed.

We use CMake as a first-class citizen, and you can use the ALE directly with any CMake project. To compile and install the ALE you can run

mkdir build && cd build
cmake ../ -DCMAKE_BUILD_TYPE=Release
cmake --build . --target install

There are optional flags -DSDL_SUPPORT=ON/OFF to toggle SDL support (i.e., display_screen and sound support; OFF by default), -DBUILD_CPP_LIB=ON/OFF to build the ale-lib C++ target (ON by default), and -DBUILD_PYTHON_LIB=ON/OFF to build the pybind11 wrapper (ON by default).

Finally, you can link against the ALE in your own CMake project as follows:

find_package(ale REQUIRED)
target_link_libraries(YourTarget ale::ale-lib)

Citing

If you use the ALE in your research, we ask that you please cite the following.

M. G. Bellemare, Y. Naddaf, J. Veness and M. Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents, Journal of Artificial Intelligence Research, Volume 47, pages 253-279, 2013.

In BibTeX format:

@Article{bellemare13arcade,
    author = {{Bellemare}, M.~G. and {Naddaf}, Y. and {Veness}, J. and {Bowling}, M.},
    title = {The Arcade Learning Environment: An Evaluation Platform for General Agents},
    journal = {Journal of Artificial Intelligence Research},
    year = "2013",
    month = "jun",
    volume = "47",
    pages = "253--279",
}

If you use the ALE with sticky actions (flag repeat_action_probability), or if you use the different game flavours (mode and difficulty switches), we ask that you also cite the following:

M. C. Machado, M. G. Bellemare, E. Talvitie, J. Veness, M. J. Hausknecht, M. Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents, Journal of Artificial Intelligence Research, Volume 61, pages 523-562, 2018.

In BibTeX format:

@Article{machado18arcade,
    author = {Marlos C. Machado and Marc G. Bellemare and Erik Talvitie and Joel Veness and Matthew J. Hausknecht and Michael Bowling},
    title = {Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents},
    journal = {Journal of Artificial Intelligence Research},
    volume = {61},
    pages = {523--562},
    year = {2018}
}

More Repositories

  1. Gymnasium (Python, 6,383 stars): An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
  2. PettingZoo (Python, 2,553 stars): An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities
  3. HighwayEnv (Python, 2,506 stars): A minimalist environment for decision-making in autonomous driving
  4. Minigrid (Python, 2,051 stars): Simple and easily configurable grid world environments for reinforcement learning
  5. ViZDoom (C++, 1,723 stars): Reinforcement learning environments based on the 1993 game Doom
  6. chatarena (Python, 1,344 stars): A multi-agent language game environment for LLMs, built to develop the communication and collaboration capabilities of AIs
  7. D4RL (Python, 1,256 stars): A collection of reference environments for offline reinforcement learning
  8. Metaworld (Python, 1,178 stars): Collections of robotics environments geared towards benchmarking multi-task and meta reinforcement learning
  9. Miniworld (Python, 683 stars): Simple and easily configurable 3D FPS-game-like environments for reinforcement learning
  10. Gymnasium-Robotics (Python, 489 stars): A collection of robotics simulation environments for reinforcement learning
  11. SuperSuit (Python, 449 stars): A collection of wrappers for Gymnasium and PettingZoo environments (being merged into gymnasium.wrappers and pettingzoo.wrappers)
  12. MO-Gymnasium (Python, 282 stars): Multi-objective Gymnasium environments for reinforcement learning
  13. miniwob-plusplus (HTML, 276 stars): MiniWoB++, a web interaction benchmark for reinforcement learning
  14. MicroRTS (Java, 271 stars): A simple and highly efficient RTS-game-inspired environment for reinforcement learning
  15. Minari (Python, 268 stars): A standard format for offline reinforcement learning datasets, with popular reference datasets and related utilities
  16. MicroRTS-Py (Python, 219 stars): A simple and highly efficient RTS-game-inspired environment for reinforcement learning (formerly Gym-MicroRTS)
  17. MAgent2 (C++, 202 stars): An engine for high-performance multi-agent environments with very large numbers of agents, along with a set of reference environments
  18. D4RL-Evaluations (Python, 187 stars)
  19. stable-retro (C, 146 stars): Retro games for reinforcement learning
  20. Shimmy (Python, 129 stars): An API conversion tool for popular external reinforcement learning environments
  21. AutoROM (Python, 75 stars): A tool to automate installing Atari ROMs for the Arcade Learning Environment
  22. gym-examples (Python, 68 stars): Example code for the Gym documentation
  23. momaland (Python, 58 stars): Benchmarks for multi-objective multi-agent decision making
  24. Jumpy (Python, 45 stars): On-the-fly conversions between Jax and NumPy tensors
  25. gym-docs (41 stars): Code for the Gym documentation website
  26. Procgen2 (C++, 27 stars): Fast and procedurally generated side-scroller-game-like graphical environments (formerly Procgen)
  27. CrowdPlay (Jupyter Notebook, 26 stars): A web-based platform for collecting human actions in reinforcement learning environments
  28. TinyScaler (C, 19 stars): A small and fast image rescaling library with SIMD support
  29. minari-dataset-generation-scripts (Python, 15 stars): Scripts to recreate the D4RL datasets with Minari
  30. rlay (Rust, 8 stars): A relay between Gymnasium and any software
  31. gymnasium-env-template (Jinja, 7 stars): A template gymnasium environment for users to build upon
  32. A2Perf (Python, 4 stars): A benchmark for evaluating agents on sequential decision problems that are relevant to the real world; this repository contains code for running and evaluating participants' submissions on the benchmark platform
  33. farama.org (HTML, 2 stars)
  34. gym-notices (Python, 1 star)
  35. Celshast (Sass, 1 star)
  36. MPE2 (Python, 1 star): A set of communication-oriented environments
  37. Farama-Notifications (Python, 1 star): Allows for providing notifications on import to all Farama packages
  38. a2perf-circuit-training (Python, 1 star)
  39. a2perf-benchmark-submission (Python, 1 star)
  40. a2perf-web-nav (HTML, 1 star)
  41. a2perf-quadruped-locomotion (Python, 1 star)
  42. a2perf-reliability-metrics (Python, 1 star)
  43. a2perf-code-carbon (Jupyter Notebook, 1 star)