  • Stars: 160
  • Rank: 234,703 (Top 5%)
  • Language: Python
  • License: MIT License
  • Created about 2 years ago
  • Updated 5 months ago


Repository Details

POPGym: Partially Observable Process Gym

POPGym is designed to benchmark memory in deep reinforcement learning. It contains a set of environments and a collection of memory model baselines. The full paper is available on OpenReview.

Please see the documentation for advanced installation instructions and examples. The environment quickstart will get you up and running in a few minutes.

Quickstart Install

pip install popgym # base environments only, only requires numpy and gymnasium
pip install --use-pep517 "popgym[navigation]" # also include navigation environments, which require mazelib
pip install "popgym[baselines]" # environments and memory baselines

POPGym Environments

POPGym contains Partially Observable Markov Decision Process (POMDP) environments that follow the Gymnasium interface. POPGym environments have minimal dependencies and are fast enough to solve on a laptop CPU in less than a day. We provide the following environments:

| Environment | Tags | Temporal Ordering | Colab FPS | Macbook Air (2020) FPS |
|---|---|---|---|---|
| Battleship | Game | None | 117,158 | 235,402 |
| Concentration | Game | Weak | 47,515 | 157,217 |
| Higher Lower | Game, Noisy | None | 24,312 | 76,903 |
| Labyrinth Escape | Navigation | Strong | 1,399 | 41,122 |
| Labyrinth Explore | Navigation | Strong | 1,374 | 30,611 |
| Minesweeper | Game | None | 8,434 | 32,003 |
| Multiarmed Bandit | Noisy | None | 48,751 | 469,325 |
| Autoencode | Diagnostic | Strong | 121,756 | 251,997 |
| Count Recall | Diagnostic, Noisy | None | 16,799 | 50,311 |
| Repeat First | Diagnostic | None | 23,895 | 155,201 |
| Repeat Previous | Diagnostic | Strong | 50,349 | 136,392 |
| Position Only Cartpole | Control | Strong | 73,622 | 218,446 |
| Velocity Only Cartpole | Control | Strong | 69,476 | 214,352 |
| Noisy Position Only Cartpole | Control, Noisy | Strong | 6,269 | 66,891 |
| Position Only Pendulum | Control | Strong | 8,168 | 26,358 |
| Noisy Position Only Pendulum | Control, Noisy | Strong | 6,808 | 20,090 |
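The environments above follow the standard Gymnasium `reset`/`step` interface. As a minimal, self-contained sketch of that interaction loop, here is a toy stand-in inspired by the Repeat Previous diagnostic task (the agent must output the observation from the previous step); the class and its reward scheme are illustrative only, not the actual POPGym implementation:

```python
import random


class RepeatPreviousToy:
    """Toy stand-in for a POPGym-style diagnostic POMDP: the agent is
    rewarded for repeating the observation it saw on the previous step.
    Illustrative only; the real environments live in the popgym package."""

    def __init__(self, episode_length=10, num_symbols=4):
        self.episode_length = episode_length
        self.num_symbols = num_symbols

    def reset(self, seed=None):
        self.rng = random.Random(seed)
        self.t = 0
        obs = self.rng.randrange(self.num_symbols)
        self.prev_obs = obs
        return obs, {}  # Gymnasium-style (observation, info)

    def step(self, action):
        # Reward 1 when the action matches the previous observation.
        reward = 1.0 if action == self.prev_obs else 0.0
        obs = self.rng.randrange(self.num_symbols)
        self.prev_obs = obs
        self.t += 1
        terminated = self.t >= self.episode_length
        # Gymnasium-style (obs, reward, terminated, truncated, info)
        return obs, reward, terminated, False, {}


def run_episode(env, seed=0):
    """Standard Gymnasium interaction loop: reset, then step until done."""
    obs, info = env.reset(seed=seed)
    memory = obs  # one step of memory is exactly what this task requires
    total_reward, done = 0.0, False
    while not done:
        obs, reward, terminated, truncated, info = env.step(memory)
        memory = obs
        total_reward += reward
        done = terminated or truncated
    return total_reward


total = run_episode(RepeatPreviousToy())
```

A memoryless (purely reactive) policy cannot score well here, which is the point of the benchmark: the optimal action depends on past observations, not the current one.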

Feel free to rerun this benchmark using this colab notebook.

POPGym Baselines

POPGym Baselines implements recurrent and memory models in an efficient manner. It is built on top of RLlib using its custom model API. We provide the following baselines:

  1. MLP
  2. Positional MLP
  3. Framestacking (Paper)
  4. Temporal Convolution Networks (Paper)
  5. Elman Networks (Paper)
  6. Long Short-Term Memory (Paper)
  7. Gated Recurrent Units (Paper)
  8. Independently Recurrent Neural Networks (Paper)
  9. Fast Autoregressive Transformers (Paper)
  10. Fast Weight Programmers (Paper)
  11. Legendre Memory Units (Paper)
  12. Diagonal State Space Models (Paper)
  13. Differentiable Neural Computers (Paper)
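The recurrent baselines above all share one pattern: a hidden state carried across timesteps that summarizes the observation history. As a minimal NumPy sketch of that pattern, here is a single Elman cell (baseline 5) rolled over a sequence; the weight shapes and random initialization are illustrative, not the repository's implementation:

```python
import numpy as np


def elman_cell(x, h, W_xh, W_hh, b_h):
    """One Elman RNN step: mix the current input with the previous
    hidden state through a tanh nonlinearity."""
    return np.tanh(x @ W_xh + h @ W_hh + b_h)


def run_sequence(xs, hidden_size, seed=0):
    """Roll the cell over a sequence, carrying the hidden state (the
    'memory') forward -- the structure all recurrent baselines share."""
    rng = np.random.default_rng(seed)
    input_size = xs.shape[-1]
    W_xh = rng.normal(0.0, 0.1, (input_size, hidden_size))
    W_hh = rng.normal(0.0, 0.1, (hidden_size, hidden_size))
    b_h = np.zeros(hidden_size)

    h = np.zeros(hidden_size)
    states = []
    for x in xs:
        h = elman_cell(x, h, W_xh, W_hh, b_h)
        states.append(h)
    return np.stack(states)  # shape: (seq_len, hidden_size)


states = run_sequence(np.ones((5, 3)), hidden_size=8)
```

The other baselines differ mainly in how this state update is computed (gating in LSTMs/GRUs, attention in transformers, structured state matrices in state space models), not in the overall sequential structure.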

Leaderboard

The leaderboard is available at paperswithcode.

Contributing

Follow the code style and ensure the tests pass:

pip install pre-commit
pre-commit install
pytest popgym/tests

Citing

@inproceedings{morad2023popgym,
  title={{POPG}ym: Benchmarking Partially Observable Reinforcement Learning},
  author={Steven Morad and Ryan Kortvelesy and Matteo Bettini and Stephan Liwicki and Amanda Prorok},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=chDrutUTs0K}
}
