Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.


SUMO-RL

SUMO-RL provides a simple interface to instantiate Reinforcement Learning (RL) environments with SUMO for Traffic Signal Control.

Goals of this repository:

  • Provide a simple interface to work with Reinforcement Learning for Traffic Signal Control using SUMO
  • Support Multiagent RL
  • Compatibility with gymnasium.Env and popular RL libraries such as stable-baselines3 and RLlib
  • Easy customisation: state and reward definitions are easily modifiable

The main class is SumoEnvironment. If instantiated with the parameter single_agent=True, it behaves like a regular Gymnasium Env. For multi-agent environments, use env or parallel_env to instantiate a PettingZoo environment with the AEC or Parallel API, respectively. TrafficSignal is responsible for retrieving information from and acting on traffic lights via the TraCI API.
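
As a minimal illustration of the single-agent path (the file paths below are placeholders and the remaining settings are arbitrary, not library defaults):

from sumo_rl import SumoEnvironment

# Sketch of direct instantiation: with single_agent=True the environment
# behaves like a regular gymnasium.Env (placeholder paths, arbitrary settings).
env = SumoEnvironment(net_file='path_to_your_network.net.xml',
                      route_file='path_to_your_routefile.rou.xml',
                      single_agent=True,
                      use_gui=False,
                      num_seconds=3600)
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()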

For more details, check the documentation online.

Install

Install the latest SUMO version (the commands below use the Ubuntu PPA):

sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc

Don't forget to set the SUMO_HOME variable (the default SUMO installation path is /usr/share/sumo):

echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc
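
A quick way to confirm the variable is visible to Python before launching any simulation (a generic sanity check, not part of sumo-rl):

import os
import sys

# Fail early if SUMO_HOME is missing or does not point at a SUMO installation.
sumo_home = os.environ.get('SUMO_HOME')
if sumo_home is None or not os.path.isdir(sumo_home):
    sys.exit('SUMO_HOME is not set (expected something like /usr/share/sumo)')
print('Using SUMO from', sumo_home)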

Important: for a huge performance boost (~8x) with Libsumo, you can set the following environment variable:

export LIBSUMO_AS_TRACI=1

Note that you will not be able to run with sumo-gui or run multiple simulations in parallel while this is active.
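
The variable can also be set from a Python script, as long as it is set before sumo_rl (and thus TraCI/Libsumo) is imported; a minimal sketch:

import os

# Must be set before sumo_rl/traci is imported; GUI and parallel simulations
# remain unsupported while Libsumo is active.
os.environ['LIBSUMO_AS_TRACI'] = '1'

import sumo_rl  # imported after setting the variable on purpose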

Install SUMO-RL

The stable release version is available through pip:

pip install sumo-rl

Alternatively, you can install the latest (unreleased) version from source:

git clone https://github.com/LucasAlegre/sumo-rl
cd sumo-rl
pip install -e .

MDP - Observations, Actions and Rewards

Observation

The default observation for each traffic signal agent is a vector:

    obs = [phase_one_hot, min_green, lane_1_density,...,lane_n_density, lane_1_queue,...,lane_n_queue]
  • phase_one_hot is a one-hot encoded vector indicating the current active green phase
  • min_green is a binary variable indicating whether min_green seconds have already passed in the current phase
  • lane_i_density is the number of vehicles in incoming lane i divided by the total capacity of the lane
  • lane_i_queue is the number of queued (speed below 0.1 m/s) vehicles in incoming lane i divided by the total capacity of the lane

You can define your own observation by implementing a class that inherits from ObservationFunction and passing it to the environment constructor.
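
For instance, here is a sketch of a custom observation that exposes only the per-lane queues; the module paths, the observation_class parameter name, and the TrafficSignal helpers used below are assumptions to double-check against the sumo_rl source:

import numpy as np
from gymnasium import spaces

from sumo_rl import SumoEnvironment
from sumo_rl.environment.observations import ObservationFunction  # assumed module path
from sumo_rl.environment.traffic_signal import TrafficSignal      # assumed module path

class QueueObservationFunction(ObservationFunction):
    """Sketch: observe only the normalized queue of each incoming lane."""

    def __init__(self, ts: TrafficSignal):
        super().__init__(ts)

    def __call__(self) -> np.ndarray:
        # get_lanes_queue() is assumed to return one normalized value per incoming lane.
        return np.array(self.ts.get_lanes_queue(), dtype=np.float32)

    def observation_space(self) -> spaces.Box:
        return spaces.Box(low=0.0, high=1.0, shape=(len(self.ts.lanes),), dtype=np.float32)

env = SumoEnvironment(..., observation_class=QueueObservationFunction)  # parameter name assumed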

Action

The action space is discrete. Every 'delta_time' seconds, each traffic signal agent can choose the next green phase configuration.

E.g.: in the 2-way single intersection there are |A| = 4 discrete actions, each corresponding to a different green phase configuration.

Important: every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds.
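
As a naive illustration of how actions map to green phases, the loop below simply cycles through them in round-robin order; env is assumed to be a single-agent environment, created for example as in the Gymnasium API section below:

# Fixed-order controller: every delta_time seconds request the next green phase
# (yellow transitions are inserted automatically by the environment).
obs, info = env.reset()
action, done = 0, False
while not done:
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    action = (action + 1) % env.action_space.n  # next green phase configuration
env.close()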

Rewards

The default reward function is the change in cumulative vehicle delay:

    r_t = D_{t-1} - D_t

where D_t is the total delay at time-step t, i.e. the sum of the waiting times of all approaching vehicles. That is, the reward measures how much the total delay changed in relation to the previous time-step, and is positive when the delay decreases.

You can choose a different reward function (see the ones implemented in TrafficSignal) with the parameter reward_fn in the SumoEnvironment constructor.
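
Presumably the built-in rewards can also be selected by name; the exact string keys live in TrafficSignal, so treat the ones below as examples to verify:

# String keys are assumptions; check TrafficSignal for the registered names.
env = SumoEnvironment(..., reward_fn='queue')          # e.g. negative total queue
env = SumoEnvironment(..., reward_fn='average-speed')  # e.g. average vehicle speed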

It is also possible to implement your own reward function:

def my_reward_fn(traffic_signal):
    return traffic_signal.get_average_speed()

env = SumoEnvironment(..., reward_fn=my_reward_fn)

APIs (Gymnasium and PettingZoo)

Gymnasium Single-Agent API

If your network only has ONE traffic light, then you can instantiate a standard Gymnasium env (see Gymnasium API):

import gymnasium as gym
import sumo_rl
env = gym.make('sumo-rl-v0',
                net_file='path_to_your_network.net.xml',
                route_file='path_to_your_routefile.rou.xml',
                out_csv_name='path_to_output.csv',
                use_gui=True,
                num_seconds=100000)
obs, info = env.reset()
done = False
while not done:
    next_obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    done = terminated or truncated

PettingZoo Multi-Agent API

For multi-agent environments, you can use the PettingZoo API (see the PettingZoo API):

import sumo_rl
env = sumo_rl.parallel_env(net_file='nets/RESCO/grid4x4/grid4x4.net.xml',
                  route_file='nets/RESCO/grid4x4/grid4x4_1.rou.xml',
                  use_gui=True,
                  num_seconds=3600)
observations = env.reset()
while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}  # this is where you would insert your policy
    observations, rewards, terminations, truncations, infos = env.step(actions)
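
The AEC (turn-based) counterpart mentioned earlier uses sumo_rl.env together with the standard PettingZoo agent iteration loop; a minimal sketch on the same grid4x4 network:

import sumo_rl

env = sumo_rl.env(net_file='nets/RESCO/grid4x4/grid4x4.net.xml',
                  route_file='nets/RESCO/grid4x4/grid4x4_1.rou.xml',
                  use_gui=False,
                  num_seconds=3600)
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # Agents that are done must receive a None action in the AEC API
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()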

RESCO Benchmarks

In the folder nets/RESCO you can find the network and route files from RESCO (Reinforcement Learning Benchmarks for Traffic Signal Control), which was built on top of SUMO-RL. See their paper for results.

Experiments

WARNING: Gym 0.26 introduced many breaking changes; stable-baselines3 and RLlib do not support it yet, but are expected to be updated soon (see the Stable Baselines 3 PR and the RLlib PR). Hence, only the tabular Q-learning experiment currently runs without errors.

Check experiments for examples on how to instantiate an environment and train your RL agent.

Q-learning in a one-way single intersection:

python experiments/ql_single-intersection.py

RLlib A3C multiagent in a 4x4 grid:

python experiments/a3c_4x4grid.py

stable-baselines3 DQN in a 2-way single intersection:

python experiments/dqn_2way-single-intersection.py

Plotting results:

python outputs/plot.py -f outputs/2way-single-intersection/a3c

Citing

If you use this repository in your research, please cite:

@misc{sumorl,
    author = {Lucas N. Alegre},
    title = {{SUMO-RL}},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}

List of publications using SUMO-RL (please open a pull request to add missing entries):
