A simple and highly efficient RTS-game-inspired environment for reinforcement learning

microRTS is a small implementation of an RTS game, designed to perform AI research. The advantage of using microRTS with respect to using a full-fledged game like Wargus or StarCraft (using BWAPI) is that microRTS is much simpler, and can be used to quickly test theoretical ideas, before moving on to full-fledged RTS games.

By default, microRTS is deterministic and real-time (i.e. players can issue actions simultaneously, and actions are durative). However, configuration flags let you experiment with both fully observable and partially observable games, as well as with deterministic and non-deterministic settings. The implementation also includes a collection of hard-coded AI techniques and game-tree search techniques (such as variants of minimax, Monte Carlo search, and Monte Carlo Tree Search).
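To illustrate the flavor of the bundled search techniques, here is a toy sketch of plain Monte Carlo action selection. This is not microRTS code: the actions, payoffs, and noise model below are invented for illustration; in the real game a "playout" would be a simulated match rather than a noisy payoff sample.

```java
import java.util.Random;

public class MonteCarloSketch {
    // Evaluate each candidate action by averaging the payoff of many
    // random playouts, then pick the action with the best average.
    static int selectAction(double[] truePayoffs, int playouts, long seed) {
        Random rng = new Random(seed);
        int best = -1;
        double bestMean = Double.NEGATIVE_INFINITY;
        for (int a = 0; a < truePayoffs.length; a++) {
            double sum = 0.0;
            for (int i = 0; i < playouts; i++) {
                // A "playout" here is just the true payoff plus Gaussian
                // noise; in a real RTS it would be a simulated game.
                sum += truePayoffs[a] + rng.nextGaussian();
            }
            double mean = sum / playouts;
            if (mean > bestMean) {
                bestMean = mean;
                best = a;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] payoffs = {0.1, 0.9, 0.4}; // action 1 has the best payoff
        int chosen = selectAction(payoffs, 1000, 42L);
        System.out.println("chosen action = " + chosen);
    }
}
```

With enough playouts, the noise averages out and the action with the highest underlying payoff is selected; the Monte Carlo Tree Search variants shipped with microRTS refine this basic idea by growing a search tree instead of sampling each action independently.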

microRTS was developed by Santiago Ontañón.

MicroRTS-Py will eventually be updated, maintained, and made compliant with the standards of the Farama Foundation (https://farama.org/project_standards). However, this is currently a lower priority than other projects we're working to maintain. If you'd like to contribute to development, you can join our Discord server here: https://discord.gg/jfERDCSw

To see what microRTS looks like when a human plays, watch this YouTube video.

If you are interested in testing your algorithms against other people's, there is an annual microRTS competition; see the competition website for more information. Previous editions were organized at IEEE-CIG 2017 and IEEE-CIG 2018, and this year's edition is organized at IEEE-COG 2019 (note the conference's change of name).

To cite microRTS, please cite this paper:

Santiago Ontañón (2013) The Combinatorial Multi-Armed Bandit Problem and its Application to Real-Time Strategy Games. In AIIDE 2013, pp. 58-64.

Setting up microRTS in an IDE

Watch this YouTube video to learn how to acquire microRTS and set up a project using NetBeans.

Reinforcement Learning in microRTS

If you'd like to use reinforcement learning with microRTS, check out this project: https://github.com/vwxyzjn/gym-microrts

Executing microRTS through the terminal

If you want to build and run microRTS from source using the command line, clone or download this repository and run the following commands in the root folder of the project to compile the source code:

Linux or Mac OS:

javac -cp "lib/*:src" -d bin src/rts/MicroRTS.java # to build

Windows:

javac -cp "lib/*;src" -d bin src/rts/MicroRTS.java # to build

Generating a JAR file

You can join all compiled source files and dependencies into a single JAR file, which can be executed on its own. In order to create a JAR file for microRTS:

javac -cp "lib/*:src" -d bin $(find . -name "*.java") # compile source files
cd bin
find ../lib -name "*.jar" | xargs -n 1 jar xvf # extract the contents of the JAR dependencies
jar cvf microrts.jar $(find . -name '*.class' -type f) # create a single JAR file with sources and dependencies

Executing microRTS

To execute microRTS from compiled class files:

java -cp "lib/*:bin" rts.MicroRTS # on Linux/Mac
java -cp "lib/*;bin" rts.MicroRTS # on Windows

To execute microRTS from the JAR file:

java -cp microrts.jar rts.MicroRTS

Which class to execute

microRTS has multiple entry points, and for experimentation purposes you might eventually want to create your own class if none of the base ones suit your needs (see the "tests" folder for examples), but a default one is the gui.frontend.FrontEnd class, which opens the default GUI. To execute microRTS in this way, use the following command:

java -cp microrts.jar gui.frontend.FrontEnd

Another, more expansive entry point is the rts.MicroRTS class. It can start microRTS in several modes: client mode (attempts to connect to a server that will provide commands to a bot), server mode (tries to connect to a client in order to control a bot), standalone mode (runs a single game and exits), or GUI mode (opens the default GUI).

The rts.MicroRTS class accepts multiple initialization parameters, either from the command line or from a properties file. A list of all the acceptable command-line arguments can be accessed through the following command:

java -cp microrts.jar rts.MicroRTS -h

An example of a properties file is provided in the resources directory. microRTS can be started using a properties file with the following command:

java -cp microrts.jar rts.MicroRTS -f my_file.properties
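As a rough sketch, such a properties file might look like the fragment below. The key names shown here are assumptions chosen for illustration and may not match the actual file; consult the example in the resources directory (and the `-h` output) for the properties rts.MicroRTS really accepts.

```properties
# Hypothetical example -- check the resources directory for the
# actual property names supported by rts.MicroRTS.
launch_mode=STANDALONE
map_location=maps/16x16/basesWorkers16x16.xml
max_cycles=5000
ai1=ai.abstraction.WorkerRush
ai2=ai.abstraction.LightRush
```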

Instructions

[in-game instructions image]
