  • Stars: 202
  • Rank: 193,691 (Top 4%)
  • Language: C++
  • License: MIT License
  • Created over 7 years ago
  • Updated over 1 year ago

Repository Details

A Real-Time-Strategy game for Deep Learning research

DeepRTS is a high-performance Real-Time-Strategy game for Reinforcement Learning research. It is written in C++ for performance and provides a Python interface for easier integration with machine-learning toolkits. Deep RTS can process the game at over 6,000,000 steps per second, and 2,000,000 steps per second when rendering graphics. Compared to other solutions, such as StarCraft, this is over 15,000% faster simulation time, running on an Intel i7-8700K with an Nvidia RTX 2080 TI.

The aim of Deep RTS is to bring a more affordable and sustainable solution to RTS AI research by reducing computation time.
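
As a rough illustration, the step rate can be measured by stepping the environment in a tight loop. The sketch below is not from the DeepRTS documentation; it assumes the GeneralAI_1v1 scenario and the Config.Map.THIRTYONE constant used in the Minimal Example further down, and the measured rate will depend on hardware and configuration.

import time
from DeepRTS.python import Config, scenario

# Rough throughput sketch (assumes the scenario API shown in the Minimal Example below).
env = scenario.GeneralAI_1v1(Config.Map.THIRTYONE)
env.reset()

steps = 0
start = time.perf_counter()
while time.perf_counter() - start < 5.0:  # step for roughly five seconds
    _, _, done, _ = env.step(0)           # any of the 15 discrete actions will do
    steps += 1
    if done:
        env.reset()

print(f"~{steps / (time.perf_counter() - start):,.0f} steps per second (no rendering)")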

It is recommended to use the master branch for the newest (and usually best) version of the environment. I am grateful for any input regarding improvements to the environment.

Please use the following citation when using this in your work!

@INPROCEEDINGS{8490409,
  author={P. {Andersen} and M. {Goodwin} and O. {Granmo}},
  booktitle={2018 IEEE Conference on Computational Intelligence and Games (CIG)},
  title={Deep RTS: A Game Environment for Deep Reinforcement Learning in Real-Time Strategy Games},
  year={2018},
  volume={},
  number={},
  pages={1-8},
  keywords={computer games;convolution;feedforward neural nets;learning (artificial intelligence);multi-agent systems;high-performance RTS game;artificial intelligence research;deep reinforcement learning;real-time strategy games;computer games;RTS AIs;Deep RTS game environment;StarCraft II;Deep Q-Network agent;cutting-edge artificial intelligence algorithms;Games;Learning (artificial intelligence);Machine learning;Planning;Ground penetrating radar;Geophysical measurement techniques;real-time strategy game;deep reinforcement learning;deep q-learning},
  doi={10.1109/CIG.2018.8490409},
  ISSN={2325-4270},
  month={Aug},
}

Dependencies

  • Python >= 3.9.1

Installation

Method 1 (From Git Repo)

sudo pip3 install git+https://github.com/cair/DeepRTS.git

Method 2 (Clone & Build)

git clone https://github.com/cair/deep-rts.git
cd deep-rts
git submodule sync
git submodule update --init
sudo pip3 install .
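
Either way, a quick sanity check is to import the Python bindings; the import below matches the Minimal Example further down:

python3 -c "from DeepRTS.python import Config, scenario; print('DeepRTS imported OK')"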

Available maps

10x10-2-FFA
15x15-2-FFA
21x21-2-FFA
31x31-2-FFA
31x31-4-FFA
31x31-6-FFA

Scenarios

Deep RTS features scenarios, which are pre-built mini-games. These mini-games are well suited to training agents on specific tasks, or to testing algorithms in different problem setups. The benefit of using scenarios is that you can trivially design reward functions from criteria that each output a reward/punishment signal depending on completion of the task (see the sketch after the list below). Examples of such tasks are to:

  • collect 1000 gold
  • do 100 damage
  • take 1000 damage
  • defeat 5 enemies
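
The snippet below is a hypothetical sketch of such a criterion in plain Python; it is not the actual DeepRTS criteria API, but it illustrates how a task (here, collecting 1000 gold) can be turned into a reward signal.

class CollectGoldCriterion:
    """Hypothetical criterion: emits a one-time reward once a gold target is reached."""

    def __init__(self, target_gold=1000):
        self.target_gold = target_gold
        self.completed = False

    def evaluate(self, player_gold):
        # Reward +1 the first time the target is reached, 0 otherwise.
        if not self.completed and player_gold >= self.target_gold:
            self.completed = True
            return 1.0
        return 0.0

# Several criteria can be summed into a single scalar reward per step.
criteria = [CollectGoldCriterion(1000)]
reward = sum(c.evaluate(player_gold=1200) for c in criteria)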

Deep RTS currently implements the following scenarios:

GoldCollectFifteen
GeneralAIOneVersusOne

Minimal Example

import random

from DeepRTS.python import Config
from DeepRTS.python import scenario

if __name__ == "__main__":
    random_play = True
    episodes = 100

    for i in range(episodes):
        # Create a 1v1 scenario on the 31x31 map and reset it to its initial state.
        env = scenario.GeneralAI_1v1(Config.Map.THIRTYONE)
        state = env.reset()
        done = False

        while not done:
            # Player 0 takes a random action (the action space has 15 discrete actions).
            env.game.set_player(env.game.players[0])
            action = random.randrange(15)
            next_state, reward, done, _ = env.step(action)
            state = next_state

            if done:
                break

            # Player 1 takes a random action.
            env.game.set_player(env.game.players[1])
            action = random.randrange(15)
            next_state, reward, done, _ = env.step(action)
            state = next_state
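
In this example, env.game.set_player switches which of the two players the following env.step call controls, so each loop iteration lets both players act once; the agent simply samples one of the 15 discrete actions at random, and a learning agent would replace random.randrange(15) with its policy.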

In-Game Footage

10x10 - 2 Player - free-for-all

15x15 - 2 Player - free-for-all

21x21 - 2 Player - free-for-all

31x31 - 2 Player - free-for-all

31x31 - 4 Player - free-for-all

31x31 - 6 Player - free-for-all

More Repositories

1. TsetlinMachine (Cython, 444 stars)
   Code and datasets for the Tsetlin Machine

2. Fire-Detection-Image-Dataset (208 stars)
   This dataset contains normal images and images with fire. It is highly unbalanced to reflect real-world situations. It consists of a variety of scenarios and different fire situations (intensity, luminosity, size, environment etc.).

3. pyTsetlinMachine (C, 120 stars)
   Implements the Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, Weighted Tsetlin Machine, and Embedding Tsetlin Machine, with support for continuous features, multigranularity, clause indexing, and literal budget

4. tmu (Python, 106 stars)
   Implements the Tsetlin Machine, Coalesced Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetlin Machine, with support for continuous features, drop clause, Type III Feedback, focused negative sampling, multi-task classifier, autoencoder, literal budget, and one-vs-one multi-class classifier. TMU is written in Python with wrappers for C and CUDA-based clause evaluation and updating.

5. pyVNC (Python, 82 stars)
   VNC Client Library for Python

6. fast-tsetlin-machine-with-mnist-demo (C, 61 stars)
   A fast Tsetlin Machine implementation employing bit-wise operators, with MNIST demo.

7. convolutional-tsetlin-machine-tutorial (Python, 51 stars)
   Tutorial on the Convolutional Tsetlin Machine

8. TextUnderstandingTsetlinMachine (Cuda, 48 stars)
   Using the Tsetlin Machine to learn human-interpretable rules for high-accuracy text categorization with medical applications

9. PyTsetlinMachineCUDA (Python, 41 stars)
   Massively Parallel and Asynchronous Architecture for Logic-based AI

10. pyTsetlinMachineParallel (C, 39 stars)
    Multi-threaded implementation of the Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, and Weighted Tsetlin Machine, with support for continuous features and multigranularity.

11. TsetlinMachineBook (Jupyter Notebook, 31 stars)
    Python code accompanying the book "An Introduction to Tsetlin Machines".

12. FlashRL (Python, 26 stars)

13. fast-tsetlin-machine-in-cuda-with-imdb-demo (Cuda, 26 stars)
    A CUDA implementation of the Tsetlin Machine based on bitwise operators

14. deep_maze (Python, 22 stars)

15. open-tsetlin-machine (17 stars)
    Open Source Tsetlin Machine framework

16. TsetlinMachineC (C, 14 stars)
    A C implementation of the Tsetlin Machine

17. rl (C++, 10 stars)

18. awesome-tsetlin-machine (10 stars)
    A curated list of Tsetlin Machine research

19. regression-tsetlin-machine (Python, 9 stars)
    Implementation of the Regression Tsetlin Machine

20. deep-warehouse (Python, 7 stars)
    A simulator for complex logistic environments

21. TM-XOR-proof (Cython, 5 stars)
    #tsetlin-machine #machine-learning #game-theory #propositional-logic #pattern-recognition #bandit-learning #frequent-pattern-mining #learning-automata

22. Axis_and_Allies (Python, 5 stars)
    A simple Axis & Allies engine.

23. python-fast-tsetlin-machine (C, 3 stars)
    Python wrapper for https://github.com/cair/fast-tsetlin-machine-with-mnist-demo

24. tmu-datasets (Python, 3 stars)
    A dataset repository for datasets in tmu

25. ICML-Massively-Parallel-and-Asynchronous-Tsetlin-Machine-Architecture (Python, 3 stars)
    Code repository for the ICML 2021 paper "Massively Parallel and Asynchronous Tsetlin Machine Architecture"

26. ikt111 (Python, 3 stars)

27. notebooks (Jupyter Notebook, 2 stars)
    A collection of Jupyter notebooks

28. Fire-Scene-Parsing (2 stars)

29. py_image_stitcher (Python, 1 star)
    A small library for stitching together images from NumPy or PIL sources

30. deep-line-wars (Python, 1 star)

31. Docker-Tutorial (Python, 1 star)
    A Docker tutorial for cair-gpu's

32. ray-bugfix (Python, 1 star)
    A workaround for issues with RLlib when it does not work for your current Gym environment. CarRacing-v0 is one of these.

33. fire (Python, 1 star)

34. deep-line-wars-2 (C++, 1 star)

35. Deterministic-Tsetlin-Machine (Python, 1 star)
    Due to the high energy consumption and scalability challenges of deep learning, there is a critical need to shift research focus towards dealing with energy consumption constraints. Tsetlin Machines (TMs) are a recent approach to machine learning that has demonstrated significantly reduced energy usage compared to neural networks alike, while performing competitively accuracy-wise on several benchmarks. However, TMs rely heavily on energy-costly random number generation to stochastically guide a team of Tsetlin Automata to a Nash Equilibrium of the TM game. In this paper, we propose a novel finite-state learning automaton that can replace the Tsetlin Automata in TM learning, for increased determinism. The new automaton uses multi-step deterministic state jumps to reinforce sub-patterns. Simultaneously, flipping a coin to skip every d'th state update ensures diversification by randomization. The d-parameter thus allows the degree of randomization to be finely controlled. E.g., d=1 makes every update random and d=infinity makes the automaton completely deterministic. Our empirical results show that, overall, only substantial degrees of determinism reduce accuracy. Energy-wise, random number generation constitutes switching energy consumption of the TM, saving up to 11 mW power for larger datasets with high d values. We can thus use the new d-parameter to trade off accuracy against energy consumption, to facilitate low-energy machine learning.

36. DeepAxie (C++, 1 star)
    Implementation of a simplified Axie Infinity environment in C++ that is used to train an agent with the reinforcement learning algorithm DQN to play the game.

37. Tsetlin-Machine-Deep-Neural-Network-Recommendation-System-Comparison (Jupyter Notebook, 1 star)

38. LogicalTransformerWithTsetlinMachine (Python, 1 star)