
[NeurIPS 2023] Generating Mario Levels with GPT2. Code for the paper "MarioGPT: Open-Ended Text2Level Generation through Large Language Models" https://arxiv.org/abs/2302.05981

MarioGPT: Open-Ended Text2Level Generation through Large Language Models

[Figures: Playing Generated Level | Generated Level]

How does it work?

[Figures: Architecture | Example Prompt Generations]

MarioGPT is a fine-tuned GPT-2 model (specifically, distilgpt2) trained on a subset of Super Mario Bros and Super Mario Bros: The Lost Levels levels, provided by The Video Game Level Corpus. MarioGPT is able to generate levels, guided by a simple text prompt. This generation is not perfect, but we believe this is a great first step towards more controllable and diverse level / environment generation. Forward generation:

[Animation: forward generation]
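
For intuition, a level is just text: each tile is one character, and the model generates the level as a flat sequence of vertical columns. The sketch below illustrates this idea; the specific tile characters and the 14-tile column height are assumptions based on the VGLC encoding and the num_steps=1400 default used later (roughly 100 columns), not a guaranteed format.

# Illustrative sketch only: tile characters ('X' = ground, '-' = empty) and the
# 14-tile column height are assumptions, not the library's guaranteed format.
COLUMN_HEIGHT = 14  # assumed; num_steps=1400 would then correspond to ~100 columns

flat_level = ("-" * 12 + "XX") * 5  # five identical columns, mostly empty with ground at the bottom

# regroup the flat token string into columns, then transpose into printable rows
columns = [flat_level[i:i + COLUMN_HEIGHT] for i in range(0, len(flat_level), COLUMN_HEIGHT)]
rows = ["".join(col[r] for col in columns) for r in range(COLUMN_HEIGHT)]
print("\n".join(rows))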

Requirements

  • python3.8+

Installation

From PyPI:

pip install mario-gpt

or from source:

git clone git@github.com:shyamsn97/mario-gpt.git
cd mario-gpt
python setup.py install
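
Either way, a quick import check is an easy way to confirm the install worked (MarioLM is the main entry point used throughout this README):

# optional sanity check: if this prints, the package is importable
python -c "from mario_gpt import MarioLM; print('mario-gpt installed')"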

Generating Levels

Since our models are built on the amazing transformers library, we host our model at https://huggingface.co/shyamsn97/Mario-GPT2-700-context-length
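
Because it is a regular transformers checkpoint, the raw weights can also be pulled straight from the Hub for inspection, as sketched below. This assumes the checkpoint loads with the stock AutoModelForCausalLM class; for actual prompted level generation, use MarioLM, which wraps the model and handles prompt conditioning.

# Sketch: inspect the raw Hub checkpoint with plain transformers.
# Assumption: the checkpoint loads with the stock AutoModelForCausalLM class.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("shyamsn97/Mario-GPT2-700-context-length")
print(model.config.model_type)                     # expected: "gpt2"
print(sum(p.numel() for p in model.parameters()))  # rough parameter count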

The following snippet is the minimal code you need to generate a Mario level!

from mario_gpt import MarioLM, SampleOutput

# pretrained_model = shyamsn97/Mario-GPT2-700-context-length

mario_lm = MarioLM()

# use cuda to speed stuff up
# import torch
# device = torch.device('cuda')
# mario_lm = mario_lm.to(device)

prompts = ["many pipes, many enemies, some blocks, high elevation"]

# generate level of size 1400, pump temperature up to ~2.4 for more stochastic but playable levels
generated_level = mario_lm.sample(
    prompts=prompts,
    num_steps=1400,
    temperature=2.0,
    use_tqdm=True
)

# show string list
generated_level.level

# show PIL image
generated_level.img

# save image
generated_level.img.save("generated_level.png")

# save text level to file
generated_level.save("generated_level.txt")

# play in interactive
generated_level.play()

# run Astar agent
generated_level.run_astar()

# Continue generation
generated_level_continued = mario_lm.sample(
    seed=generated_level,
    prompts=prompts,
    num_steps=1400,
    temperature=2.0,
    use_tqdm=True
)

# load from text file
loaded_level = SampleOutput.load("generated_level.txt")

# play from loaded (should be the same level that we generated)
loaded_level.play()
...
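
The temperature comment above suggests values up to ~2.4 give more varied but still playable levels. A small sweep like the one below makes it easy to compare; it only reuses the calls shown above, with a shorter num_steps (an assumption that shorter levels are fine for a quick comparison).

# Hypothetical temperature sweep, reusing mario_lm and prompts from above.
# num_steps is reduced so each sample finishes quickly.
for temperature in [1.0, 2.0, 2.4]:
    level = mario_lm.sample(
        prompts=prompts,
        num_steps=700,
        temperature=temperature,
        use_tqdm=True
    )
    level.img.save(f"level_temp_{temperature}.png")
    level.save(f"level_temp_{temperature}.txt")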

Training

The code to train MarioGPT is simple and straightforward; the training class is located here, along with a small example notebook.

import torch
from mario_gpt import MarioDataset, MarioLM, TrainingConfig, MarioGPTTrainer

# create basic gpt model
BASE = "distilgpt2"
mario_lm = MarioLM(lm_path=BASE, tokenizer_path=BASE)

# create dataset
dataset = MarioDataset(mario_lm.tokenizer)

# create training config and trainer
config = TrainingConfig(save_iteration=10)
trainer = MarioGPTTrainer(mario_lm, dataset, config=config)

# train for 100 iterations!
trainer.train(100, batch_size=1)

See the notebook for a more in-depth tutorial on generating levels.
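
After training, the fine-tuned mario_lm can be sampled with the same API as in the Generating Levels section. A short sketch (note that 100 iterations from the distilgpt2 base is only a smoke test, so don't expect coherent levels yet):

# Sketch: sample from the freshly trained model with the same API as above.
# After only 100 iterations from the distilgpt2 base this is a smoke test;
# the output will likely not resemble a playable level yet.
generated_level = mario_lm.sample(
    prompts=["many pipes, many enemies, some blocks, high elevation"],
    num_steps=140,
    temperature=2.0,
    use_tqdm=True
)
print(generated_level.level)
generated_level.img.save("trained_sample.png")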

Interacting with Levels

Right now there are two ways to interact with generated levels:

  1. Hugging Face demo -- Thanks to the amazing work by multimodalart, you can generate and play levels interactively in the browser! In addition, GPUs are provided, so you don't have to own one yourself.
  2. Using the play and run_astar methods. These require Java (Java 8+ tested) to be installed on your computer. For interactive play, use the play() method; for the A* agent, use the run_astar() method. Example:
from mario_gpt import MarioLM

mario_lm = MarioLM()

prompts = ["many pipes, many enemies, some blocks, high elevation"]

generated_level = mario_lm.sample(
    prompts=prompts,
    num_steps=1400,
    temperature=2.0,
    use_tqdm=True
)

# play in interactive
generated_level.play()

# run Astar agent
generated_level.run_astar()

Future Plans

Here's a list of some stuff that will be added to the codebase!

  • Basic inference code
  • Add MarioBert Model
  • Add Interactive simulator
  • Training code from paper
  • Inpainting functionality from paper
  • Open-ended level generation code
  • Different generation methods (eg. constrained beam search, etc.)

Authors

Shyam Sudhakaran [email protected], https://github.com/shyamsn97, https://shyamsn97.github.io/

Miguel González-Duque [email protected], https://github.com/miguelgondu

Claire Glanois [email protected], https://github.com/claireaoi

Matthias Freiberger [email protected], https://github.com/matfrei

Elias Najarro [email protected], https://github.com/enajx

Sebastian Risi [email protected], https://github.com/sebastianrisi, https://sebastianrisi.com/

Citation

If you use the code for academic or commercial use, please cite the associated paper:

@misc{https://doi.org/10.48550/arxiv.2302.05981,
  doi = {10.48550/ARXIV.2302.05981},
  url = {https://arxiv.org/abs/2302.05981},
  author = {Sudhakaran, Shyam and González-Duque, Miguel and Glanois, Claire and Freiberger, Matthias and Najarro, Elias and Risi, Sebastian},
  keywords = {Artificial Intelligence (cs.AI), Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {MarioGPT: Open-Ended Text2Level Generation through Large Language Models},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}