  • Stars: 1,344
  • Rank: 34,950 (top 0.7%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 1 year ago
  • Updated: 6 months ago

Repository Details

ChatArena (or Chat Arena) is a library of multi-agent language game environments for LLMs. Its goal is to develop the communication and collaboration capabilities of AI agents.

🏟 ChatArena

Multi-Agent Language Game Environments for LLMs

ChatArena is a library that provides multi-agent language game environments and facilitates research about autonomous LLM agents and their social interactions. It provides the following features:

  • Abstraction: it provides a flexible framework to define multiple players, environments, and the interactions between them, based on the Markov Decision Process formulation.
  • Language Game Environments: it provides a set of environments that help with understanding, benchmarking, or training LLM agents.
  • User-friendly Interfaces: it provides both a Web UI and a CLI for developing and prompt-engineering your LLM agents to act in environments.

ChatArena Architecture

Getting Started

Try our online demo; a demo video is also available.

Installation

Requirements:

  • Python >= 3.7
  • OpenAI API key (optional, for using GPT-3.5-turbo or GPT-4 as an LLM agent)

Install with pip:

pip install chatarena

or install from source:

pip install git+https://github.com/chatarena/chatarena

To use an OpenAI model (such as GPT-3.5-turbo or GPT-4) as an LLM agent, set your OpenAI API key:

export OPENAI_API_KEY="your_api_key_here"
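
If you prefer to set the key from Python (for example, in a notebook), you can set the same environment variable before creating any OpenAI-backed agents. This is a generic sketch using only the standard library, not a ChatArena-specific API:

import os

# Equivalent to the shell export above; set this before constructing any
# OpenAI-backed agents so the backend can read the key from the environment.
os.environ["OPENAI_API_KEY"] = "your_api_key_here"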

Optional Dependencies

By default, pip install chatarena only installs the dependencies necessary for ChatArena's core functionality. You can install optional dependencies with the following commands:

pip install chatarena[all_backends] # install dependencies for all supported backends: anthropic, cohere, huggingface, etc.
pip install chatarena[all_envs]     # install dependencies for all environments, such as pettingzoo
pip install chatarena[all]          # install all optional dependencies for full functionality

Launch the Demo Locally

The quickest way to see ChatArena in action is via the demo Web UI. To launch the demo on your local machine, first pip install chatarena with the extra gradio dependency, then git clone this repository, and finally run app.py from the root directory of the repository:

pip install chatarena[gradio]
git clone https://github.com/chatarena/chatarena.git
cd chatarena
gradio app.py

This will launch a demo server for ChatArena, and you can access it from your browser (port 8080).

Check out this video to learn how to use the Web UI: Web UI demo video

For Developers

For an introduction to the ChatArena framework, please refer to this document. For a walkthrough of building a new environment, check the Open In Colab notebook.

Here we provide a compact guide to the minimal setup needed to run a game, along with some general advice on customization.

Key Concepts

  1. Arena: an Arena encapsulates an environment and a collection of players. It drives the main loop of the game and provides HCI utilities such as the Web UI, the CLI, configuration loading, and data storage.
  2. Environment: the environment stores the game state and executes the game logic that transitions between states. It also renders observations for the players; observations are expressed in natural language.
    1. The game state is not directly visible to the players. Players can only see the observations.
  3. Language Backend: language backends are the source of language intelligence. A backend takes text (or a collection of texts) as input and returns text in response.
  4. Player: the player is an agent that plays the game. In RL terminology, it is a policy: a stateless function mapping observations to actions. (A sketch of how these pieces compose follows this list.)
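
To make these concepts concrete, here is a minimal sketch of how they typically compose. The import paths and constructor arguments for Player, OpenAIChat, and Conversation are assumptions based on the library layout described above, so check the installed package for the exact API.

# Sketch only: the module paths and constructor arguments below are assumptions,
# not verified against a specific chatarena release.
from chatarena.agent import Player                             # Player: backend + role prompt
from chatarena.backends import OpenAIChat                      # Language Backend
from chatarena.environments.conversation import Conversation   # Environment
from chatarena.arena import Arena                              # Arena: drives the main loop

# Two players sharing the same kind of backend but with different role descriptions.
alice = Player(name="Alice", role_desc="You are a teacher.", backend=OpenAIChat())
bob = Player(name="Bob", role_desc="You are a student.", backend=OpenAIChat())

# The environment keeps the hidden game state and renders natural-language
# observations; it only needs to know the player names.
env = Conversation(player_names=[alice.name, bob.name])

# The Arena ties players and environment together and runs the game loop.
arena = Arena(players=[alice, bob], environment=env)
arena.run(num_steps=10)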

Run the Game with Python API

Load Arena from a config file -- here we use examples/nlp-classroom-3players.json in this repository as an example:

from chatarena.arena import Arena  # assumed import path for Arena

arena = Arena.from_config("examples/nlp-classroom-3players.json")
arena.run(num_steps=10)

Run the game in an interactive CLI interface:

arena.launch_cli()

Check out this video to learn how to use the CLI: CLI demo video. A more detailed guide on running the main interaction loop with finer-grained control can be found here.
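
As a rough illustration of that finer-grained control, the sketch below steps the arena manually instead of calling arena.run. The reset and step methods and the terminal flag on the returned timestep are assumptions about the Arena interface; follow the guide linked above for the actual API.

from chatarena.arena import Arena

arena = Arena.from_config("examples/nlp-classroom-3players.json")

# Manual main loop (sketch): reset(), step(), and `terminal` are assumed names,
# not a verified API; inspect the returned timestep for observations and rewards.
timestep = arena.reset()
while not timestep.terminal:
    timestep = arena.step()  # one player acts and the environment transitions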

General Customization Advice

  1. Arena: overriding Arena essentially means writing your own main loop. This allows different interaction interfaces or driving games in a more automated manner, for example running an online RL training loop.
  2. Environment: a new environment corresponds to a new game; you can define the game dynamics with hard-coded rules, or with a mixture of rules and a language backend.
  3. Backend: if you need to change how observations (messages) are formatted into queries for the language model, override the backend.
  4. Player: by default, when a new observation arrives, a player queries the language backend and returns the response as its action. You can also customize the way players interact with the language backend.

Creating your Custom Environment

You can define your own environment by extending the Environment class. Here are the general steps (a skeleton sketch follows the list):

  1. Define the class by inheriting from the Environment base class and setting type_name, then add the class to ALL_ENVIRONMENTS.
  2. Initialize the class by defining the __init__ method (its arguments define the corresponding config) and initializing the class attributes.
  3. Implement the game mechanics in the step method.
  4. Handle game states and rewards by implementing methods such as reset, get_observation, is_terminal, and get_rewards.
  5. Develop role description prompts (and a global prompt if necessary) for players using CLI or Web UI and save them to a config file.
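
Below is a skeleton that follows these steps. The base-class import path, the exact method signatures, and the registration mechanism are assumptions drawn from the step list above, so adapt them to the actual Environment base class in the package.

# Skeleton only: import paths, signatures, and the registration mechanism are
# assumptions based on the steps above, not a verified chatarena API.
from chatarena.environments.base import Environment      # assumed base-class location
from chatarena.environments import ALL_ENVIRONMENTS      # assumed registry location


class GuessingGame(Environment):      # hypothetical example environment
    type_name = "guessing_game"       # step 1: set type_name

    def __init__(self, player_names, max_turns=10, **kwargs):
        # step 2: the __init__ arguments define the corresponding config fields;
        # player_names is assumed to be stored by the base class.
        super().__init__(player_names=player_names, **kwargs)
        self.max_turns = max_turns
        self.reset()

    def reset(self):
        # step 4: (re)initialize the hidden game state
        self.turn = 0
        self.message_pool = []

    def step(self, player_name, action):
        # step 3: game mechanics; record the action and advance the state
        self.message_pool.append((player_name, action))
        self.turn += 1

    def get_observation(self, player_name=None):
        # step 4: render what the given player is allowed to see
        return self.message_pool

    def is_terminal(self):
        return self.turn >= self.max_turns

    def get_rewards(self):
        # step 4: placeholder reward rule (zero reward for everyone)
        return {name: 0.0 for name in self.player_names}


# step 1 (continued): register the environment so it can be referenced by name.
ALL_ENVIRONMENTS.append(GuessingGame)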

We provide a detailed tutorial demonstrating how to define a custom environment, using the Chameleon environment as an example.

If you want to port an existing library's environment to ChatArena, check out the PettingZooChess environment as an example.

List of Environments

Conversation

A multi-player language game environment that simulates a conversation.

  • NLP Classroom: a 3-player language game environment that simulates a classroom setting. The game is played in turns, and each turn a player can either ask a question or answer a question. The game ends when all players have asked and answered all questions.

Moderator Conversation

Based on conversation, but with a moderator that controls the game dynamics.

  • Rock-paper-scissors: a 2-player language game environment that simulates a rock-paper-scissors game with moderator conversation. Both players act in parallel, and the game ends when one player wins 2 rounds.
  • Tic-tac-toe: a 2-player language game environment that simulates a tic-tac-toe game with moderator conversation. The game is played in turns, and each turn a player can either ask for a move or make a move. The game ends when one player wins or the board is full.

Chameleon

A multi-player social deduction game. There are two roles in the game: chameleon and non-chameleon. The topic of the secret word is first revealed to all players, and then the secret word itself is revealed to the non-chameleons. The chameleon does not know the secret word. The objective in the game depends on the role of the player:

  • If you are not a chameleon, your goal is to reveal the chameleon without exposing the secret word.
  • If you are a chameleon, your aim is to blend in with the other players, avoid being caught, and figure out the secret word.

There are three stages in the game:

  1. The clue-giving stage: each player describes clues about the secret word.
  2. The accusation stage: each player votes for the player who is most likely the chameleon. The chameleon should vote for another player.
  3. The guessing stage: if the accusation is correct, the chameleon guesses the secret word from the clues revealed by the other players.

PettingZooChess

A two-player chess game environment that uses the PettingZoo Chess environment.

PettingZooTicTacToe

A two-player tic-tac-toe game environment that uses the PettingZoo TicTacToe environment. Unlike the Moderator Conversation environment, this environment is driven by hard-coded rules rather than an LLM moderator.

Contributing

We welcome contributions to improve and extend ChatArena. Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Commit your changes to the new branch.
  4. Create a pull request describing your changes.
  5. We will review your pull request and provide feedback or merge your changes.

Please ensure your code follows the existing style and structure.

Citation

If you find ChatArena useful for your research, please cite our repository (our arxiv paper is coming soon):

@software{ChatArena,
  author = {Yuxiang Wu and Zhengyao Jiang and Akbir Khan and Yao Fu and Laura Ruis and Edward Grefenstette and Tim Rocktäschel},
  title = {ChatArena: Multi-Agent Language Game Environments for Large Language Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  version = {0.1},
  howpublished = {\url{https://github.com/chatarena/chatarena}},
}

Contact

If you have any questions or suggestions, feel free to open an issue or submit a pull request. You can also contact us on the Farama Discord server: https://discord.gg/Vrtdmu9Y8Q

Happy chatting!

Sponsors

We would like to thank our sponsors for supporting this project.

More Repositories

  1. Gymnasium (Python, 6,383 stars): An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym).
  2. PettingZoo (Python, 2,553 stars): An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.
  3. HighwayEnv (Python, 2,506 stars): A minimalist environment for decision-making in autonomous driving.
  4. Arcade-Learning-Environment (C++, 2,106 stars): The Arcade Learning Environment (ALE), a platform for AI research.
  5. Minigrid (Python, 2,051 stars): Simple and easily configurable grid world environments for reinforcement learning.
  6. ViZDoom (C++, 1,723 stars): Reinforcement learning environments based on the 1993 game Doom.
  7. D4RL (Python, 1,256 stars): A collection of reference environments for offline reinforcement learning.
  8. Metaworld (Python, 1,178 stars): Collections of robotics environments geared towards benchmarking multi-task and meta reinforcement learning.
  9. Miniworld (Python, 683 stars): Simple and easily configurable 3D FPS-game-like environments for reinforcement learning.
  10. Gymnasium-Robotics (Python, 489 stars): A collection of robotics simulation environments for reinforcement learning.
  11. SuperSuit (Python, 449 stars): A collection of wrappers for Gymnasium and PettingZoo environments (being merged into gymnasium.wrappers and pettingzoo.wrappers).
  12. MO-Gymnasium (Python, 282 stars): Multi-objective Gymnasium environments for reinforcement learning.
  13. miniwob-plusplus (HTML, 276 stars): MiniWoB++, a web interaction benchmark for reinforcement learning.
  14. MicroRTS (Java, 271 stars): A simple and highly efficient RTS-game-inspired environment for reinforcement learning.
  15. Minari (Python, 268 stars): A standard format for offline reinforcement learning datasets, with popular reference datasets and related utilities.
  16. MicroRTS-Py (Python, 219 stars): A simple and highly efficient RTS-game-inspired environment for reinforcement learning (formerly Gym-MicroRTS).
  17. MAgent2 (C++, 202 stars): An engine for high-performance multi-agent environments with very large numbers of agents, along with a set of reference environments.
  18. D4RL-Evaluations (Python, 187 stars).
  19. stable-retro (C, 146 stars): Retro games for reinforcement learning.
  20. Shimmy (Python, 129 stars): An API conversion tool for popular external reinforcement learning environments.
  21. AutoROM (Python, 75 stars): A tool to automate installing Atari ROMs for the Arcade Learning Environment.
  22. gym-examples (Python, 68 stars): Example code for the Gym documentation.
  23. momaland (Python, 58 stars): Benchmarks for multi-objective multi-agent decision making.
  24. Jumpy (Python, 45 stars): On-the-fly conversions between Jax and NumPy tensors.
  25. gym-docs (41 stars): Code for the Gym documentation website.
  26. Procgen2 (C++, 27 stars): Fast and procedurally generated side-scroller-game-like graphical environments (formerly Procgen).
  27. CrowdPlay (Jupyter Notebook, 26 stars): A web-based platform for collecting human actions in reinforcement learning environments.
  28. TinyScaler (C, 19 stars): A small and fast image rescaling library with SIMD support.
  29. minari-dataset-generation-scripts (Python, 15 stars): Scripts to recreate the D4RL datasets with Minari.
  30. rlay (Rust, 8 stars): A relay between Gymnasium and any software.
  31. gymnasium-env-template (Jinja, 7 stars): A template gymnasium environment for users to build upon.
  32. A2Perf (Python, 4 stars): A benchmark for evaluating agents on sequential decision problems that are relevant to the real world; this repository contains code for running and evaluating participants' submissions on the benchmark platform.
  33. farama.org (HTML, 2 stars).
  34. gym-notices (Python, 1 star).
  35. Celshast (Sass, 1 star).
  36. MPE2 (Python, 1 star): A set of communication-oriented environments.
  37. Farama-Notifications (Python, 1 star): Allows for providing notifications on import to all Farama packages.
  38. a2perf-circuit-training (Python, 1 star).
  39. a2perf-benchmark-submission (Python, 1 star).
  40. a2perf-web-nav (HTML, 1 star).
  41. a2perf-quadruped-locomotion (Python, 1 star).
  42. a2perf-reliability-metrics (Python, 1 star).
  43. a2perf-code-carbon (Jupyter Notebook, 1 star).