Multi-Modality Arena
Multi-Modality Arena is an evaluation platform for large multi-modality models. Following FastChat, two anonymous models are compared side-by-side on a visual question-answering task. We release the Demo and welcome everyone to participate in this evaluation initiative.
LVLM-eHub - An Evaluation Benchmark for Large Vision-Language Models
LVLM-eHub is a comprehensive evaluation benchmark for publicly available large vision-language models (LVLMs). It extensively evaluates their performance across a wide range of multimodal capabilities and datasets.
Update
- Jun. 15, 2023. We release [LVLM-eHub], an evaluation benchmark for large vision-language models. The code is coming soon.
- Jun. 8, 2023. Thanks to Dr. Zhang, the author of VPGTrans, for his corrections. The authors of VPGTrans mainly come from NUS and Tsinghua University. We previously had some minor issues in our re-implementation of VPGTrans, and its actual performance is better than we reported. Authors of other models are welcome to contact me by email for discussion. Also, please follow our model ranking list, where more accurate results will be available.
- May. 22, 2023. Thanks to Dr. Ye, the author of mPLUG-Owl, for his corrections. We have fixed some minor issues in our implementation of mPLUG-Owl.
Supported Multi-modality Models
The following models are currently involved in randomized battles:
- KAUST/MiniGPT-4
- Salesforce/BLIP2
- Salesforce/InstructBLIP
- DAMO Academy/mPLUG-Owl
- NTU/Otter
- University of Wisconsin-Madison/LLaVA
- Shanghai AI Lab/llama_adapter_v2
- NUS/VPGTrans
More details about these models can be found at ./model_detail/.model.jpg. We will try to schedule computing resources to host more multi-modality models in the arena.
Contact Us on WeChat
If you are interested in any part of our VLarena platform, feel free to join the WeChat group.
Installation
- Create conda environment
conda create -n arena python=3.10
conda activate arena
- Install packages required to run the controller and server
pip install numpy gradio uvicorn fastapi
- Each model may require conflicting versions of Python packages, so we recommend creating a dedicated environment for each model based on its GitHub repo, as sketched below.
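For example, here is a minimal sketch of a dedicated environment for one model. The repository URL, environment name, and requirements file below are assumptions for illustration; follow the installation instructions in the model's own GitHub repo.
# Hypothetical example: a separate environment for one model.
# Replace MODEL_REPO_URL with the model's GitHub repository; requirements.txt
# is an assumption, so follow the dependency instructions that repo provides.
conda create -n arena-minigpt4 python=3.10
conda activate arena-minigpt4
git clone MODEL_REPO_URL minigpt4
pip install -r minigpt4/requirements.txt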
Launch a Demo
To serve using the web UI, you need three main components: a web server that interfaces with users, model workers that host two or more models, and a controller that coordinates the web server and model workers.
Here are the commands to follow in your terminal:
Launch the controller
python controller.py
This controller manages the distributed workers.
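The controller must stay up while you launch the workers and the web server. If you are working from a single terminal, a simple sketch (using plain shell tools, not a flag of controller.py itself) is to run it in the background and keep its log:
# Optional: run the controller in the background and capture its output.
nohup python controller.py > controller.log 2>&1 &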
Launch the model worker(s)
python model_worker.py --model-name SELECTED_MODEL --device TARGET_DEVICE
Wait until the process finishes loading the model and you see "Uvicorn running on ...". The model worker will register itself to the controller. For each model worker, you need to specify the model and the device you want to use.
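For example, to put two models into the arena, launch one worker per model with the flags shown above. The model names below are placeholders; pass whichever identifiers your model_worker.py accepts.
# Hypothetical example: one worker per model, each pinned to its own GPU.
python model_worker.py --model-name MiniGPT-4 --device cuda:0
python model_worker.py --model-name BLIP2 --device cuda:1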
Launch the Gradio web server
python server_demo.py
This is the user interface that users will interact with.
By following these steps, you will be able to serve your models using the web UI. You can now open your browser and chat with a model. If the models do not show up, try restarting the Gradio web server.
Acknowledgement
We express our gratitude to the esteemed team at ChatBot Arena and their paper Judging LLM-as-a-judge for their influential work, which inspired our LVLM evaluation endeavors. We would also like to extend our sincere appreciation to the providers of LVLMs, whose valuable work has significantly contributed to the progress and advancement of large vision-language models. Finally, we thank the providers of the datasets used in our LVLM-eHub.
Terms of Use
The project is an experimental research tool for non-commercial purposes only. It has limited safeguards and may generate inappropriate content. It must not be used for anything illegal, harmful, violent, racist, or sexual.