  • Stars: 428
  • Rank: 101,481 (Top 2%)
  • Language: Python
  • Created: over 1 year ago
  • Updated: 7 months ago


Repository Details

Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!

Multi-Modality Arena 🚀

Multi-Modality Arena is an evaluation platform for large multi-modality models. Following FastChat, two anonymous models are compared side-by-side on visual question-answering tasks. We release the Demo and welcome everyone to participate in this evaluation initiative.

⚔️ LVLM Arena | arXiv | GitHub Stars 🔥🔥🔥

LVLM-eHub - An Evaluation Benchmark for Large Vision-Language Models 🚀

LVLM-eHub is a comprehensive evaluation benchmark for publicly available large vision-language models (LVLMs). It extensively evaluates 8 LVLMs across 6 categories of multimodal capability, using 47 datasets and 1 online arena platform.

Update

  • Jun. 15, 2023. We release LVLM-eHub, an evaluation benchmark for large vision-language models. The code is coming soon.
  • Jun. 8, 2023. Thanks to Dr. Zhang, the author of VPGTrans, for his corrections. The authors of VPGTrans mainly come from NUS and Tsinghua University. We previously had some minor issues in our re-implementation of VPGTrans, and we found that its performance is actually better. Other model authors are welcome to contact us by email for discussion. Also, please follow our model ranking list, where more accurate results will be available.
  • May. 22, 2023. Thanks to Dr. Ye, the author of mPLUG-Owl, for his corrections. We fixed some minor issues in our implementation of mPLUG-Owl.

Supported Multi-modality Models

The following models currently participate in randomized battles:

More details about these models can be found at ./model_detail/.model.jpg. We will try to schedule computing resources to host more multi-modality models in the arena.
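
As in Chatbot Arena, votes from these battles are typically aggregated into a leaderboard with an Elo-style rating update. Below is a minimal sketch of one such update; the K-factor, starting ratings, and function names are our own illustrative choices, not the platform's actual ranking code.

# Minimal Elo-style rating update for anonymous pairwise battles.
# Constants and names are illustrative assumptions, not the
# platform's actual ranking implementation.

def expected_score(rating_a: float, rating_b: float) -> float:
    # Probability that model A beats model B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, winner: str, k: float = 32.0):
    # winner is "a", "b", or "tie"; returns the updated pair of ratings.
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    exp_a = expected_score(rating_a, rating_b)
    delta = k * (score_a - exp_a)
    return rating_a + delta, rating_b - delta

# Example: both models start at 1000 and model A wins one battle.
a, b = update_elo(1000.0, 1000.0, winner="a")  # -> (1016.0, 984.0)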

Contact Us on WeChat

If you are interested in any part of our VLarena platform, feel free to join the WeChat group.

Installation

  1. Create a conda environment
conda create -n arena python=3.10
conda activate arena
  2. Install the packages required to run the controller and server
pip install numpy gradio uvicorn fastapi
  3. Each model may require conflicting versions of Python packages, so we recommend creating a separate environment for each model based on its GitHub repo; see the example below.
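
For example, a per-model environment might look like the following; the environment name is illustrative, and the actual dependencies come from each model's own repository:

conda create -n minigpt4 python=3.10
conda activate minigpt4
pip install -r requirements.txt  # dependencies listed in the model's own repo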

Launch a Demo

To serve using the web UI, you need three main components: web servers that interface with users, model workers that host two or more models, and a controller to coordinate the web server and model workers.

Here are the commands to follow in your terminal:

Launch the controller

python controller.py

This controller manages the distributed workers.

Launch the model worker(s)

python model_worker.py --model-name SELECTED_MODEL --device TARGET_DEVICE

Wait until the process finishes loading the model and you see "Uvicorn running on ...". The model worker will register itself with the controller. For each model worker, you need to specify the model and the device you want to use.
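
For intuition, the handshake between a model worker and the controller could look like the minimal sketch below. The endpoint path, ports, and payload fields are illustrative assumptions, not this repository's actual API; see controller.py and model_worker.py for the real implementation.

# Minimal sketch of a model worker registering with a controller.
# Endpoint names, ports, and payload fields are assumptions for
# illustration; see controller.py and model_worker.py for the real API.
import requests
import uvicorn
from fastapi import FastAPI

CONTROLLER_URL = "http://localhost:21001"  # assumed controller address
WORKER_URL = "http://localhost:21002"      # this worker's own address

app = FastAPI()

@app.post("/generate")
def generate(payload: dict):
    # A real worker would run the selected model on the chosen device here.
    return {"text": "stub answer for: " + payload.get("prompt", "")}

def register_with_controller():
    # Tell the controller this worker exists and which model it hosts.
    requests.post(
        CONTROLLER_URL + "/register_worker",
        json={"worker_url": WORKER_URL, "model_name": "SELECTED_MODEL"},
        timeout=5,
    )

if __name__ == "__main__":
    register_with_controller()
    uvicorn.run(app, host="0.0.0.0", port=21002)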

Launch the Gradio web server

python server_demo.py

This is the user interface that users will interact with.
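
Putting it together, a complete single-machine launch could look like the following, with each command in its own terminal. The model name and device below are illustrative placeholders:

python controller.py
python model_worker.py --model-name MiniGPT-4 --device cuda:0
python server_demo.py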

By following these steps, you will be able to serve your models using the web UI. You can now open your browser and chat with a model. If the models do not show up, try restarting the Gradio web server.

Acknowledgement

We express our gratitude to the esteemed team at Chatbot Arena and their paper "Judging LLM-as-a-Judge" for their influential work, which inspired our LVLM evaluation endeavors. We also extend our sincere appreciation to the providers of LVLMs, whose valuable contributions have significantly advanced large vision-language models. Finally, we thank the providers of the datasets used in our LVLM-eHub.

Term of Use

The project is an experimental research tool for non-commercial purposes only. It has limited safeguards and may generate inappropriate content. It must not be used for anything illegal, harmful, violent, racist, or sexual.

More Repositories

1. InternVL (Python, 5,753 stars): [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance)
2. LLaMA-Adapter (Python, 5,717 stars): [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
3. DragGAN (Python, 4,996 stars): Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (a full-featured DragGAN implementation with an online demo and local deployment; code and models fully open source; supports Windows, macOS, and Linux)
4. InternGPT (Python, 3,198 stars): InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
5. Ask-Anything (Python, 2,984 stars): [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
6. InternImage (Python, 2,502 stars): [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions
7. InternVideo (Python, 1,392 stars): [ECCV2024] Video Foundation Models & Data for Multimodal Understanding
8. VisionLLM (Python, 874 stars): VisionLLM Series
9. VideoMamba (Python, 787 stars): [ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding
10. OmniQuant (Python, 691 stars): [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
11. VideoMAEv2 (Python, 486 stars): [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
12. DCNv4 (Python, 463 stars): [CVPR 2024] Deformable Convolution v4
13. all-seeing (Python, 452 stars): [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
14. GITM (445 stars): Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory
15. Vision-RWKV (Python, 352 stars): Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures
16. CaFo (Python, 344 stars): [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners
17. PonderV2 (Python, 311 stars): PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
18. LAMM (Python, 296 stars): [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
19. UniFormerV2 (Python, 280 stars): [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer
20. unmasked_teacher (Python, 276 stars): [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models
21. OmniCorpus (Python, 259 stars): OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
22. HumanBench (Python, 231 stars): Official implementation of HumanBench (CVPR 2023)
23. Instruct2Act (Python, 223 stars): Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
24. EfficientQAT (Python, 198 stars): EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
25. gv-benchmark (Python, 189 stars): General Vision Benchmark, GV-B, a project from OpenGVLab
26. ControlLLM (Python, 181 stars): ControlLLM: Augment Language Models with Tools by Searching on Graphs
27. InternVideo2 (152 stars)
28. UniHCP (Python, 149 stars): Official PyTorch implementation of UniHCP
29. efficient-video-recognition (Python, 114 stars)
30. SAM-Med2D (Jupyter Notebook, 114 stars): Official implementation of SAM-Med2D
31. EgoVideo (Jupyter Notebook, 103 stars): [CVPR 2024 Champions] Solutions for EgoVis Challenges in CVPR 2024
32. DiffRate (Jupyter Notebook, 86 stars): [ICCV 23] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging, while incorporating a differentiable compression rate.
33. MMT-Bench (Python, 85 stars): ICML'2024 | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI
34. Awesome-DragGAN (75 stars): Awesome-DragGAN: A curated list of papers, tutorials, and repositories related to DragGAN
35. MM-NIAH (Python, 70 stars): Official implementation of the paper "Needle In A Multimodal Haystack"
36. M3I-Pretraining (69 stars)
37. STM-Evaluation (Python, 69 stars)
38. MUTR (Python, 65 stars): [AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation
39. LCL (Python, 63 stars): Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
40. ChartAst (Python, 60 stars): ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.
41. LORIS (Python, 54 stars): Long-Term Rhythmic Video Soundtracker, ICML 2023
42. DDPS (Python, 53 stars): Official implementation of "Denoising Diffusion Semantic Segmentation with Mask Prior Modeling"
43. Awesome-LLM4Tool (52 stars): A curated list of papers, repositories, tutorials, and anything related to large language models for tools
44. PIIP (Python, 51 stars): NeurIPS 2024 Spotlight ⭐️ Parameter-Inverted Image Pyramid Networks (PIIP)
45. InternVL-MMDetSeg (Jupyter Notebook, 50 stars): Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed
46. GUI-Odyssey (Python, 47 stars): GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents, consisting of 7,735 episodes from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combos.
47. Siamese-Image-Modeling (Python, 33 stars): [CVPR 2023] Implementation of Siamese Image Modeling for Self-Supervised Vision Representation Learning
48. De-focus-Attention-Networks (Python, 28 stars): Learning 1D Causal Visual Representation with De-focus Attention Networks
49. Multitask-Model-Selector (Python, 27 stars): Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector"
50. Official-ConvMAE-Det (Python, 13 stars)
51. perception_test_iccv2023 (Python, 13 stars): Champion solutions repository for the Perception Test challenges in the ICCV 2023 workshop.
52. opengvlab.github.io (12 stars)
53. MovieMind (9 stars)
54. EmbodiedGPT (5 stars)
55. DriveMLM (3 stars)
56. .github (2 stars)