LAMM
LAMM (pronounced /læm/, meaning "cute lamb" to show appreciation for LLaMA) is a growing open-source community that helps researchers and developers quickly train and evaluate Multi-modal Large Language Models (MLLMs), and further build multi-modal AI agents capable of bridging the gap between ideas and execution, enabling seamless interaction between humans and AI machines.
Updates
[2023-11]
- ChEF and Octavius are available!
- ChEF and Octavius are released on arXiv!
- The camera-ready version of LAMM is available on arXiv.
[2023-09]
- LAMM is accepted to the NeurIPS 2023 Datasets and Benchmarks Track! See you in December!
- Training LAMM on V100 or RTX 3090 is now supported! Fine-tuning LLaMA2 is online.
- Our demo has moved to OpenXLab.
[2023-07]
- Checkpoints and the leaderboard of LAMM on Hugging Face have been updated to the new code base.
- Evaluation code for both 2D and 3D tasks is ready.
- Command-line demo tools updated.
[2023-06]
- Watch the demo video for LAMM on YouTube or Bilibili!
- The full paper with appendix is available on arXiv.
- The LAMM dataset is released on Hugging Face and OpenDataLab for the research community!
- LAMM code is available for the research community!
Paper List
Publications
Preprints
Citation
LAMM
@article{yin2023lamm,
title={LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark},
author={Yin, Zhenfei and Wang, Jiong and Cao, Jianjian and Shi, Zhelun and Liu, Dingning and Li, Mukai and Sheng, Lu and Bai, Lei and Huang, Xiaoshui and Wang, Zhiyong and others},
journal={arXiv preprint arXiv:2306.06687},
year={2023}
}
ChEF
@misc{shi2023chef,
title={ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models},
author={Zhelun Shi and Zhipin Wang and Hongxing Fan and Zhenfei Yin and Lu Sheng and Yu Qiao and Jing Shao},
year={2023},
eprint={2311.02692},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Octavius
@misc{chen2023octavius,
title={Octavius: Mitigating Task Interference in MLLMs via MoE},
author={Zeren Chen and Ziqin Wang and Zhen Wang and Huayang Liu and Zhenfei Yin and Si Liu and Lu Sheng and Wanli Ouyang and Yu Qiao and Jing Shao},
year={2023},
eprint={2311.02684},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Get Started
Please see the tutorial for the basic usage of this repo.
License
The project is released under the CC BY-NC 4.0 license (non-commercial use only), and models trained using the dataset must not be used outside of research purposes.