

[CVPR2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"


Demo username & password: osprey


A part of Along the River During the Qingming Festival (清明上河图)

Spirited Away (千与千寻)

Updates 📌

[2024/1/15]🔥 We released the evaluation code.

[2023/12/29]🔥 We released the training code and Osprey-724K dataset.

[2023/12/18]🔥 We released the code, Osprey-7b model and online demo for Osprey.

What is Osprey 👀

Osprey is a mask-text instruction tuning approach that extends MLLMs by incorporating pixel-wise mask regions into language instructions, enabling fine-grained visual understanding. Given an input mask region, Osprey generates semantic descriptions, including both a short description and a detailed description.

Our Osprey can seamlessly integrate with SAM in point-prompt, box-prompt and segment-everything modes to generate the semantics associated with specific parts or objects.
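
To make the SAM integration above concrete, here is a minimal sketch assuming a local RGB image and the ViT-B SAM checkpoint under checkpoints/. It obtains a mask from a single point prompt with Segment Anything's SamPredictor; the final describe_region call is a hypothetical placeholder for Osprey's region-description inference, not an API of this repository (the actual entry point is demo/app.py).

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load an RGB image as an HxWx3 uint8 array.
image = np.array(Image.open("example.jpg").convert("RGB"))

# Point-prompt segmentation with SAM.
sam = sam_model_registry["vit_b"](checkpoint="checkpoints/sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # (x, y) of the clicked point
    point_labels=np.array([1]),           # 1 marks a foreground point
    multimask_output=False,
)
mask = masks[0]  # boolean HxW mask for the selected region

# The mask region plus a language instruction then go to Osprey; describe_region
# is a hypothetical helper, not part of this repository's API.
# description = describe_region(image, mask, "Describe <region> in detail.")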

Watch Video Demo 🎥

Try Our Demo 🕹️

Online demo

Click 👇 to try our demo online.

web demo

username: osprey
password: osprey

Point

Box

Everything

Offline demo

💻 Requirements: this demo needs about 17GB of GPU memory in total, roughly 15GB for Osprey and 2GB for SAM.

  1. First install Gradio-Osprey-Demo.
  2. Install Segment Anything.
pip install git+https://github.com/facebookresearch/segment-anything.git
  3. Download the ViT-B SAM model to checkpoints (a download sketch follows this list).

  4. Run app.py.

cd demo
python app.py --model checkpoint/osprey_7b
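
For step 3, here is a minimal download sketch in Python; it assumes the standard ViT-B checkpoint URL published by the segment-anything project (verify it against the SAM repository) and a local checkpoints/ directory.

import urllib.request
from pathlib import Path

# Standard segment-anything ViT-B checkpoint URL (verify against the SAM repo).
url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth"
dest = Path("checkpoints") / "sam_vit_b_01ec64.pth"
dest.parent.mkdir(parents=True, exist_ok=True)
if not dest.exists():
    urllib.request.urlretrieve(url, str(dest))
print(f"SAM ViT-B checkpoint ready at {dest}")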

Install 🛠️

  1. Clone this repository and navigate to the Osprey folder
git clone https://github.com/CircleRadon/Osprey.git
cd Osprey
  2. Install packages
conda create -n osprey python=3.10 -y
conda activate osprey
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
  3. Install additional packages for training
pip install -e ".[train]"
pip install flash-attn --no-build-isolation

Dataset 🌟

All datasets for training can be found in Dataset preparation.

Osprey-724K: 🤗Hugging Face

Osprey-724K is an instruction dataset of mask-text pairs, containing around 724K GPT-generated multimodal dialogues that push MLLMs toward fine-grained, pixel-level image understanding. It covers object-level and part-level samples, plus additional instruction samples for robustness and flexibility.
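
To fetch the dataset programmatically, a minimal sketch with huggingface_hub is shown below; the repo id AntGroup-MI/Osprey-724K is an assumption here, so confirm it against the Hugging Face link above.

from huggingface_hub import snapshot_download

# NOTE: the repo id is an assumption; confirm it on the Osprey-724K dataset page.
local_dir = snapshot_download(repo_id="AntGroup-MI/Osprey-724K", repo_type="dataset")
print(f"Osprey-724K files downloaded to {local_dir}")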

Training 🚀

  • Stage1: Image-Text Alignment Pre-training

    • The pretrained projector weights for Convnext-large-CLIP can be found in projector weights.
  • Stage2: Mask-Text Alignment Pre-training

    • Download vicuna-7b-v1.5.
    • Download projector weights trained in stage1: projector weights.
    • Set model_name_or_path in stage2.sh to the path of vicuna-7b-v1.5.
    • Set pretrain_mm_mlp_adapter in stage2.sh to the path of mm_projector.
    • Set vision_tower in stage2.sh to the path of Convnext-large-CLIP-model.
    • Run sh scripts/stage2.sh.
  • Stage3: End-to-End Fine-tuning

    • Set model_name_or_path in stage3.sh to the path of the stage2 checkpoint.
    • Set vision_tower in stage3.sh to the path of Convnext-large-CLIP-model.
    • Run sh scripts/stage3.sh.

Checkpoints 🤖

  1. Convnext-large-CLIP-model 🤗: model
  2. Osprey-7b model 🤗: model

Then set "mm_vision_tower" in the config.json of the Osprey-7b model to the path of Convnext-large-CLIP-model.
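
This config edit can also be scripted; the sketch below assumes the two checkpoints live in local folders named osprey_7b/ and convnext-large-clip/ (adjust the paths to your actual download locations).

import json
from pathlib import Path

# Assumed local folder names; replace with your actual checkpoint paths.
config_path = Path("osprey_7b") / "config.json"
config = json.loads(config_path.read_text())
config["mm_vision_tower"] = str(Path("convnext-large-clip").resolve())
config_path.write_text(json.dumps(config, indent=2))
print("mm_vision_tower ->", config["mm_vision_tower"])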

Evaluation 🔎

See evaluation for details.

TODO List 📝

  • Release the checkpoints, inference codes and demo.
  • Release the dataset and training scripts.
  • Release the evaluation code.
  • Release the code for data generation pipeline.

Acknowledgement 💌

  • LLaVA-v1.5: the codebase we built upon.
  • SAM: the demo uses the segmentation results from SAM as the input to Osprey.

BibTeX 🖊️

@misc{Osprey,
  title={Osprey: Pixel Understanding with Visual Instruction Tuning},
  author={Yuqian Yuan and Wentong Li and Jian Liu and Dongqi Tang and Xinjie Luo and Chi Qin and Lei Zhang and Jianke Zhu},
  year={2023},
  eprint={2312.10032},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}