
ViT-Lens

[CVPR 2024] ViT-Lens: Towards Omni-modal Representations

[Project Homepage] [arXiv]

TL;DR: We present ViT-Lens, an approach that advances omni-modal representation learning by pairing a pretrained ViT with modality-specific Lenses so that it can comprehend diverse modalities.

[Figure: ViT-Lens omni-modal overview]

[Figure: ViT-Lens capabilities]
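
Conceptually, each modality is first embedded by a modality-specific tokenizer, a lightweight Lens maps those tokens into the input space of a frozen pretrained ViT, and the ViT output is projected into a shared embedding space aligned with an anchor model. The sketch below is purely illustrative; the class names, dimensions, and module layout are assumptions for exposition, not the repository's actual implementation.

# Conceptual sketch only: names, dimensions, and structure are illustrative assumptions.
import torch
import torch.nn as nn

class Lens(nn.Module):
    """Cross-attends a small set of learned latents to modality tokens,
    producing a sequence the frozen ViT can consume."""
    def __init__(self, in_dim, vit_dim, num_latents=64, num_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, vit_dim))
        self.proj = nn.Linear(in_dim, vit_dim)
        self.attn = nn.MultiheadAttention(vit_dim, num_heads, batch_first=True)

    def forward(self, tokens):                       # tokens: (B, N, in_dim)
        kv = self.proj(tokens)                       # map to ViT width
        q = self.latents.expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)                # latents attend to modality tokens
        return out                                   # (B, num_latents, vit_dim)

class ViTLensSketch(nn.Module):
    """Modality tokenizer -> Lens -> frozen pretrained ViT -> projection head."""
    def __init__(self, tokenizer, lens, frozen_vit_blocks, head):
        super().__init__()
        self.tokenizer, self.lens, self.vit, self.head = tokenizer, lens, frozen_vit_blocks, head
        for p in self.vit.parameters():              # the pretrained ViT stays frozen
            p.requires_grad = False

    def forward(self, raw_input):
        vit_tokens = self.lens(self.tokenizer(raw_input))
        feats = self.vit(vit_tokens)                 # (B, num_latents, vit_dim)
        return self.head(feats.mean(dim=1))          # pooled embedding in the shared space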

📢 News

  • [2023.12.13] We release training code and models of ViT-Lens.
  • [2023.11.28] We upgrade ViT-Lens with more supported modalities and applications. Stay tuned for the release of code and models [arXiv paper].
  • [2023.08.22] We release the arXiv paper, inference code and checkpoints for the 3D modality [arXiv paper].

📝 Todo

  • Models for more modalities.
  • Code for ViT-Lens integration with InstructBLIP and SEED.
  • Online demo for ViT-Lens integration with InstructBLIP and SEED.

🔨 Installation

conda create -n vit-lens python=3.8.8 -y
conda activate vit-lens

# Install pytorch>=1.9.0 
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch -y

# Install ViT-Lens
git clone https://github.com/TencentARC/ViT-Lens.git
cd ViT-Lens/
pip install -e vitlens/
pip install -r vitlens/requirements-training.txt
Environment setup for training/inference on OpenShape Triplets (3D point clouds):
conda create -n vit-lens python=3.8.8 -y
conda activate vit-lens
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch -y
conda install -c dglteam/label/cu113 dgl -y

# Install ViT-Lens
git clone https://github.com/TencentARC/ViT-Lens.git
cd ViT-Lens/
pip install -e vitlens/
pip install -r vitlens/requirements-training.txt
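
After either environment is set up, an optional sanity check (not part of the repository) confirms that PyTorch sees a GPU and that the installed packages import cleanly:

# Optional post-install sanity check (illustrative, not part of the repo).
import torch
print(torch.__version__)                 # should match the version installed above
print(torch.cuda.is_available())         # True if the CUDA build and driver line up

from open_clip import ModalityType       # packages provided by `pip install -e vitlens/`
from mm_vit_lens import ViTLens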

🔍 ViT-Lens Model

| Model | MN40 | SUN.D | NYU.D | Audioset | VGGSound | ESC50 | Clotho | AudioCaps | TAG.M | IN.EEG | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ImageBind (Huge) | - | 35.1 | 54.0 | 17.6 | 27.8 | 66.9 | 6.0/28.4 | 9.3/42.3 | - | - | - |
| ViT-Lens-L | 80.6 | 52.2 | 68.5 | 26.7 | 31.7 | 75.9 | 8.1/31.2 | 14.4/54.9 | 65.8 | 42.7 | vitlensL |

We release a one-stop ViT-Lens-L model (based on a Large ViT) and report its performance on ModelNet40 (MN40, top-1 accuracy), SUN RGB-D depth-only (SUN.D, top-1 accuracy), NYUv2 depth-only (NYU.D, top-1 accuracy), Audioset (mAP), VGGSound (top-1 accuracy), ESC50 (top-1 accuracy), Clotho (R@1/R@10), AudioCaps (R@1/R@10), Touch-and-Go Material (TAG.M, top-1 accuracy) and ImageNet EEG (IN.EEG, top-1 accuracy). ViT-Lens consistently outperforms ImageBind.
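
For reference, these metrics follow the standard zero-shot recipe: encode the inputs and the candidate class or caption texts, take cosine similarities between the normalized embeddings, then read off top-1 accuracy (classification) or R@k (retrieval). A minimal sketch of those two computations (helper names are illustrative, not from this repository):

# Illustrative metric helpers for the numbers reported above (not part of the repo).
import torch

def top1_accuracy(sim, labels):
    """sim: (num_samples, num_classes) similarities; labels: (num_samples,) class ids."""
    return (sim.argmax(dim=-1) == labels).float().mean().item()

def recall_at_k(sim, k):
    """sim: (num_queries, num_targets), where target i is the ground truth for query i."""
    topk = sim.topk(k, dim=-1).indices                        # (num_queries, k)
    gt = torch.arange(sim.size(0), device=sim.device).unsqueeze(-1)
    return (topk == gt).any(dim=-1).float().mean().item()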

For more model checkpoints (trained on different data or with better performance), please refer to MODEL_ZOO.md.

📚 Usage

  • You may set the paths for your own project in constants.py.
  • We provide an API (source file) and an example (here) for reference. You can use ViT-Lens to extract and compare features across modalities:
    import os
    import torch
    
    from open_clip import ModalityType
    from mm_vit_lens import ViTLens
    
    here = os.path.abspath(os.path.dirname(__file__))
    
    model = ViTLens(modality_loaded=[ModalityType.IMAGE, ModalityType.AUDIO, ModalityType.TEXT, ModalityType.PC])
    
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    
    # Example 1
    images = [
        os.path.join(here, "assets/example/image_bird.jpg"),
        os.path.join(here, "assets/example/image_fire.jpg"),
        os.path.join(here, "assets/example/image_dog.jpg"),
        os.path.join(here, "assets/example/image_beach.jpg"),
    ]
    audios = [
        os.path.join(here, "assets/example/audio_chirping_birds.flac"),
        os.path.join(here, "assets/example/audio_crackling_fire.flac"),
        os.path.join(here, "assets/example/audio_dog.flac"),
        os.path.join(here, "assets/example/audio_sea_wave.flac"),
    ]
    texts = [
        "a bird",
        "crackling fire",
        "a dog",
        "sea wave",
    ]
    inputs_1 = {
        ModalityType.IMAGE: images,
        ModalityType.AUDIO: audios,
        ModalityType.TEXT: texts,
    }
    
    with torch.no_grad(), torch.cuda.amp.autocast():
        outputs_1 = model.encode(inputs_1, normalize=True)
    
    sim_at = torch.softmax(100 * outputs_1[ModalityType.AUDIO] @ outputs_1[ModalityType.TEXT].T, dim=-1)
    print(
        "Audio x Text:\n",
        sim_at
    )
    # Expected output
    # Audio x Text:
    #  tensor([[9.9998e-01, 9.3977e-07, 2.1545e-05, 9.3642e-08],
    #         [3.8017e-09, 1.0000e+00, 3.1551e-09, 6.9498e-10],
    #         [9.4895e-03, 1.3270e-06, 9.9051e-01, 2.5545e-07],
    #         [9.7020e-06, 6.4767e-07, 2.8860e-06, 9.9999e-01]], device='cuda:0')
    
    sim_ai = torch.softmax(100 * outputs_1[ModalityType.AUDIO] @ outputs_1[ModalityType.IMAGE].T, dim=-1)
    print(
        "Audio x Image:\n",
        sim_ai
    )
    # Expected output
    # Audio x Image:
    #  tensor([[1.0000e+00, 1.5798e-06, 2.0614e-06, 1.6502e-07],
    #         [2.3712e-09, 1.0000e+00, 1.4446e-10, 1.2260e-10],
    #         [4.9333e-03, 1.2942e-02, 9.8212e-01, 1.8582e-06],
    #         [6.8347e-04, 1.0547e-02, 1.3476e-05, 9.8876e-01]], device='cuda:0')
    
    
    # Example 2
    pcs = [
        os.path.join(here, "assets/example/pc_car_0260.npy"),
        os.path.join(here, "assets/example/pc_guitar_0243.npy"),
        os.path.join(here, "assets/example/pc_monitor_0503.npy"),
        os.path.join(here, "assets/example/pc_person_0102.npy"),
        os.path.join(here, "assets/example/pc_piano_0286.npy"),
    ]
    text_pcs = ["a car", "a guitar", "a monitor", "a person", "a piano"]
    inputs_2 = {
        ModalityType.PC: pcs,
        ModalityType.TEXT: text_pcs,
    }
    with torch.no_grad(), torch.cuda.amp.autocast():
        outputs_2 = model.encode(inputs_2, normalize=True)
    sim_pc_t = torch.softmax(100 * outputs_2[ModalityType.PC] @ outputs_2[ModalityType.TEXT].T, dim=-1)
    print(
        "PointCloud x Text:\n",
        sim_pc_t
    )
    # Expected output:
    # PointCloud x Text:
    #  tensor([[9.9945e-01, 1.0483e-05, 1.4904e-04, 2.3988e-05, 3.7041e-04],
    #         [1.2574e-09, 1.0000e+00, 6.8450e-09, 2.6463e-08, 3.3659e-07],
    #         [6.2730e-09, 1.9918e-06, 9.9999e-01, 6.7161e-06, 4.9279e-06],
    #         [1.8846e-06, 7.4831e-06, 4.4594e-06, 9.9998e-01, 7.9092e-06],
    #         [1.2218e-08, 1.5571e-06, 1.8991e-07, 1.7521e-08, 1.0000e+00]],
    #        device='cuda:0')
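
The softmax matrices above are only printed for inspection; to turn the Example 1 embeddings into actual zero-shot predictions, you can take the argmax over the text axis of the raw cosine similarities. A short follow-up sketch, assuming the example above has already run:

    # Zero-shot prediction from the Example 1 embeddings (illustrative follow-up).
    sim = outputs_1[ModalityType.AUDIO] @ outputs_1[ModalityType.TEXT].T  # cosine sims (embeddings were normalized)
    pred_idx = sim.argmax(dim=-1)                                         # best-matching text for each audio clip
    for audio_path, idx in zip(audios, pred_idx.tolist()):
        print(f"{os.path.basename(audio_path)} -> {texts[idx]}")
    # Each clip should map to its matching caption ("a bird", "crackling fire", "a dog", "sea wave").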

📦 Datasets

Please refer to DATASETS.md for dataset preparation.

🚀 Training & Inference

Please refer to TRAIN_INFERENCE.md for details.

🧩 Model Zoo

Please refer to MODEL_ZOO.md for details.

👀 Visualization of Demo

[Demo: Plug ViT-Lens into SEED (video demo)]
[Demo: Plug ViT-Lens into SEED, enabling compound Any-to-Image generation]
[Demo: Plug ViT-Lens into InstructBLIP (video demo)]
[Demo: Plug ViT-Lens into InstructBLIP, enabling Any-instruction following]
[Demo: Plug ViT-Lens into InstructBLIP, enabling Any-instruction following (additional example)]
[Example: Plug a 3D lens into an LLM (plant)]
[Example: Plug a 3D lens into an LLM (piano)]

🎓 Citation

If you find our work helpful, please give us a star 🌟 and consider citing:

@article{lei2023vitlens,
  title={Vit-lens: Towards omni-modal representations},
  author={Lei, Weixian and Ge, Yixiao and Zhang, Jianfeng and Sun, Dylan and Yi, Kun and Shan, Ying and Shou, Mike Zheng},
  journal={arXiv preprint arXiv:2308.10185},
  year={2023}
}
@article{lei2023vitlens-2,
  title={ViT-Lens-2: Gateway to Omni-modal Intelligence},
  author={Lei, Weixian and Ge, Yixiao and Yi, Kun and Zhang, Jianfeng and Gao, Difei and Sun, Dylan and Ge, Yuying and Shan, Ying and Shou, Mike Zheng},
  journal={arXiv preprint arXiv:2311.16081},
  year={2023}
}

✉️ Contact

Questions and discussions are welcome via [email protected] or by opening an issue.

🙏 Acknowledgement

This codebase is based on open_clip, ULIP, OpenShape and LAVIS. Big thanks to the authors for their awesome contributions!
