LAVIS - A One-stop Library for Language-Vision Intelligence




What's New: 🎉

  • [Model Release] May 2023, released implementation of InstructBLIP
    Paper, Project Page

A new vision-language instruction-tuning framework using BLIP-2 models, achieving state-of-the-art zero-shot generalization performance on a wide range of vision-language tasks.

  • [Model Release] Jan 2023, released implementation of BLIP-2

A generic and efficient pre-training strategy that readily harvests the progress of pretrained vision models and large language models (LLMs) for vision-language pretraining. BLIP-2 beats Flamingo on zero-shot VQAv2 (65.0 vs 56.3) and establishes a new state of the art on zero-shot captioning (121.6 CIDEr on NoCaps vs the previous best of 113.2). In addition, equipped with powerful LLMs (e.g. OPT, FlanT5), BLIP-2 unlocks new zero-shot instructed vision-to-language generation capabilities for various interesting applications (see the usage sketch after this list).

  • Jan 2023, LAVIS is now available on PyPI for installation!
  • [Model Release] Dec 2022, released implementation of Img2LLM-VQA (CVPR 2023, "From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models", by Jiaxian Guo et al)
    Paper, Project Page, Open In Colab

A plug-and-play module that enables off-the-shelf use of Large Language Models (LLMs) for visual question answering (VQA). Img2LLM-VQA surpasses Flamingo on zero-shot VQA on VQAv2 (61.9 vs 56.3), while requiring no end-to-end training!

  • [Model Release] Oct 2022, released implementation of PNP-VQA (EMNLP Findings 2022, "Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training", by Anthony T.M.H. et al)
    Paper, Project Page, Open In Colab

A modular zero-shot VQA framework that requires no training of the pretrained language models (PLMs), achieving state-of-the-art zero-shot VQA performance.
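
To give a quick taste of the BLIP-2 instructed generation mentioned above, here is a minimal sketch using the load_model_and_preprocess() interface introduced in Getting Started below; the "blip2_opt" / "pretrain_opt2.7b" names reflect the model zoo at the time of writing and may differ in your installed version.

import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")
# load BLIP-2 paired with a frozen OPT-2.7B language model (names assumed; check model_zoo)
model, vis_processors, _ = load_model_and_preprocess(name="blip2_opt", model_type="pretrain_opt2.7b", is_eval=True, device=device)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
# steer generation with a natural-language instruction
model.generate({"image": image, "prompt": "Question: which city is this? Answer:"})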


Introduction

LAVIS is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and benchmark them across standard and customized datasets. It features a unified interface design to access

  • 10+ tasks (retrieval, captioning, visual question answering, multimodal classification, etc.);
  • 20+ datasets (COCO, Flickr, NoCaps, Conceptual Captions, SBU, etc.);
  • 30+ pretrained weights of state-of-the-art foundation language-vision models and their task-specific adaptations, including ALBEF, BLIP, ALPRO, and CLIP.



Key features of LAVIS include:

  • Unified and Modular Interface: makes it easy to leverage and repurpose existing modules (datasets, models, preprocessors), and to add new ones.

  • Easy Off-the-shelf Inference and Feature Extraction: readily available pre-trained models let you take advantage of state-of-the-art multimodal understanding and generation capabilities on your own data.

  • Reproducible Model Zoo and Training Recipes: easily replicate and extend state-of-the-art models on existing and new tasks.

  • Dataset Zoo and Automatic Downloading Tools: it can be a hassle to prepare the many language-vision datasets. LAVIS provides automatic downloading scripts to help prepare a large variety of datasets and their annotations.

The following table shows the supported tasks, datasets and models in our library. This is a continuing effort and we are working on further growing the list.

Tasks                                     | Supported Models          | Supported Datasets
Image-text Pre-training                   | ALBEF, BLIP               | COCO, VisualGenome, SBU, ConceptualCaptions
Image-text Retrieval                      | ALBEF, BLIP, CLIP         | COCO, Flickr30k
Text-image Retrieval                      | ALBEF, BLIP, CLIP         | COCO, Flickr30k
Visual Question Answering                 | ALBEF, BLIP               | VQAv2, OKVQA, A-OKVQA
Image Captioning                          | BLIP                      | COCO, NoCaps
Image Classification                     | CLIP                      | ImageNet
Natural Language Visual Reasoning (NLVR)  | ALBEF, BLIP               | NLVR2
Visual Entailment (VE)                    | ALBEF                     | SNLI-VE
Visual Dialogue                           | BLIP                      | VisDial
Video-text Retrieval                      | BLIP, ALPRO               | MSRVTT, DiDeMo
Text-video Retrieval                      | BLIP, ALPRO               | MSRVTT, DiDeMo
Video Question Answering (VideoQA)        | BLIP, ALPRO               | MSRVTT, MSVD
Video Dialogue                            | VGD-GPT                   | AVSD
Multimodal Feature Extraction             | ALBEF, CLIP, BLIP, ALPRO  | customized
Text-to-image Generation                  | [COMING SOON]             |

Installation

  1. (Optional) Create a conda environment:
conda create -n lavis python=3.8
conda activate lavis
  2. Install from PyPI:
pip install salesforce-lavis
  3. Or, for development, build from source:
git clone https://github.com/salesforce/LAVIS.git
cd LAVIS
pip install -e .
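
To verify the installation, import the library and print the model zoo (described in the next section):

python -c "from lavis.models import model_zoo; print(model_zoo)"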

Getting Started

Model Zoo

The model zoo summarizes the models supported in LAVIS. To view it:

from lavis.models import model_zoo
print(model_zoo)
# ==================================================
# Architectures                  Types
# ==================================================
# albef_classification           ve
# albef_feature_extractor        base
# albef_nlvr                     nlvr
# albef_pretrain                 base
# albef_retrieval                coco, flickr
# albef_vqa                      vqav2
# alpro_qa                       msrvtt, msvd
# alpro_retrieval                msrvtt, didemo
# blip_caption                   base_coco, large_coco
# blip_classification            base
# blip_feature_extractor         base
# blip_nlvr                      nlvr
# blip_pretrain                  base
# blip_retrieval                 coco, flickr
# blip_vqa                       vqav2, okvqa, aokvqa
# clip_feature_extractor         ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50
# clip                           ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50
# gpt_dialogue                   base

Let's see how to use models in LAVIS to perform inference on example data. We first load a sample image from a local path.

import torch
from PIL import Image
# setup device to use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# load sample image
raw_image = Image.open("docs/_static/merlion.png").convert("RGB")

This example image shows Merlion park (source), a landmark in Singapore.

Image Captioning

In this example, we use the BLIP model to generate a caption for the image. To make inference even easier, we also associate each pre-trained model with its preprocessors (transforms), accessed via load_model_and_preprocess().

import torch
from lavis.models import load_model_and_preprocess
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# loads BLIP caption base model, with finetuned checkpoints on MSCOCO captioning dataset.
# this also loads the associated image processors
model, vis_processors, _ = load_model_and_preprocess(name="blip_caption", model_type="base_coco", is_eval=True, device=device)
# preprocess the image
# vis_processors stores image transforms for "train" and "eval" (validation / testing / inference)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
# generate caption
model.generate({"image": image})
# ['a large fountain spewing water into the air']
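
Beam search returns a single deterministic caption. The BLIP captioning model's generate() also accepts sampling options; the keyword names below (use_nucleus_sampling, num_captions) are from the model's generate signature at the time of writing, so verify them against your installed version.

# sample multiple diverse captions with nucleus sampling (output is non-deterministic)
model.generate({"image": image}, use_nucleus_sampling=True, num_captions=3)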

Visual question answering (VQA)

The BLIP model can answer free-form questions about images in natural language. To access the VQA model, simply replace the name and model_type arguments passed to load_model_and_preprocess().

from lavis.models import load_model_and_preprocess
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip_vqa", model_type="vqav2", is_eval=True, device=device)
# ask a random question.
question = "Which city is this photo taken?"
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
question = txt_processors["eval"](question)
model.predict_answers(samples={"image": image, "text_input": question}, inference_method="generate")
# ['singapore']
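
Besides free-form generation, predict_answers() can also rank a closed set of candidates. A minimal sketch, assuming the "rank" inference method and answer_list keyword exposed by the BLIP VQA model; the candidate list below is illustrative.

# rank a fixed set of candidate answers instead of generating free-form text
candidates = ["singapore", "london", "new york"]
model.predict_answers(samples={"image": image, "text_input": question}, inference_method="rank", answer_list=candidates, num_ans_candidates=3)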

Unified Feature Extraction Interface

LAVIS provides a unified interface to extract features from each architecture. To extract features, we load the feature extractor variants of each model. The multimodal feature can be used for multimodal classification. The low-dimensional unimodal features can be used to compute cross-modal similarity.

from lavis.models import load_model_and_preprocess
model, vis_processors, txt_processors = load_model_and_preprocess(name="blip_feature_extractor", model_type="base", is_eval=True, device=device)
caption = "a large fountain spewing water into the air"
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
text_input = txt_processors["eval"](caption)
sample = {"image": image, "text_input": [text_input]}

features_multimodal = model.extract_features(sample)
print(features_multimodal.multimodal_embeds.shape)
# torch.Size([1, 12, 768]), use features_multimodal[:,0,:] for multimodal classification tasks

features_image = model.extract_features(sample, mode="image")
features_text = model.extract_features(sample, mode="text")
print(features_image.image_embeds.shape)
# torch.Size([1, 197, 768])
print(features_text.text_embeds.shape)
# torch.Size([1, 12, 768])

# low-dimensional projected features
print(features_image.image_embeds_proj.shape)
# torch.Size([1, 197, 256])
print(features_text.text_embeds_proj.shape)
# torch.Size([1, 12, 256])
similarity = features_image.image_embeds_proj[:,0,:] @ features_text.text_embeds_proj[:,0,:].t()
print(similarity)
# tensor([[0.2622]])
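
The same projected features can score several candidate texts against one image in a single pass; a brief sketch (the BLIP text branch accepts a batch of captions, so the matmul below yields one similarity per caption):

# compare one image against multiple candidate captions
captions = ["a large fountain spewing water into the air", "a cat sleeping on a sofa"]
multi_sample = {"image": image, "text_input": [txt_processors["eval"](c) for c in captions]}
features_text = model.extract_features(multi_sample, mode="text")
similarities = features_image.image_embeds_proj[:, 0, :] @ features_text.text_embeds_proj[:, 0, :].t()
print(similarities)  # shape [1, 2]; a higher score means a better match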

Load Datasets

LAVIS supports a wide variety of common language-vision datasets and provides automatic tools to download and organize them. After downloading, load a dataset with the following code:

from lavis.datasets.builders import dataset_zoo
dataset_names = dataset_zoo.get_names()
print(dataset_names)
# ['aok_vqa', 'coco_caption', 'coco_retrieval', 'coco_vqa', 'conceptual_caption_12m',
#  'conceptual_caption_3m', 'didemo_retrieval', 'flickr30k', 'imagenet', 'laion2B_multi',
#  'msrvtt_caption', 'msrvtt_qa', 'msrvtt_retrieval', 'msvd_caption', 'msvd_qa', 'nlvr',
#  'nocaps', 'ok_vqa', 'sbu_caption', 'snli_ve', 'vatex_caption', 'vg_caption', 'vg_vqa']
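
The download helpers live under lavis/datasets/download_scripts in the repository (the exact path may change between versions); for example, to fetch the COCO images:

python lavis/datasets/download_scripts/download_coco.py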

After downloading the images, we can use load_dataset() to obtain the dataset.

from lavis.datasets.builders import load_dataset
coco_dataset = load_dataset("coco_caption")
print(coco_dataset.keys())
# dict_keys(['train', 'val', 'test'])
print(len(coco_dataset["train"]))
# 566747
print(coco_dataset["train"][0])
# {'image': <PIL.Image.Image image mode=RGB size=640x480>,
#  'text_input': 'A woman wearing a net on her head cutting a cake. ',
#  'image_id': 0}

If you already host a local copy of the dataset, you can pass in the vis_path argument to change the default location to load images.

coco_dataset = load_dataset("coco_caption", vis_path=YOUR_LOCAL_PATH)
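
Each split behaves like a standard PyTorch dataset, so it can be wrapped in a DataLoader. A generic sketch: since images are returned as raw PIL objects here, a small custom collate_fn keeps them in a list (apply the vis_processors transforms first if you need batched tensors).

from torch.utils.data import DataLoader

def collate(batch):
    # keep PIL images in a list; gather the caption strings
    return {"image": [b["image"] for b in batch], "text_input": [b["text_input"] for b in batch]}

loader = DataLoader(coco_dataset["train"], batch_size=8, shuffle=True, collate_fn=collate)
batch = next(iter(loader))  # batch["image"] is a list of 8 PIL images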

Jupyter Notebook Examples

See examples for more inference examples, e.g. captioning, feature extraction, VQA, GradCam, zero-shot classification.

Resources and Tools

  • Benchmarks: see Benchmark for instructions to evaluate and train supported models.
  • Dataset Download and Browsing: see Dataset Download for instructions and automatic tools to download common language-vision datasets.
  • GUI Demo: to run the demo locally, run bash run_scripts/run_demo.sh and then follow the instructions in the prompt to view it in a browser. A web demo is coming soon.

Documentation

For more details and advanced usage, please refer to the documentation.

Ethical and Responsible Use

We note that models in LAVIS provide no guarantees on their multimodal abilities; incorrect or biased predictions may be observed. In particular, the datasets and pretrained models utilized in LAVIS may contain socioeconomic biases which could result in misclassification and other unwanted behaviors such as offensive or inappropriate speech. We strongly recommend that users review the pre-trained models and overall system in LAVIS before practical adoption. We plan to improve the library by investigating and mitigating these potential biases and inappropriate behaviors in the future.

Technical Report and Citing LAVIS

You can find more details in our technical report.

If you're using LAVIS in your research or applications, please cite using this BibTeX:

@inproceedings{li-etal-2023-lavis,
    title = "{LAVIS}: A One-stop Library for Language-Vision Intelligence",
    author = "Li, Dongxu  and
      Li, Junnan  and
      Le, Hung  and
      Wang, Guangsen  and
      Savarese, Silvio  and
      Hoi, Steven C.H.",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-demo.3",
    pages = "31--41",
    abstract = "We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that brings recent advancements in the language-vision field accessible for researchers and practitioners, as well as fertilizing future research and development. It features a unified interface to easily access state-of-the-art image-language, video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. In the meantime, the library is also highly extensible and configurable, facilitating future development and customization. In this technical report, we describe design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks.",
}

Contact us

If you have any questions, comments or suggestions, please do not hesitate to contact us at [email protected].

License

BSD 3-Clause License
