
MMT-Retrieval: Image Retrieval and more using Multimodal Transformers (OSCAR, UNITER, M3P & Co)

This project provides an easy way to use recent pre-trained multimodal Transformers like OSCAR, UNITER/VILLA, or M3P (multilingual!) for image search and more.

The code is written primarily for image-text retrieval. Still, many other Vision+Language tasks should work out of the box with our code or require only small changes.

There is currently no unified approach to handling the visual input, and each model uses its own slightly different approach. We provide a common interface for all models and support multiple feature file formats. This greatly simplifies the process of running the models.

Our project allows you to run a model in a few lines of code and offers easy fine-tuning of your own custom models.

We also provide our fine-tuned image-text retrieval models for download, so you can get started directly. Check out our example for Image Search on MSCOCO using our fine-tuned models here.

Citing & Authors

If you find this repository helpful, feel free to cite our publication Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval:

@article{geigle:2021:arxiv,
  author    = {Gregor Geigle and 
                Jonas Pfeiffer and 
                Nils Reimers and 
                Ivan Vuli\'{c} and 
                Iryna Gurevych},
  title     = {Retrieve Fast, Rerank Smart: Cooperative and Joint Approaches for Improved Cross-Modal Retrieval},
  journal   = {arXiv preprint},
  volume    = {abs/2103.11920},
  year      = {2021},
  url       = {http://arxiv.org/abs/2103.11920},
  archivePrefix = {arXiv},
  eprint    = {2103.11920}
}

Abstract: Current state-of-the-art approaches to cross-modal retrieval process text and visual input jointly, relying on Transformer-based architectures with cross-attention mechanisms that attend over all words and objects in an image. While offering unmatched retrieval performance, such models: 1) are typically pretrained from scratch and thus less scalable, 2) suffer from huge retrieval latency and inefficiency issues, which makes them impractical in realistic applications. To address these crucial gaps towards both improved and efficient cross-modal retrieval, we propose a novel fine-tuning framework which turns any pretrained text-image multi-modal model into an efficient retrieval model. The framework is based on a cooperative retrieve-and-rerank approach which combines: 1) twin networks to separately encode all items of a corpus, enabling efficient initial retrieval, and 2) a cross-encoder component for a more nuanced (i.e., smarter) ranking of the retrieved small set of items. We also propose to jointly fine-tune the two components with shared weights, yielding a more parameter-efficient model. Our experiments on a series of standard cross-modal retrieval benchmarks in monolingual, multilingual, and zero-shot setups, demonstrate improved accuracy and huge efficiency benefits over the state-of-the-art cross-encoders.
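The retrieve-and-rerank idea from the abstract can be sketched in a few lines. This is a hypothetical illustration, not the repository's actual code: the "encoders" are random projections standing in for the real twin networks and cross-encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(items, dim=8):
    """Stand-in twin-network encoder: maps items to unit vectors."""
    vecs = rng.normal(size=(len(items), dim))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def cross_score(query_vec, item_vec):
    """Stand-in cross-encoder: any (slower, smarter) pairwise scorer."""
    return float(np.dot(query_vec, item_vec))

corpus = [f"image_{i}" for i in range(100)]
corpus_emb = embed(corpus)        # encoded once, offline
query_emb = embed(["a query"])[0]

# Step 1: efficient initial retrieval via dot products over the whole corpus
k = 10
scores = corpus_emb @ query_emb
top_k = np.argsort(-scores)[:k]

# Step 2: expensive re-ranking restricted to the k retrieved candidates
reranked = sorted(top_k, key=lambda i: -cross_score(query_emb, corpus_emb[i]))
```

The cross-encoder only ever scores `k` pairs instead of the full corpus, which is where the efficiency gain comes from.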

Don't hesitate to send me an e-mail or report an issue if something is broken or if you have further questions or feedback.

Contact person: Gregor Geigle, [email protected]

https://www.ukp.tu-darmstadt.de/

https://www.tu-darmstadt.de/

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

Installation

We recommend Python 3.6 or higher, PyTorch 1.6.0 or higher, transformers v4.1.1 or higher, and sentence-transformers between 0.4.1 and 1.2.1.

Install with pip

Install mmt-retrieval with pip:

pip install mmt-retrieval

Install from sources

Alternatively, you can clone the latest version from the repository and install it directly from the source code:

pip install -e .

PyTorch with CUDA: If you want to use a GPU/CUDA, you must install PyTorch with a matching CUDA version. Follow PyTorch - Get Started for details on how to install PyTorch.

Getting Started

With our repository, you can get started using the multimodal Transformers in a few lines of code. Check out our example for Image Search on MSCOCO using our fine-tuned models here, or follow the steps below to get started with your own project.

Select the Model

We provide our fine-tuned image-text retrieval models for download. We also link to downloads of the pre-trained models and of models fine-tuned for other tasks.

Alternatively, you can fine-tune your own model, too. See here for more.

Our Fine-Tuned Image-Text Retrieval Models

We publish our jointly trained fine-tuned models. They can be used both to encode images and text in a multimodal embedding space and to cross-encode pairs for a pairwise similarity.

Model URL
OSCAR (Flickr30k) https://public.ukp.informatik.tu-darmstadt.de/reimers/mmt-retrieval/models/v1/oscar_join_flickr30k.zip
OSCAR (MSCOCO) https://public.ukp.informatik.tu-darmstadt.de/reimers/mmt-retrieval/models/v1/oscar_join_mscoco.zip
M3P (Multi30k - en, de, fr, cs) https://public.ukp.informatik.tu-darmstadt.de/reimers/mmt-retrieval/models/v1/m3p_join_multi30k.zip

Other Pre-Trained or Fine-Tuned Transformers

We currently do not directly support downloading the different pre-trained Transformer models. Please download them manually using the links in the respective repositories: OSCAR, UNITER/VILLA, M3P. Below we present examples of how to initialize your own models with the pre-trained Transformers.

OSCAR provides many already fine-tuned models for different tasks for download (see their MODEL_ZOO.md). We provide the ability to convert those models to our framework so you can quickly start using them.

from mmt_retrieval.util import convert_finetuned_oscar

downloaded_folder_path = ".../oscar-base-ir-finetune/checkpoint-29-132780"
converted_model = convert_finetuned_oscar(downloaded_folder_path)
converted_model.save("new_save_location_for_converted_model")

Step 0: Image Feature Pre-Processing

All currently supported models require a pre-processing step in which regions of interest (the image input, analogous to tokens for the language input) are extracted from the images using a Faster R-CNN object detection model.

Which detection model is needed depends on the model you are using. Check out our guide, where we have gathered all the information needed to get started.

If available, we also point to already pre-processed image features that can be downloaded for a quicker start.

Loading Features and Image Input

We load image features into a dictionary-like object (model.image_dict) at the start. We support several storage formats for the features (see the guide above). Each image is uniquely identified by its image id in this dictionary.

The advantage of the dictionary approach is that we can refer to an image by its id, which is then resolved internally to the features.
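Conceptually, the dictionary maps an image id to its pre-extracted region features. The following is a minimal, self-contained sketch of that idea (hypothetical shapes and names, not the actual model.image_dict implementation):

```python
import numpy as np

# Hypothetical store: image id -> region features
# (e.g. 36 detected regions, each a 2048-dim Faster R-CNN feature vector)
image_dict = {
    "0": np.zeros((36, 2048), dtype=np.float32),
    "1": np.ones((36, 2048), dtype=np.float32),
}

def resolve(image_ids):
    """Resolve a list of image ids to their feature arrays."""
    return [image_dict[i] for i in image_ids]

features = resolve(["0", "1"])
```

The model can then accept plain id strings as image input and look up the heavy feature tensors only when it actually needs them.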

Loading Features Just-In-Time (RAM Constraints)

The image features require a lot of additional memory. For this reason, we support just-in-time loading of the features from disk. This requires one feature file per image. Many of the downloadable features are stored in a single file; we provide code to split those large files into separate files, one per image.

from mmt_retrieval.util import split_oscar_image_feature_file_to_npz, split_tsv_features_to_npz
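What these split helpers do can be illustrated with plain numpy (a hypothetical sketch of the output layout, not the helpers' actual implementation): each image's features end up in their own .npz file, named by image id, so they can be loaded just-in-time.

```python
import os
import tempfile

import numpy as np

# Hypothetical "big file" content: all images' features in one dictionary
big_file = {"0": np.zeros((36, 2048)), "1": np.ones((10, 2048))}

out_dir = tempfile.mkdtemp()
for image_id, feats in big_file.items():
    # One .npz per image; the file name doubles as the image id
    np.savez(os.path.join(out_dir, f"{image_id}.npz"), features=feats)
```

After splitting, model.image_dict.load_file_names can point at the folder and defer reading each file until the image is actually requested.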

Step 1: Getting Started

The following is an example showcasing all steps needed to get started encoding multimodal inputs with our code.

from mmt_retrieval import MultimodalTransformer

# Loading a jointly trained model that can both embed and cross-encode multimodal input
model_path = "https://public.ukp.informatik.tu-darmstadt.de/reimers/mmt-retrieval/models/v1/oscar_join_flickr30k.zip"
model = MultimodalTransformer(model_name_or_path=model_path)

# Image ids are the unique identifier number (as string) of each image. If you save the image features separately for each image, this would be the file name
image_ids = ["0", "1", "5"]
# We must load the image features in some way before we can use the model
# Refer to Step 0 on more details for how to generate the features
feature_folder = "path/to/processed/features"
# Directly load the features from disk. Requires more memory.
# Increase max_workers for more concurrent threads for faster loading with many features
# Remove select to load the entire folder
model.image_dict.load_features_folder(feature_folder, max_workers=1, select=image_ids)
## OR
# Only load the file paths so that features are loaded later just-in-time when they are required.
# Recommended with restricted memory and/or a lot of images
# Remove select to load the entire folder
model.image_dict.load_file_names(feature_folder, select=image_ids)

sentences = ["The red brown fox jumped over the fence", "A dog being good"]

# Get Embeddings (as a list of numpy arrays)
sentence_embeddings = model.encode(sentences=sentences, convert_to_numpy=True) # convert_to_numpy=True is default
image_embeddings = model.encode(images=image_ids, convert_to_numpy=True)

# Get Pairwise Similarity Matrix (as a tensor)
similarities = model.encode(sentences=sentences, images=image_ids, output_value="logits", convert_to_tensor=True, cross_product_input=True)
similarities = similarities[:,-1].reshape(len(image_ids), len(sentences))
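Once you have the (num_images, num_sentences) similarity matrix, turning it into a retrieval result is a plain argsort. A minimal sketch with made-up scores (the matrix shape matches the reshape above; the values are illustrative only):

```python
import numpy as np

# Hypothetical similarity matrix: 3 images x 2 sentences
similarities = np.array([[0.1, 0.9],
                         [0.8, 0.2],
                         [0.3, 0.7]])

# For each sentence (column), rank image indices from best to worst
ranking_per_sentence = np.argsort(-similarities, axis=0)
best_image_for_sentence_0 = ranking_per_sentence[0, 0]
```

Here sentence 0 scores highest against image 1, so image 1 comes first in its ranking.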

Experiments and Training

See our examples to learn how to fine-tune and evaluate the multimodal Transformers. We provide instructions for fine-tuning your own models with our image-text retrieval setup, show how to replicate our experiments, and give pointers on how to train your own models, potentially beyond image-text retrieval.

Expected Results with our Fine-Tuned Models

We report the JOIN+CO (i.e., retrieve & re-rank with a jointly trained model) results of our published models. Refer to our publication for more detailed results.

Image Retrieval for MSCOCO/ Flickr30k:

Model                 Dataset                R@1   R@5   R@10
oscar-join-mscoco     MSCOCO (5k images)     54.7  81.3  88.9
oscar-join-flickr30k  Flickr30k (1k images)  76.4  93.6  96.2

Multilingual Image Retrieval for Multi30k (in mR):

Model              en    de    fr    cs
m3p-join-multi30k  83.0  79.2  75.9  74.0
