• Stars: 501 (rank 85,060, top 2%)
• Language: Python
• License: BSD 3-Clause "New" or "Revised" License
• Created: over 4 years ago
• Last updated: over 1 year ago


Repository Details

Meshed-Memory Transformer for Image Captioning. CVPR 2020

M²: Meshed-Memory Transformer

This repository contains the reference code for the paper Meshed-Memory Transformer for Image Captioning (CVPR 2020).

Please cite with the following BibTeX:

@inproceedings{cornia2020m2,
  title={{Meshed-Memory Transformer for Image Captioning}},
  author={Cornia, Marcella and Stefanini, Matteo and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

[Figure: Meshed-Memory Transformer]

Environment setup

Clone the repository and create the m2release conda environment using the environment.yml file:

conda env create -f environment.yml
conda activate m2release

Then download the spaCy language data by executing the following command:

python -m spacy download en

Note: Python 3.6 is required to run our code.
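To check that the environment and the language data were installed correctly, a quick sanity check (a minimal sketch using the legacy "en" shortcut installed by the command above) is:

python -c "import spacy; spacy.load('en')"

If this exits without errors, the spaCy data is available.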

Data preparation

To run the code, annotations and detection features for the COCO dataset are needed. Please download the annotations file annotations.zip and extract it.

Detection features are computed with the code provided by [1]. To reproduce our result, please download the COCO features file coco_detections.hdf5 (~53.5 GB), in which detections of each image are stored under the <image_id>_features key. <image_id> is the id of each COCO image, without leading zeros (e.g. the <image_id> for COCO_val2014_000000037209.jpg is 37209), and each value should be a (N, 2048) tensor, where N is the number of detections.
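As a quick sanity check on the downloaded features, the file can be inspected with h5py (a minimal sketch; the file path and image id below are only examples taken from the description above):

import h5py

# Open the precomputed detection features; each image's detections are stored
# under the "<image_id>_features" key as an (N, 2048) array.
with h5py.File('coco_detections.hdf5', 'r') as f:
    image_id = 37209  # e.g. COCO_val2014_000000037209.jpg -> 37209
    features = f['%d_features' % image_id][()]
    print(features.shape)  # (N, 2048), where N is the number of detections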

Evaluation

To reproduce the results reported in our paper, download the pretrained model file meshed_memory_transformer.pth and place it in the code folder.

Run python test.py using the following arguments:

Argument             Possible values
--batch_size         Batch size (default: 10)
--workers            Number of workers (default: 0)
--features_path      Path to detection features file
--annotation_folder  Path to folder with COCO annotations
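For example, a possible invocation (the paths below are placeholders for your local copies of the features file and the annotation folder) is:

python test.py --batch_size 10 --features_path /path/to/coco_detections.hdf5 --annotation_folder /path/to/annotations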

Expected output

Under output_logs/, you may also find the expected output of the evaluation code.

Training procedure

Run python train.py using the following arguments:

Argument             Possible values
--exp_name           Experiment name
--batch_size         Batch size (default: 10)
--workers            Number of workers (default: 0)
--m                  Number of memory vectors (default: 40)
--head               Number of heads (default: 8)
--warmup             Warmup value for learning rate scheduling (default: 10000)
--resume_last        If used, training is resumed from the last checkpoint.
--resume_best        If used, training is resumed from the best checkpoint.
--features_path      Path to detection features file
--annotation_folder  Path to folder with COCO annotations
--logs_folder        Path to folder for TensorBoard logs (default: "tensorboard_logs")

For example, to train our model with the parameters used in our experiments, use

python train.py --exp_name m2_transformer --batch_size 50 --m 40 --head 8 --warmup 10000 --features_path /path/to/features --annotation_folder /path/to/annotations
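If a run is interrupted, the same command can be resumed from its last saved checkpoint by appending the corresponding flag from the table above:

python train.py --exp_name m2_transformer --batch_size 50 --m 40 --head 8 --warmup 10000 --features_path /path/to/features --annotation_folder /path/to/annotations --resume_last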

Sample Results

References

[1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

More Repositories

1. mammoth (Python, 448 stars): An Extendible (General) Continual Learning Framework based on Pytorch - official codebase of Dark Experience for General Continual Learning
2. dress-code (Python, 379 stars): Dress Code: High-Resolution Multi-Category Virtual Try-On. ECCV 2022
3. multimodal-garment-designer (Python, 373 stars): This is the official repository for the paper "Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing". ICCV 2023
4. show-control-and-tell (Python, 282 stars): Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions. CVPR 2019
5. novelty-detection (Python, 196 stars): Latent space autoregression for novelty detection.
6. art2real (Python, 76 stars): Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation. CVPR 2019
7. VKD (Python, 72 stars): PyTorch code for ECCV 2020 paper: "Robust Re-Identification by Multiple Views Knowledge Distillation"
8. VATr (Python, 65 stars)
9. STAGE_action_detection (Python, 50 stars): Code of the STAGE module for video action detection
10. pacscore (Python, 45 stars): Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation. CVPR 2023
11. open-fashion-clip (Python, 45 stars): This is the official repository for the paper "OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data". ICIAP 2023
12. human-pose-annotation-tool (Python, 38 stars): Human Pose Annotation Tool
13. speaksee (Python, 28 stars): PyTorch library for Visual-Semantic tasks
14. mil4wsi (Python, 26 stars): DAS-MIL: Distilling Across Scales for MIL Classification of Histological WSIs
15. camel (Python, 26 stars): CaMEL: Mean Teacher Learning for Image Captioning. ICPR 2022
16. TransformerBasedGestureRecognition (Python, 23 stars)
17. RefiNet (Python, 21 stars)
18. mvad-names-dataset (Python, 21 stars): M-VAD Names Dataset. Multimedia Tools and Applications (2019)
19. DynamicConv-agent (C++, 21 stars): PyTorch code for BMVC 2019 paper: Embodied Vision-and-Language Navigation with Dynamic Convolutional Filters
20. perceive-transform-and-act (C++, 18 stars): PyTorch code for the paper: "Perceive, Transform, and Act: Multi-Modal Attention Networks for Vision-and-Language Navigation"
21. mcmr (Python, 17 stars): PyTorch code for 3DV 2021 paper: "Multi-Category Mesh Reconstruction From Image Collections"
22. LiDER (Python, 16 stars): Official implementation of "On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning"
23. MaPeT (Python, 15 stars): Learning to Mask and Permute Visual Tokens for Vision Transformer Pre-Training
24. Ti-MGD (15 stars): This is the official repository for the paper "Multimodal-Conditioned Latent Diffusion Models for Fashion Image Editing".
25. awesome-human-visual-attention (14 stars): This repository contains a curated list of research papers and resources focusing on saliency and scanpath prediction, human attention, human visual search.
26. PMA-Net (13 stars): With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning. ICCV 2023
27. LoCoNav (Python, 13 stars)
28. focus-on-impact (Python, 13 stars)
29. safe-clip (13 stars): This is the official repository for the paper "Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models".
30. HWD (Python, 12 stars)
31. CSL-TAL (Python, 11 stars): Pytorch code for ECCVW 2022 paper "Consistency-based Self-supervised Learning for Temporal Anomaly Localization"
32. ADCC (Python, 10 stars)
33. RMSNet_Soccer (Python, 8 stars): PyTorch code for RMS-Net
34. CSSL (Python, 6 stars): Code implementation for "Continual Semi-Supervised Learning through Contrastive Interpolation Consistency"
35. aimagelab-srv (5 stars): AImageLab-SRV wiki, support, code snippets and best practices.
36. rpe_spdh (Python, 5 stars): PyTorch code for IEEE RA-L paper: "Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps"
37. COCOFake (5 stars)
38. vffc (Python, 3 stars)
39. aidlda_tutorial (Python, 3 stars): A tutorial on PyTorch - AI-DLDA 2018
40. LAM (3 stars): The Ludovico Antonio Muratori (LAM) dataset is the largest line-level HTR dataset to date and contains 25,823 lines from Italian ancient manuscripts edited by a single author over 60 years. The dataset comes in two configurations: a basic splitting and a date-based splitting which takes into account the age of the author. The first setting is intended to study HTR on ancient documents in Italian, while the second focuses on the ability of HTR systems to recognize text written by the same writer in time periods for which training data are not available.
41. unveiling-the-truth (2 stars)
42. cvcs2023 (1 star)
43. FourBi (Python, 1 star)
44. DefConvs_HTR (Python, 1 star): Boosting modern and historical handwritten text recognition with deformable convolutions (ICPR20, IJDAR22)
45. Teddy (Python, 1 star)