• Stars: 146
• Rank: 251,239 (top 5%)
• Language: Python
• License: Apache License 2.0
• Created: about 3 years ago
• Updated: about 1 year ago

Repository Details

Video Contrastive Learning with Global Context, ICCVW 2021

Video Contrastive Learning with Global Context (VCLR)

This is the official PyTorch implementation of our VCLR paper.

@article{kuang2021vclr,
  title={Video Contrastive Learning with Global Context},
  author={Haofei Kuang and Yi Zhu and Zhi Zhang and Xinyu Li and Joseph Tighe and Sören Schwertfeger and Cyrill Stachniss and Mu Li},
  journal={arXiv preprint arXiv:2108.02722},
  year={2021}
}

Install dependencies

  • Environment setup
    conda create --name vclr python=3.7
    conda activate vclr
    conda install numpy scipy scikit-learn matplotlib scikit-image
    pip install torch==1.7.1 torchvision==0.8.2
    pip install opencv-python tqdm termcolor gcc7 ffmpeg tensorflow==1.15.2
    pip install mmcv-full==1.2.7

Prepare datasets

Please refer to PREPARE_DATA to prepare the datasets.

Prepare pretrained MoCo weights

In this work, we follow SeCo and use the pretrained MoCo v2 weights as initialization.

cd ~
git clone https://github.com/amazon-research/video-contrastive-learning.git
cd video-contrastive-learning
mkdir pretrain && cd pretrain
wget https://dl.fbaipublicfiles.com/moco/moco_checkpoints/moco_v2_200ep/moco_v2_200ep_pretrain.pth.tar
cd ..
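
If you want to sanity-check the download, the checkpoint can be opened with torch.load. Below is a minimal sketch, assuming the usual layout of the official MoCo release (a 'state_dict' entry whose keys carry a 'module.encoder_q.' prefix); verify against your file.

# Sketch: inspect the downloaded MoCo v2 checkpoint (layout assumptions noted above).
import torch

ckpt = torch.load("pretrain/moco_v2_200ep_pretrain.pth.tar", map_location="cpu")
print(ckpt.keys())                  # typically includes 'state_dict'
state_dict = ckpt["state_dict"]
print(list(state_dict.keys())[:5])  # query-encoder keys, e.g. 'module.encoder_q.*'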

Self-supervised pretraining

bash shell/main_train.sh

Checkpoints will be saved under ./results.

Downstream tasks

Linear evaluation

To evaluate the effectiveness of self-supervised pretraining, we run a linear evaluation (probing) on the Kinetics400 dataset: we first extract features with the pretrained weights and then train an SVM classifier on top of them to measure how well the learned features perform (a toy sketch of this setup follows the command below).

bash shell/eval_svm.sh
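
Conceptually, the probing step reduces to fitting a linear classifier on frozen features. Below is a minimal sketch of that idea, assuming features have already been extracted into NumPy arrays; the file paths and LinearSVC settings are illustrative placeholders, not the exact configuration used by shell/eval_svm.sh.

# Sketch of SVM-based linear probing on frozen features (illustrative only;
# shell/eval_svm.sh implements the actual evaluation pipeline).
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

# Hypothetical pre-extracted features of shape (num_clips, feat_dim) with integer labels.
train_feats = np.load("features/k400_train_feats.npy")    # placeholder path
train_labels = np.load("features/k400_train_labels.npy")  # placeholder path
val_feats = np.load("features/k400_val_feats.npy")        # placeholder path
val_labels = np.load("features/k400_val_labels.npy")      # placeholder path

# L2-normalize the frozen features, then fit a linear SVM on top of them.
clf = LinearSVC(C=1.0)
clf.fit(normalize(train_feats), train_labels)
print(f"Top-1 accuracy: {clf.score(normalize(val_feats), val_labels):.3f}")
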
  • Results

    Arch      Pretrained dataset  Epoch  Pretrained model  Acc. on K400
    ResNet50  Kinetics400         400    Download link     64.1

Video retrieval

bash shell/eval_retrieval.sh
  • Results

    Arch      Pretrained dataset  Epoch  Pretrained model  R@1 on UCF101  R@1 on HMDB51
    ResNet50  Kinetics400         400    Download link     70.6           35.2
    ResNet50  UCF101              400    Download link     46.8           17.6
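
Retrieval with frozen features is essentially nearest-neighbor search: each test clip queries the training set, and R@1 counts how often the closest clip shares the query's class. Here is a minimal sketch of that metric, assuming pre-extracted feature arrays (names are placeholders; shell/eval_retrieval.sh implements the actual protocol).

# Sketch: R@1 retrieval via cosine similarity (illustrative only).
import numpy as np

def recall_at_1(query_feats, query_labels, gallery_feats, gallery_labels):
    """Fraction of queries whose nearest gallery clip has the same label."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                 # cosine similarity, shape (num_queries, num_gallery)
    nearest = sim.argmax(axis=1)  # index of the closest gallery clip per query
    return float((gallery_labels[nearest] == query_labels).mean())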

Action recognition & action localization

Here, we use mmaction2 for both tasks. If you are not familiar with mmaction2, you can read the official documentation.

Installation

  • Step 1: Install mmaction2

    To make sure the results can be reproduced, please use our forked version of mmaction2 (version: 0.11.0):

    conda activate vclr
    cd ~
    git clone https://github.com/KuangHaofei/mmaction2
    
    cd mmaction2
    pip install -v -e .
  • Step 2: Prepare the pretrained weights

    Our pretrained backbone uses a different weight format from mmaction2, so it has to be converted to the mmaction2 format before fine-tuning. We provide converted versions of our K400 pretrained weights for TSN and TSM, and we also provide the conversion script, which you can find here (a rough sketch of the idea follows the commands below).

    Download the pretrained weights into the checkpoints directory:

    cd ~/mmaction2
    mkdir checkpoints && cd checkpoints
    wget https://haofeik-data.s3.amazonaws.com/VCLR/pretrained/vclr_mm.pth
    wget https://haofeik-data.s3.amazonaws.com/VCLR/pretrained/vclr_mm_tsm.pth
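
    Conversion of this kind usually amounts to renaming state-dict keys. The following is a rough sketch of the idea only; the 'encoder.' and 'backbone.' prefixes are assumptions for illustration, and the provided conversion script is authoritative.

    # Sketch: rename checkpoint keys into an mmaction2-style state dict.
    # Prefixes below are assumptions; use the official script for the real mapping.
    import torch

    ckpt = torch.load("vclr_pretrained.pth", map_location="cpu")  # placeholder path
    state_dict = ckpt.get("state_dict", ckpt)

    converted = {}
    for key, value in state_dict.items():
        if key.startswith("encoder."):  # assumed source prefix
            converted["backbone." + key[len("encoder."):]] = value

    torch.save({"state_dict": converted}, "vclr_mm_converted.pth")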

Action recognition

Make sure you have prepared the datasets and the environment following the previous steps. Assuming you are in the root directory of mmaction2, follow the steps below to fine-tune the TSN or TSM models for action recognition.

For each dataset, the train and test settings can be found in the configuration files. Each test command also dumps the raw per-video scores to result.json; a note on reading that file back follows the results table below.

  • UCF101

    • config file: tsn_ucf101.py
    • train command:
      ./tools/dist_train.sh configs/recognition/tsn/vclr/tsn_ucf101.py 8 \
        --validate --seed 0 --deterministic
    • test command:
      python tools/test.py configs/recognition/tsn/vclr/tsn_ucf101.py \
        work_dirs/vclr/ucf101/latest.pth \
        --eval top_k_accuracy mean_class_accuracy --out result.json
  • HMDB51

    • config file: tsn_hmdb51.py
    • train command:
      ./tools/dist_train.sh configs/recognition/tsn/vclr/tsn_hmdb51.py 8 \
        --validate --seed 0 --deterministic
    • test command:
      python tools/test.py configs/recognition/tsn/vclr/tsn_hmdb51.py \
        work_dirs/vclr/hmdb51/latest.pth \
        --eval top_k_accuracy mean_class_accuracy --out result.json
  • SomethingSomethingV2: TSN

    • config file: tsn_sthv2.py
    • train command:
      ./tools/dist_train.sh configs/recognition/tsn/vclr/tsn_sthv2.py 8 \
        --validate --seed 0 --deterministic
    • test command:
      python tools/test.py configs/recognition/tsn/vclr/tsn_sthv2.py \
        work_dirs/vclr/tsn_sthv2/latest.pth \
        --eval top_k_accuracy mean_class_accuracy --out result.json
  • SomethingSomethingV2: TSM

    • config file: tsm_sthv2.py
    • train command:
      ./tools/dist_train.sh configs/recognition/tsm/vclr/tsm_sthv2.py 8 \
        --validate --seed 0 --deterministic
    • test command:
      python tools/test.py configs/recognition/tsm/vclr/tsm_sthv2.py \
        work_dirs/vclr/tsm_sthv2/latest.pth \
        --eval top_k_accuracy mean_class_accuracy --out result.json
  • ActivityNet

    • config file: tsn_activitynet.py
    • train command:
      ./tools/dist_train.sh configs/recognition/tsn/vclr/tsn_activitynet.py 8 \
        --validate --seed 0 --deterministic
    • test command:
      python tools/test.py configs/recognition/tsn/vclr/tsn_activitynet.py \
        work_dirs/vclr/tsn_activitynet/latest.pth \
        --eval top_k_accuracy mean_class_accuracy --out result.json
  • Results

    Arch  Dataset               Finetuned model  Acc.
    TSN   UCF101                Download link    85.6
    TSN   HMDB51                Download link    54.1
    TSN   SomethingSomethingV2  Download link    33.3
    TSM   SomethingSomethingV2  Download link    52.0
    TSN   ActivityNet           Download link    71.9
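
As referenced above, --out result.json dumps the raw test outputs. Here is a small sketch of reading them back, assuming the dump holds one class-score vector per test video (this matches how mmcv typically serializes test outputs, but double-check against your file).

# Sketch: recover predicted classes from the dumped test scores
# (assumes result.json holds one class-score vector per test video).
import json
import numpy as np

with open("result.json") as f:
    scores = np.array(json.load(f))  # expected shape: (num_videos, num_classes)

predictions = scores.argmax(axis=1)
print(predictions[:10])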

Action localization

  • Step 1: Follow the previous section; suppose the fine-tuned model is saved at work_dirs/vclr/tsn_activitynet/latest.pth

  • Step 2: Extract ActivityNet features

    cd ~/mmaction2/tools/data/activitynet/
    
    python tsn_feature_extraction.py --data-prefix /home/ubuntu/data/ActivityNet/rawframes \
      --data-list /home/ubuntu/data/ActivityNet/anet_train_video.txt \
      --output-prefix /home/ubuntu/data/ActivityNet/rgb_feat \
      --modality RGB --ckpt /home/ubuntu/mmaction2/work_dirs/vclr/tsn_activitynet/latest.pth
    
    python tsn_feature_extraction.py --data-prefix /home/ubuntu/data/ActivityNet/rawframes \
      --data-list /home/ubuntu/data/ActivityNet/anet_val_video.txt \
      --output-prefix /home/ubuntu/data/ActivityNet/rgb_feat \
      --modality RGB --ckpt /home/ubuntu/mmaction2/work_dirs/vclr/tsn_activitynet/latest.pth
    
    python activitynet_feature_postprocessing.py \
      --rgb /home/ubuntu/data/ActivityNet/rgb_feat \
      --dest /home/ubuntu/data/ActivityNet/mmaction_feat

    Note: the root directory of ActivityNet is /home/ubuntu/data/ActivityNet/ in our case. Please replace it with your own data directory.

  • Step 3: Train and test the BMN model

    • train
      cd ~/mmaction2
      ./tools/dist_train.sh configs/localization/bmn/bmn_acitivitynet_feature_vclr.py 2 \
        --work-dir work_dirs/vclr/bmn_activitynet --validate --seed 0 --deterministic --bmn
    • test
      python tools/test.py configs/localization/bmn/bmn_acitivitynet_feature_vclr.py \
        work_dirs/vclr/bmn_activitynet/latest.pth \
        --bmn --eval AR@AN --out result.json
  • Results

    Arch  Dataset      Finetuned model  AUC   AR@100
    BMN   ActivityNet  Download link    65.5  73.8

Feature visualization

We provide our feature visualization code here. For a quick first look at the learned feature space, a generic t-SNE projection also works; a sketch follows.
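
This is a minimal sketch, assuming pre-extracted features and labels as NumPy arrays (paths are placeholders; this is not the repository's visualization code).

# Sketch: quick t-SNE view of extracted features (generic, not the repo's script).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

feats = np.load("features/val_feats.npy")    # placeholder path
labels = np.load("features/val_labels.npy")  # placeholder path

embedded = TSNE(n_components=2, init="pca").fit_transform(feats)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=4, cmap="tab20")
plt.title("t-SNE of VCLR features")
plt.show()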

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

More Repositories

1. mm-cot (Python, 3,727 stars): Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated)
2. chronos-forecasting (Python, 2,202 stars): Chronos: Pretrained (Language) Models for Probabilistic Time Series Forecasting
3. auto-cot (Jupyter Notebook, 1,218 stars): Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated)
4. patchcore-inspection (Python, 479 stars)
5. siam-mot (Python, 458 stars): SiamMOT: Siamese Multi-Object Tracking
6. alexa-teacher-models (Python, 362 stars)
7. bigdetection (Python, 352 stars): BigDetection: A Large-scale Benchmark for Improved Object Detector Pre-training
8. earth-forecasting-transformer (Jupyter Notebook, 337 stars): Official implementation of Earthformer
9. sccl (Python, 262 stars): PyTorch implementation of Supporting Clustering with Contrastive Learning, NAACL 2021
10. prompt-pretraining (Python, 250 stars): Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition"
11. RefChecker (Python, 235 stars): RefChecker provides an automatic checking pipeline and benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
12. esci-data (Python, 154 stars): Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search
13. tgl (Python, 143 stars)
14. gan-control (Jupyter Notebook, 122 stars): This package provides a PyTorch implementation of "GAN-Control: Explicitly Controllable GANs", ICCV 2021.
15. polygon-transformer (Python, 120 stars)
16. ReFinED (Python, 116 stars): ReFinED is an efficient and accurate entity linking (EL) system.
17. tanl (Python, 113 stars): Structured Prediction as Translation between Augmented Natural Languages
18. unconditional-time-series-diffusion (Python, 112 stars): Official PyTorch implementation of TSDiff models presented in the NeurIPS 2023 paper "Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting"
19. crossnorm-selfnorm (Python, 111 stars): CrossNorm and SelfNorm for Generalization under Distribution Shifts, ICCV 2021
20. cceval (Python, 109 stars): CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023)
21. wqa_tanda (106 stars): This repo provides code and data used in our TANDA paper.
22. spot-diff (Python, 101 stars): Project for "SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation" (ECCV 2022)
23. mintaka (Python, 101 stars): Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022)
24. mix-generation (Python, 100 stars): MixGen: A New Multi-Modal Data Augmentation
25. long-short-term-transformer (Python, 100 stars): [NeurIPS 2021 Spotlight] Official implementation of Long Short-Term Transformer for Online Action Detection
26. alexa-arena (Python, 99 stars)
27. fraud-dataset-benchmark (Jupyter Notebook, 96 stars): Repository for Fraud Dataset Benchmark
28. glass-text-spotting (Python, 94 stars): Official implementation for "GLASS: Global to Local Attention for Scene-Text Spotting" (ECCV'22)
29. meta-q-learning (Python, 92 stars): Code for the paper "Meta-Q-Learning" (ICLR 2020)
30. exponential-moving-average-normalization (Python, 91 stars): PyTorch implementation of EMAN for self-supervised and semi-supervised learning: https://arxiv.org/abs/2101.08482
31. co-with-gnns-example (HTML, 88 stars)
32. datatuner (Python, 87 stars): Code related to the "Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity" paper
33. mxeval (Python, 84 stars)
34. sentence-representations (Python, 77 stars)
35. CodeSage (Python, 75 stars): CodeSage: Code Representation Learning At Scale (ICLR 2024)
36. semimtr-text-recognition (Python, 75 stars): Multimodal Semi-Supervised Learning for Text Recognition (SemiMTR)
37. fact-check-summarization (Python, 72 stars)
38. instruct-video-to-video (Python, 69 stars)
39. tabsyn (Python, 68 stars): Official implementations of "Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space"
40. object-centric-learning-framework (Python, 67 stars)
41. omni-detr (Python, 64 stars): PyTorch implementation of Omni-DETR for omni-supervised object detection: https://arxiv.org/abs/2203.16089
42. progressive-coordinate-transforms (Python, 63 stars): Progressive Coordinate Transforms for Monocular 3D Object Detection, NeurIPS 2021
43. FeatGraph (Python, 62 stars)
44. small-baseline-camera-tracking (61 stars): A dataset to facilitate the research of Structure-from-Motion (SfM) for movie and TV shows.
45. tubelet-transformer (Python, 59 stars): An official implementation of TubeR: Tubelet Transformer for Video Action Detection
46. embert (Python, 52 stars): Code for EmBERT, a transformer model for embodied, language-guided visual task completion.
47. RAGChecker (Python, 52 stars): RAGChecker: A Fine-grained Framework For Diagnosing RAG
48. probconserv (Python, 50 stars): Datasets and code for results presented in the ProbConserv paper
49. semi-vit (Python, 48 stars): PyTorch implementation of Semi-supervised Vision Transformers
50. qa-dataset-converter (Python, 48 stars): Code from the paper "What do Models Learn from Question Answering Datasets?" (EMNLP 2020)
51. masked-diffusion-lm (Python, 48 stars): Official implementation for the paper "A Cheaper and Better Diffusion Language Model with Soft-Masked Noise"
52. transformer-gan (Python, 47 stars)
53. transformers-data-augmentation (Python, 46 stars): Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper
54. gluonmm (Python, 46 stars): A library of transformer models for computer vision and multi-modality research
55. crossmodal-contrastive-learning (Python, 45 stars): CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations, ICCV 2021
56. recode (Python, 44 stars): Code for "ReCode: Robustness Evaluation of Code Generation Models"
57. tracking-dataset (Python, 44 stars)
58. dstc11-track2-intent-induction (Python, 43 stars): DSTC 11 Track 2: Intent Induction from Conversations for Task-Oriented Dialogue
59. dse (Python, 43 stars)
60. dq-bart (Python, 43 stars): DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022)
61. gnn-tail-generalization (Python, 43 stars)
62. auto-rag-eval (Python, 42 stars): Code repo for the ICML 2024 paper "Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation"
63. boon (Jupyter Notebook, 41 stars): Datasets and code for results presented in the BOON paper
64. proteno (40 stars): Data used in the NAACL 2021 paper "Proteno: Text Normalization with Limited Data for Fast Deployment in Text to Speech Systems" (https://arxiv.org/abs/2104.07777)
65. fact-graph (Python, 39 stars): Implementation of the paper "FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations" (NAACL 2022)
66. c2f-seg (Python, 38 stars): Official implementation for the ICCV'23 paper "Coarse-to-Fine Amodal Segmentation with Shape Prior" (C2F-Seg)
67. amazon-multilingual-counterfactual-dataset (37 stars)
68. QA-ViT (Python, 37 stars)
69. indoor-scene-generation-eai (Jupyter Notebook, 36 stars)
70. long-tailed-ood-detection (Python, 36 stars): Official implementation for "Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition" (ICML'22 Long Presentation)
71. efficient-longdoc-classification (Python, 35 stars)
72. object-centric-multiple-object-tracking (Python, 34 stars)
73. hyperbolic-embeddings (Python, 33 stars): Code for hyperboloid embeddings for knowledge graph entities
74. domain-knowledge-injection (Python, 33 stars)
75. azcausal (Python, 32 stars): Causal Inference in Python
76. Repoformer (Python, 32 stars): Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024)
77. ContraCLM (Python, 31 stars): [ACL 2023] Code for ContraCLM: Contrastive Learning For Causal Language Model
78. unified-ept (Python, 29 stars): A Unified Efficient Pyramid Transformer for Semantic Segmentation, ICCVW 2021
79. robust-tableqa (Python, 29 stars): Two approaches for robust TableQA: 1) ITR, a general-purpose retrieval-based approach for handling long tables in TableQA transformer models; 2) LI-RAGE, a robust framework for open-domain TableQA that addresses several limitations (ACL 2023)
80. bold (27 stars): Dataset associated with the "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper
81. replay-based-recurrent-rl (Python, 26 stars): Code for "Task-Agnostic Continual RL: In Praise of a Simple Baseline"
82. controlling-llm-memorization (Python, 25 stars)
83. carbon-assessment-with-ml (Jupyter Notebook, 25 stars): CaML: Carbon Footprinting of Household Products with Zero-Shot Semantic Text Similarity
84. peft-design-spaces (Python, 24 stars): Official implementation for "Parameter-Efficient Fine-Tuning Design Spaces"
85. llm-interpret (Python, 24 stars): Code for the ACL 2023 paper "Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Scale"
86. creating-and-correcting-novel-ml-model-errors (Jupyter Notebook, 24 stars)
87. BartGraphSumm (Python, 23 stars): Implementation of the paper "Efficiently Summarizing Text and Graph Encodings of Multi-Document Clusters" (NAACL 2021)
88. tofueval (23 stars)
89. wqa-cascade-transformers (21 stars)
90. textadain-robust-recognition (Python, 21 stars): TextAdaIN: Paying Attention to Shortcut Learning in Text Recognizers
91. multiatis (Python, 20 stars): Data and code for the paper "End-to-End Slot Alignment and Recognition for Cross-Lingual NLU" (EMNLP 2020)
92. iwslt-autodub-task (Python, 20 stars)
93. street-reasoning (Python, 19 stars): STREET: a multi-task and multi-step reasoning dataset
94. contrastive-controlled-mt (Ruby, 19 stars): Code and data for the IWSLT 2022 shared task on Formality Control for SLT
95. pizza-semantic-parsing-dataset (Python, 19 stars): The PIZZA dataset continues the exploration of task-oriented parsing by introducing a new dataset for parsing pizza and drink orders, whose semantics cannot be captured by flat slots and intents.
96. redset (19 stars): Redset is a dataset containing three months' worth of user query metadata that ran on a selected sample of instances in the Amazon Redshift fleet. We provide query metadata for 200 provisioned and serverless instances each.
97. fast-rl-with-slow-updates (Jupyter Notebook, 18 stars)
98. few-shot-baseline (Python, 17 stars)
99. doc-mt-metrics (Python, 17 stars)
100. normalizer-free-robust-training (Python, 17 stars): Official implementation of "Removing Batch Normalization Boosts Adversarial Training" (ICML'22)