[EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering

TVQA

PyTorch code accompanying our EMNLP 2018 paper:

TVQA: Localized, Compositional Video Question Answering

Jie Lei, Licheng Yu, Mohit Bansal, Tamara L. Berg

Update 2022-10-24: Our original web server is down due to a hardware failure. Please access the data, website, and submission/leaderboard from this new link.

Resources

Dataset

TVQA is a large-scale video QA dataset based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It consists of 152.5K QA pairs from 21.8K video clips, spanning over 460 hours of video. The questions are designed to be compositional, requiring systems to jointly localize relevant moments within a clip, comprehend subtitles-based dialogue, and recognize relevant visual concepts. Download TVQA data from ./data.
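As a sketch of what the downloaded annotations look like, the snippet below loads one QA entry. The filename and field names (q, a0-a4, answer_idx, vid_name, ts) are assumptions about the released JSON Lines format; check the actual files under ./data.

```python
import json

# Minimal sketch of loading TVQA QA annotations. The schema below
# (q, a0-a4, answer_idx, vid_name, ts) is an assumption about the
# released .jsonl files, not a guaranteed format.
def load_qa(path):
    """Read one QA entry per line from a JSON Lines file."""
    with open(path) as f:
        return [json.loads(line) for line in f]

# Example entry in the assumed schema (hypothetical values):
example = {
    "qid": 0,
    "vid_name": "friends_s01e01_seg02_clip_00",
    "q": "What is Joey holding when he talks to Chandler?",
    "a0": "A pizza box", "a1": "A coffee mug", "a2": "A script",
    "a3": "A basketball", "a4": "A sandwich",
    "answer_idx": 1,      # index of the correct answer among a0-a4
    "ts": "12.5-20.3",    # localized start-end timestamp (seconds)
}
correct = example["a%d" % example["answer_idx"]]
```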

  • QA example

    [QA example figure]

    See examples in video: click here

  • Statistics

    | TV Show               | Genre   | #Season | #Episode | #Clip  | #QA     |
    |-----------------------|---------|---------|----------|--------|---------|
    | The Big Bang Theory   | sitcom  | 10      | 220      | 4,198  | 29,384  |
    | Friends               | sitcom  | 10      | 226      | 5,337  | 37,357  |
    | How I Met Your Mother | sitcom  | 5       | 72       | 1,512  | 10,584  |
    | Grey's Anatomy        | medical | 3       | 58       | 1,472  | 9,989   |
    | House M.D.            | medical | 8       | 176      | 4,621  | 32,345  |
    | Castle                | crime   | 8       | 173      | 4,698  | 32,886  |
    | Total                 | -       | 44      | 925      | 21,793 | 152,545 |

Model Overview

A multi-stream model in which each stream processes a different contextual input.

[model overview figure]
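To illustrate the multi-stream idea only: each stream scores the five candidate answers against one contextual input (e.g. subtitles, visual concepts), and the per-stream scores are summed before taking the argmax. The stream functions below are invented stand-ins, not the paper's actual LSTM-based encoders.

```python
import numpy as np

# Hypothetical sketch of multi-stream answer scoring. Each "stream"
# here is a toy word-overlap scorer standing in for a learned encoder;
# only the score-summing structure reflects the model overview above.
def subtitle_stream(answers, subtitles):
    # stand-in: count answer words that appear in the subtitle text
    sub_words = set(subtitles.split())
    return np.array([sum(w in sub_words for w in a.split())
                     for a in answers], dtype=float)

def concept_stream(answers, visual_concepts):
    # stand-in: count answer words among detected visual concept labels
    return np.array([sum(w in visual_concepts for w in a.split())
                     for a in answers], dtype=float)

def predict(answers, subtitles, visual_concepts):
    # sum the per-stream scores, then pick the best-scoring answer
    scores = subtitle_stream(answers, subtitles) \
           + concept_stream(answers, visual_concepts)
    return int(np.argmax(scores))

answers = ["a dog", "a red mug", "a book", "a hat", "a car"]
idx = predict(answers,
              subtitles="she put the mug on the table",
              visual_concepts={"mug", "red", "table"})
```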

Requirements:

  • Python 2.7
  • PyTorch 0.4.0
  • tensorboardX
  • pysrt
  • tqdm
  • h5py
  • numpy

Video Features

  • ImageNet feature: Extracted from ResNet101, Google Drive link
  • Regional Visual Feature: object-level encodings from object detector (too large to share ...)
  • Visual Concepts Feature: object labels and attributes from object detector download link.

For object detector, we used Faster R-CNN trained on Visual Genome, please refer to this repo.
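Since h5py appears in the requirements, the downloaded features are presumably stored in HDF5. The snippet below sketches writing and reading per-clip frame features; the one-dataset-per-clip key layout, the clip name, and the 2048-d ResNet101 feature size are assumptions, so verify against the released files.

```python
import h5py
import numpy as np

# Hedged sketch: per-clip frame features in an HDF5 file, one dataset
# keyed by clip name. The clip id, file name, and the 30 x 2048 shape
# (30 sampled frames, 2048-d ResNet101 pooled features) are assumptions.
clip_name = "friends_s01e01_seg02_clip_00"          # hypothetical clip id
feats = np.random.rand(30, 2048).astype("float32")  # 30 frames x 2048-d

with h5py.File("imagenet_feats.h5", "w") as f:
    f.create_dataset(clip_name, data=feats)

with h5py.File("imagenet_feats.h5", "r") as f:
    loaded = f[clip_name][:]    # load one clip's features into memory
```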

Usage

  1. Clone this repo

    git clone https://github.com/jayleicn/TVQA.git
    
  2. Download data

    Questions, answers, subtitles, etc. can be downloaded directly by executing the following command:

    bash download.sh
    

    For video frames and video features, please visit the TVQA Download Page.

  3. Preprocess data

    python preprocessing.py
    

    This step processes the subtitle files and tokenizes all textual sentences.

  4. Build word vocabulary, extract relevant GloVe vectors

    For words that do not exist in GloVe, random vectors np.random.randn(self.embedding_dim) * 0.4 are used, where 0.4 is approximately the standard deviation of the GloVe vectors.

    mkdir cache
    python tvqa_dataset.py
    
  5. Training

    python main.py --input_streams sub
    
  6. Inference

    python test.py --model_dir [results_dir] --mode valid
    

Results

Please note that this is an improved version of the original implementation used for the EMNLP paper. Basically, I rewrote some of the data preprocessing code and updated the model to a newer version of PyTorch. Using this code, you should be able to get slightly higher accuracy (~1%) than reported in our paper.

Citation

@inproceedings{lei2018tvqa,
  title={TVQA: Localized, Compositional Video Question Answering},
  author={Lei, Jie and Yu, Licheng and Bansal, Mohit and Berg, Tamara L},
  booktitle={EMNLP},
  year={2018}
}

TODO

  1. Add data preprocessing scripts
  2. Add baseline scripts
  3. Add model and training scripts
  4. Add test scripts

Contact

Jie Lei, jielei [at] cs.unc.edu
