Microsoft COCO Caption Evaluation

Evaluation code for MS COCO caption generation.

Requirements

  • Java 1.8.0
  • Python 2.7

Files

./

  • cocoEvalCapDemo.py (demo script)

./annotation

  • captions_val2014.json (MS COCO 2014 caption validation set)
  • Visit the MS COCO download page for more details.

./results

  • captions_val2014_fakecap_results.json (an example of fake results for running the demo; see the format sketch below)
  • Visit the MS COCO format page for more details.
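
A results file is a JSON list pairing each image_id with a single generated caption. As a minimal sketch of writing one, assuming hypothetical image_ids and captions (the values below are made up for illustration):

    import json

    # Hypothetical results: each entry maps an image_id from the annotation
    # file to exactly one generated caption.
    results = [
        {"image_id": 404464, "caption": "a black and white photo of a street"},
        {"image_id": 380932, "caption": "a group of people standing on a beach"},
    ]

    # Write in the MS COCO results format expected by the evaluation code.
    with open('results/captions_val2014_my_results.json', 'w') as f:
        json.dump(results, f)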

./pycocoevalcap: The folder where all evaluation code is stored.

  • evals.py: Contains the COCOEvalCap class, which can be used to evaluate results on COCO (see the usage sketch after this list).
  • tokenizer: Python wrapper for the Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation code
  • meteor: METEOR evaluation code
  • rouge: ROUGE-L evaluation code
  • cider: CIDEr evaluation code
  • spice: SPICE evaluation code
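
For orientation, here is a minimal evaluation sketch in the spirit of cocoEvalCapDemo.py. It assumes the pycocotools COCO API is installed and that COCOEvalCap is importable from the evals module listed above (adjust the import path if the module name differs in your checkout); file paths follow the folder layout above.

    from __future__ import print_function
    from pycocotools.coco import COCO
    from pycocoevalcap.evals import COCOEvalCap

    # Load ground-truth captions and the generated results.
    coco = COCO('annotation/captions_val2014.json')
    cocoRes = coco.loadRes('results/captions_val2014_fakecap_results.json')

    # Score only the images that appear in the results file.
    cocoEval = COCOEvalCap(coco, cocoRes)
    cocoEval.params['image_id'] = cocoRes.getImgIds()
    cocoEval.evaluate()

    # cocoEval.eval maps metric names (e.g. Bleu_4, METEOR, ROUGE_L, CIDEr,
    # SPICE) to corpus-level scores.
    for metric, score in cocoEval.eval.items():
        print('%s: %.3f' % (metric, score))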

Setup

  • You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run: ./get_stanford_models.sh
  • Note: SPICE will try to create a cache of parsed sentences in ./pycocoevalcap/spice/cache/, which dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in ./pycocoevalcap/spice; in the same file, caching can be turned off by removing the '-cache' argument to 'spice_cmd' (see the sketch below).
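
For illustration, a hedged sketch of those two knobs as they might appear in the SPICE wrapper under ./pycocoevalcap/spice; the surrounding names ('spice_jar', 'input_file') are assumptions, not the actual file contents:

    # Relocate the cache: point CACHE_DIR at any writable directory
    # (by default the cache lives under ./pycocoevalcap/spice/cache/).
    CACHE_DIR = '/big_disk/spice_cache'  # hypothetical path

    # Disable caching: drop the '-cache' argument (and its value) from
    # spice_cmd, e.g. a command list like
    #   spice_cmd = ['java', '-jar', spice_jar, input_file, '-cache', CACHE_DIR]
    # would become
    #   spice_cmd = ['java', '-jar', spice_jar, input_file]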

Developers

  • Xinlei Chen (CMU)
  • Hao Fang (University of Washington)
  • Tsung-Yi Lin (Cornell)
  • Ramakrishna Vedantam (Virginia Tech)

Acknowledgement

  • David Chiang (University of Notre Dame)
  • Michael Denkowski (CMU)
  • Alexander Rush (Harvard University)