
kaggle-ds-bowl-2018-baseline

Full train/inference/submission pipeline adapted to the Data Science Bowl competition from https://github.com/matterport/Mask_RCNN. Kudos to @matterport, @waleedka and others for the code. It is well written, but also somewhat opinionated, which makes it harder to see what's going on under the hood; that is why this fork exists.

I made almost no changes to the original code, except for:

  • Everything custom lives in bowl_config.py and bowl_dataset.py (a config sketch follows this list).
  • VALIDATION_STEPS and STEPS_PER_EPOCH are now hardcoded to depend on the dataset size.
  • multiprocessing=False, hardcoded.
  • @John1231983's changes from this PR.
  • Added a RESNET_ARCHITECTURE variable to the config (resnet50 or resnet101; resnet101 is the upstream default).
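
For orientation, bowl_config.py presumably looks something like the sketch below. It assumes matterport's top-level module layout (config.py at the repo root); the class name and the concrete step counts are illustrative placeholders, not this fork's actual values.

# Hypothetical sketch of bowl_config.py; the numbers are placeholders.
from config import Config  # matterport/Mask_RCNN base config

class BowlConfig(Config):
    NAME = "bowl"
    NUM_CLASSES = 1 + 1                 # background + nucleus
    RESNET_ARCHITECTURE = "resnet50"    # this fork's addition; resnet101 is the upstream default
    # Hardcoded to track the dataset size instead of the upstream defaults:
    STEPS_PER_EPOCH = 600               # ~ number of train images // IMAGES_PER_GPU
    VALIDATION_STEPS = 60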

Quick Start

  1. First, you have to download the train masks. Thanks to @lopuhin for bringing all the fixes into one place. You might want to clone the fixes repo outside of this repo so you can pull its updates later, and symlink the data in:
git clone https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes ../kaggle-dsbowl-2018-dataset-fixes
ln -s ../kaggle-dsbowl-2018-dataset-fixes/stage1_train stage1_train
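
For reference, bowl_dataset.py presumably walks this stage1_train layout (one directory per image, with images/ and masks/ subfolders) through matterport's utils.Dataset API. A minimal sketch, with the loader name and method bodies as assumptions:

# Hypothetical sketch of bowl_dataset.py; details are illustrative.
import os
import numpy as np
import skimage.io
import utils  # matterport/Mask_RCNN

class BowlDataset(utils.Dataset):
    def load_bowl(self, root="stage1_train"):
        self.add_class("bowl", 1, "nucleus")
        for sample_id in os.listdir(root):
            self.add_image("bowl", image_id=sample_id,
                           path=os.path.join(root, sample_id,
                                             "images", sample_id + ".png"))

    def load_mask(self, image_id):
        # Stack each per-nucleus mask PNG into one [H, W, N] boolean array
        sample_dir = os.path.dirname(os.path.dirname(self.image_info[image_id]["path"]))
        mask_dir = os.path.join(sample_dir, "masks")
        masks = np.stack([skimage.io.imread(os.path.join(mask_dir, f)) > 0
                          for f in sorted(os.listdir(mask_dir))], axis=-1)
        return masks, np.ones(masks.shape[-1], dtype=np.int32)  # every instance is a nucleus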
  2. Download the rest of the official dataset and unzip it into the repo:
unzip ~/Downloads/stage1_test.zip -d stage1_test
unzip ~/Downloads/stage1_train_labels.csv.zip -d .
unzip ~/Downloads/stage1_sample_submission.csv.zip -d .
  3. Install pycocotools and the COCO pretrained weights (mask_rcnn_coco.h5). The general idea is described here. Keep in mind that, to install pycocotools properly, it's better to run make install instead of make.

  4. For single-GPU training, run:

CUDA_VISIBLE_DEVICES="0" python train.py
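
For context, train.py presumably follows matterport's standard training API; a hedged outline (the loader call and layer selection here are assumptions, not necessarily this repo's exact code):

# Hypothetical outline of train.py, assuming matterport's top-level layout.
import model as modellib
from bowl_config import BowlConfig
from bowl_dataset import BowlDataset

config = BowlConfig()
dataset = BowlDataset()
dataset.load_bowl("stage1_train")  # loader name is an assumption
dataset.prepare()

model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")
# Start from COCO weights, skipping the heads whose shapes depend on NUM_CLASSES
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])
# Train data doubles as the validation set for now (see the TODO below)
model.train(dataset, dataset, learning_rate=config.LEARNING_RATE,
            epochs=100, layers="all")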
  5. To generate a submission, run:
CUDA_VISIBLE_DEVICES="0" python inference.py

This will create submission.csv in the repo and overwrite the old one (you're welcome to fix this with a PR).
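
For context, the competition expects each predicted mask as a run-length encoding: pixels are numbered top to bottom, then left to right (column-major), 1-indexed, written as space-separated "start length" pairs. A minimal encoder sketch (not necessarily the exact one inference.py uses):

import numpy as np

def rle_encode(mask):
    # Column-major (Fortran) pixel order, as the competition requires
    pixels = mask.T.flatten()
    # Pad with zeros so every run has a detectable start and end
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1  # 1-indexed change points
    runs[1::2] -= runs[::2]                            # turn end points into run lengths
    return " ".join(map(str, runs))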

  6. Submit! You should score around 0.361 on the LB after 100 epochs.

What else is inside?

TODO

  • Fix validation. For now, the train data is used as the validation set (a possible split is sketched after this list).
  • Normalize data.
  • Move configuration to argparse for easier hyperparameter search.
  • Parallelize data loading.
  • Augmentations.
  • External Data.
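
On the first TODO, one hedged way to get a real validation set is to hold out a random slice of stage1_train and validate on that instead of the training data; the helper below is an illustrative sketch, not code from this repo:

import os
import random

def train_val_split(root="stage1_train", val_fraction=0.1, seed=42):
    # Deterministic split of the per-image directories into train/val ids
    ids = sorted(os.listdir(root))
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]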