• Stars: 442
  • Rank: 98,677 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: about 7 years ago
  • Updated: about 2 years ago


Repository Details

Code for the 1st place model in Carvana Image Masking Challenge

Kaggle Carvana Image Masking Challenge

carvana header image

Code for the 1st place solution in the Carvana Image Masking Challenge on car segmentation.

We used CNNs to segment the car in each image. To achieve the best results we used an ensemble of several different networks (Linknet, a Unet-like CNN with a custom encoder, and several types of Unet-like CNNs with a VGG11 encoder).
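
As an illustration of the "Unet-like CNN with a VGG11 encoder" idea, here is a minimal sketch; it is not the authors' exact architecture, and the encoder stage splits and decoder design are assumptions made for this example.

```python
# Minimal sketch of a UNet-style network with a VGG11 encoder (an assumption,
# not the repository's exact model). Input height and width must be divisible
# by 16 so the encoder downsampling and decoder upsampling line up.
import torch
import torch.nn as nn
from torchvision.models import vgg11


class DecoderBlock(nn.Module):
    """Upsample, then fuse with the corresponding encoder feature map."""

    def __init__(self, in_channels, skip_channels, out_channels):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels + skip_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))


class UNetVGG11(nn.Module):
    def __init__(self, num_classes=1, pretrained=False):
        super().__init__()
        # VGG11 feature extractor split into stages, one per resolution.
        # An ImageNet-pretrained encoder is commonly used for such models.
        features = vgg11(pretrained=pretrained).features
        self.enc1 = features[0:2]    # 64 channels,  full resolution
        self.enc2 = features[2:5]    # 128 channels, 1/2
        self.enc3 = features[5:10]   # 256 channels, 1/4
        self.enc4 = features[10:15]  # 512 channels, 1/8
        self.enc5 = features[15:20]  # 512 channels, 1/16
        self.dec4 = DecoderBlock(512, 512, 256)
        self.dec3 = DecoderBlock(256, 256, 128)
        self.dec2 = DecoderBlock(128, 128, 64)
        self.dec1 = DecoderBlock(64, 64, 32)
        self.final = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        e5 = self.enc5(e4)
        d4 = self.dec4(e5, e4)
        d3 = self.dec3(d4, e3)
        d2 = self.dec2(d3, e2)
        d1 = self.dec1(d2, e1)
        return self.final(d1)  # per-pixel logits; apply sigmoid for probabilities
```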

Our team:

Blogpost explaining the solution: https://medium.com/kaggle-blog/carvana-image-masking-challenge-1st-place-winners-interview-78fcc5c887a8

Requirements

To train the final models, you will need the following:

  • OS: Ubuntu 16.04
  • Required hardware:
    • Any decent modern computer with an x86-64 CPU
    • 32 GB RAM
    • Powerful GPU: Nvidia Titan X (12 GB VRAM) or Nvidia GeForce GTX 1080 Ti. The more, the better.

Main software for training neural networks

  • CUDA 8.0
  • Python 2.7 and Python 3.5
  • PyTorch 0.2.0

Install

  1. Install the required OS and Python versions
  2. Install packages with pip install -r requirements.txt
  3. Set your paths in config/config.json (a hedged example is sketched after this list):
  • input_data_dir: path to the folder with the input images (train_hq, test_hq), masks (train_masks) and sample_submission.csv
  • submissions_dir: path to the folder that will be used to store predicted probability maps and submission files
  • models_dir: path to the directory that will be used to store model snapshots. You should put the downloaded model weights in this folder.
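
For reference, a minimal sketch of how such a config could be written is shown below. The key names follow the list above; the concrete paths are placeholders you should replace with your own directories.

```python
# Hypothetical helper that writes a minimal config/config.json.
# The key names follow the README; the paths are placeholders.
import json
from pathlib import Path

config = {
    # folder with train_hq, test_hq, train_masks and sample_submission.csv
    "input_data_dir": "/data/carvana",
    # folder for predicted probability maps and submission files
    "submissions_dir": "/data/carvana/submissions",
    # folder for model snapshots (put downloaded weights here)
    "models_dir": "/data/carvana/models",
}

Path("config").mkdir(exist_ok=True)
with open("config/config.json", "w") as f:
    json.dump(config, f, indent=4)
```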

Train all and predict all

If you want to train all the models and generate predictions:

  • Run bash train_and_predict.sh

Train models

We have several separate neural networks in our solution which we then combine in a final ensemble.
To train all the necessary networks:

  • Run bash train.sh

After training finishes, the trained weights are saved in the models_dir directory and can be used by the prediction scripts. Alternatively, you can use the downloaded weights directly and skip the training procedure.

Required time: training may take quite a long time depending on the hardware used. Each epoch takes about 30-60 minutes, depending on the network, on a single Titan X Pascal GPU. The total time needed is about 2140 hours, i.e. ~90 days on a single Titan X Pascal. The required time can be reduced by using more GPUs in parallel.

Predict

  • Run bash predict.sh

It may take a considerable amount of time to generate all predictions, as there is a lot of data in the test set and we need to generate predictions for every single model and then average them. Some of the models use test-time augmentation for the best performance. Each single model takes about 5 hours to predict all test images on a single Titan X GPU.
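
To make the two ideas above concrete, here is a minimal sketch, under stated assumptions rather than the repository's actual predict scripts, of horizontal-flip test-time augmentation for a single model and of averaging probability maps across an ensemble.

```python
# Sketch of test-time augmentation and ensemble averaging (assumptions,
# not the repo's predict code). `images` is a batch of shape (N, C, H, W).
import torch


def predict_with_hflip_tta(model, images):
    """Average predictions over the original and horizontally flipped input."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(images))
        flipped = torch.flip(images, dims=[3])               # flip along width
        probs_flipped = torch.flip(torch.sigmoid(model(flipped)), dims=[3])
    return (probs + probs_flipped) / 2


def ensemble_average(models, images, threshold=0.5):
    """Average per-pixel probabilities over all models, then threshold."""
    avg = sum(predict_with_hflip_tta(m, images) for m in models) / len(models)
    return (avg > threshold).float()  # binary car mask
```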

When all predictions are done, they are merged into a single submission file.
The file ens_scratch2(1)_v1-final(1)_al27(1)_te27(1).csv.gz, which contains the final predicted masks for all test images, will be saved in submissions_dir.

Required time: prediction may take quite a long time depending on the hardware used. It takes from 4 to 8 hours per model to generate predictions on a single Titan X Pascal GPU. The total time needed is about 320 hours, i.e. ~13 days on a single Titan X Pascal. The required time can be reduced by using more GPUs in parallel.

Remarks

Please keep in mind that this is not production-ready code but a very specific solution to a particular competition, created in a short time frame and under a lot of other constraints (limited training data, scarce computing resources and a small number of attempts to check for improvements).

Also, the inherent stochasticity of neural network training at many different levels (random initialization of weights, random augmentations and so on) makes it impossible to reproduce the exact submission from scratch.

More Repositories

  1. deeppose_tf: DeepPose implementation on TensorFlow. Original paper: http://arxiv.org/abs/1312.4659 (Python, 143 stars)
  2. kaggle-lyft-motion-prediction-av: The 3rd place solution for the competition "Lyft Motion Prediction for Autonomous Vehicles" at Kaggle (Python, 116 stars)
  3. kaggle_sea_lions_counting: Solution of the Kaggle competition Steller Sea Lion Population Count (4th place) (Jupyter Notebook, 44 stars)
  4. cliquecnn: Code for our paper "CliqueCNN: Deep Unsupervised Exemplar Learning" https://arxiv.org/abs/1608.08792 (CSS, 23 stars)
  5. googleart_scraper: Scrape images from googleart (Python, 20 stars)
  6. deep_unsupervised_posets: Deep Unsupervised Similarity Learning using Partially Ordered Sets (CVPR17) (Python, 20 stars)
  7. deep_clustering: Implementation of [Deep Clustering for Unsupervised Learning of Visual Features] (Python, 19 stars)
  8. Exemplar_CNN: Unofficial fork of the code used in the paper "Discriminative Unsupervised Feature Learning with Convolutional Neural Networks", NIPS 2014 (MATLAB, 16 stars)
  9. densepose-evolution: Transferring Dense Pose to Proximal Animal Classes, CVPR2020 (9 stars)
  10. Multicore-TSNE: Parallel t-SNE implementation with Python and Torch wrappers. This fork has an option to choose the metric. (C++, 5 stars)
  11. web-crawler: Web crawler for simple.wikipedia.org in C++ (C++, 4 stars)
  12. artprice_scrapper: Small application which uses Selenium and BeautifulSoup to scrape the https://artprice.com website and collect art auction data into a structured format. (Python, 3 stars)
  13. discovering-3d-obj-rel: Discovering Relationships between Object Categories via Universal Canonical Maps (CVPR2021) (3 stars)
  14. awesome-ai-papers: Curated list of awesome AI papers and brief notes on them (2 stars)
  15. hci_similarities (MATLAB, 2 stars)
  16. bilinear-cnn: bilinear-cnn VGG16 (Python, 1 star)
  17. tjprj (Python, 1 star)
  18. ai_cup_2015_code_race (Python, 1 star)
  19. asanakoy.github.io (HTML, 1 star)
  20. blog: my blog (SCSS, 1 star)
  21. kaggle_amazon: Kaggle Amazon satellite images competition (Python, 1 star)