
Active Deep Learning for Medical Imaging Segmentation

Marc Górriz, Axel Carlier, Emmanuel Faure, Xavier Giro-i-Nieto

A joint collaboration between:

IRIT Vortex Group, INP Toulouse - ENSEEIHT, UPC Image Processing Group

Abstract

We propose a novel Active Learning framework capable of effectively training a convolutional neural network for semantic segmentation of medical images with a limited amount of labeled training data. Our contribution is a practical Cost-Effective Active Learning approach that uses dropout at test time as Monte Carlo sampling to model pixel-wise uncertainty and to analyze the image information in order to improve training performance.

Publication

ML4H: Machine Learning for Health Workshop at NIPS 2017, Long Beach, CA, USA. In press. Find the pre-print version of our work on arXiv.


Please cite with the following Bibtex code:

@article{DBLP:journals/corr/abs-1711-09168,
  author    = {Marc Gorriz and
               Axel Carlier and
               Emmanuel Faure and
               Xavier {Gir{\'{o}} i Nieto}},
  title     = {Cost-Effective Active Learning for Melanoma Segmentation},
  journal   = {CoRR},
  volume    = {abs/1711.09168},
  year      = {2017},
  url       = {http://arxiv.org/abs/1711.09168},
  archivePrefix = {arXiv},
  eprint    = {1711.09168},
  timestamp = {Mon, 04 Dec 2017 18:34:59 +0100},
  biburl    = {http://dblp.org/rec/bib/journals/corr/abs-1711-09168},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}

Slides

<iframe src="//www.slideshare.net/slideshow/embed_code/key/cadu74MspLHLW5" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe>

Cost-Effective Active Learning methodology

A Cost-Effective Active Learning (CEAL) algorithm can interactively query new labeled instances from a pool of unlabeled data, either from a human annotator or from the ConvNet model itself (automatic annotations from high-confidence predictions). Candidates to be labeled are chosen by estimating their uncertainty, based on the stability of the pixel-wise predictions when dropout is applied to the deep neural network. We trained the U-Net architecture with the CEAL methodology to solve the melanoma segmentation problem, obtaining promising results given the limited amount of labeled data. A rough sketch of the selection step is shown below the architecture figure.

[Figure: CEAL architecture]
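To make the selection step concrete, here is a minimal sketch; it is not the repository's actual code, and the function and parameter names (mc_predict, uncertainty_scores, ceal_split, n_mc, conf_threshold) are illustrative assumptions. It runs several stochastic forward passes with dropout kept active, scores each unlabeled image by its mean per-pixel standard deviation, sends the most uncertain images to the human annotator, and pseudo-labels the most confident ones with the network's own predictions.

import numpy as np

def mc_predict(model, x, n_mc=10):
    # training=True keeps Dropout active, so each pass is a stochastic sample
    return np.stack([model(x, training=True).numpy() for _ in range(n_mc)])

def uncertainty_scores(model, x_pool, n_mc=10):
    # Score each image by the mean per-pixel std over the MC passes
    preds = mc_predict(model, x_pool, n_mc)        # (n_mc, N, H, W, 1)
    return preds.std(axis=0).mean(axis=(1, 2, 3))  # one score per image

def ceal_split(model, x_pool, n_human=10, conf_threshold=0.02, n_mc=10):
    scores = uncertainty_scores(model, x_pool, n_mc)
    order = np.argsort(scores)                     # ascending uncertainty
    human_idx = order[-n_human:]                   # most uncertain -> human annotator
    auto_idx = order[scores[order] < conf_threshold]  # very confident images
    auto_idx = np.setdiff1d(auto_idx, human_idx)
    # Pseudo-labels: thresholded mean of the MC predictions
    mean_pred = mc_predict(model, x_pool[auto_idx], n_mc).mean(axis=0)
    return human_idx, auto_idx, (mean_pred > 0.5).astype(np.float32)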

Datasets

As explained in our work, all tests were done with the ISIC 2017 Challenge dataset for Skin Lesion Analysis Towards Melanoma Detection. We split its training set into labeled and unlabeled subsets in order to simulate the Active Learning setting, with large amounts of unlabeled data available at the beginning.
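For illustration, a minimal sketch of that simulation, assuming the training images and masks are already loaded as NumPy arrays; the names (split_pool, n_initial) are hypothetical, not the repository's API:

import numpy as np

def split_pool(x_train, y_train, n_initial=600, seed=42):
    # Keep a small labeled seed set; treat the rest as an unlabeled pool
    # by withholding its ground-truth masks, as in the Active Learning setup.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x_train))
    labeled, pool = idx[:n_initial], idx[n_initial:]
    return (x_train[labeled], y_train[labeled]), x_train[pool]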

Software frameworks: Keras

The model is implemented in Keras, which in turn runs on top of TensorFlow.

pip install -r https://raw.githubusercontent.com/marc-gorriz/CEAL-Medical-Image-Segmentation/master/requeriments.txt
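For orientation only (this is not the repository's model file, and tiny_unet is a hypothetical name), a toy Keras model in the same spirit shows where the Dropout layers sit so that the Monte Carlo sampling above has stochasticity to exploit:

from tensorflow.keras import layers, models

def tiny_unet(input_shape=(128, 128, 3), drop_rate=0.5):
    # Minimal U-Net-style encoder/decoder with one skip connection
    inp = layers.Input(shape=input_shape)
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    d2 = layers.Dropout(drop_rate)(c2)  # stays active when called with training=True
    u1 = layers.concatenate([layers.UpSampling2D()(d2), c1])  # skip connection
    c3 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)       # per-pixel mask
    return models.Model(inp, out)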

Acknowledgements

We would especially like to thank Albert Gil Moreno from our technical support team at the Image Processing Group at the UPC.

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan X used in this work.

The Image Processing Group at the UPC is a SGR14 Consolidated Research Group recognized and sponsored by the Catalan Government (Generalitat de Catalunya) through its AGAUR office.

Contact

If you have any general questions about our work or code that may be of interest to other researchers, please use the public issues section of this GitHub repository. Alternatively, drop us an e-mail at mailto:[email protected].