
Image Segmentation Keras: Implementation of Segnet, FCN, UNet, PSPNet and other models in Keras.


Implementation of various deep image segmentation models in Keras.

News: Some functionality of this repository has been integrated with https://liner.ai. Check it out!

Link to the full blog post with tutorial: https://divamgupta.com/image-segmentation/2019/06/06/deep-learning-semantic-segmentation-keras.html


Training using GUI interface

You can also train segmentation models on your computer with https://liner.ai.


Models

The following models are supported:

| model_name       | Base Model       | Segmentation Model |
|------------------|------------------|--------------------|
| fcn_8            | Vanilla CNN      | FCN8               |
| fcn_32           | Vanilla CNN      | FCN32              |
| fcn_8_vgg        | VGG 16           | FCN8               |
| fcn_32_vgg       | VGG 16           | FCN32              |
| fcn_8_resnet50   | Resnet-50        | FCN8               |
| fcn_32_resnet50  | Resnet-50        | FCN32              |
| fcn_8_mobilenet  | MobileNet        | FCN8               |
| fcn_32_mobilenet | MobileNet        | FCN32              |
| pspnet           | Vanilla CNN      | PSPNet             |
| pspnet_50        | Vanilla CNN      | PSPNet             |
| pspnet_101       | Vanilla CNN      | PSPNet             |
| vgg_pspnet       | VGG 16           | PSPNet             |
| resnet50_pspnet  | Resnet-50        | PSPNet             |
| unet_mini        | Vanilla Mini CNN | U-Net              |
| unet             | Vanilla CNN      | U-Net              |
| vgg_unet         | VGG 16           | U-Net              |
| resnet50_unet    | Resnet-50        | U-Net              |
| mobilenet_unet   | MobileNet        | U-Net              |
| segnet           | Vanilla CNN      | Segnet             |
| vgg_segnet       | VGG 16           | Segnet             |
| resnet50_segnet  | Resnet-50        | Segnet             |
| mobilenet_segnet | MobileNet        | Segnet             |
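Each model_name in the table corresponds to a constructor in the keras_segmentation.models package. For example, a minimal sketch instantiating resnet50_unet, following the same pattern the examples below use for vgg_unet:

from keras_segmentation.models.unet import resnet50_unet

# build a ResNet-50 based U-Net for 51 classes (illustrative values)
model = resnet50_unet(n_classes=51, input_height=416, input_width=608)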

Example results for the provided pre-trained models:

(input images and the corresponding output segmentation images)

How to cite

If you are using this library, please cite it using:

@article{gupta2023image,
  title={Image segmentation keras: Implementation of segnet, fcn, unet, pspnet and other models in keras},
  author={Gupta, Divam},
  journal={arXiv preprint arXiv:2307.13215},
  year={2023}
}

Getting Started

Prerequisites

  • Keras (recommended version: 2.4.3)
  • OpenCV for Python
  • Tensorflow (recommended version: 2.4.1)
apt-get install -y libsm6 libxext6 libxrender-dev
pip install opencv-python
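To match the recommended versions, one option is to pin them when installing (an illustrative command; any environment that provides these versions works):

pip install tensorflow==2.4.1 keras==2.4.3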

Installing

Install the module

Recommended way:

pip install --upgrade git+https://github.com/divamgupta/image-segmentation-keras

or

pip install keras-segmentation

or

git clone https://github.com/divamgupta/image-segmentation-keras
cd image-segmentation-keras
python setup.py install
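A quick way to confirm the installation is to check that the package imports cleanly:

python -c "import keras_segmentation; print(keras_segmentation.__file__)"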

Pre-trained models:

from keras_segmentation.pretrained import pspnet_50_ADE_20K, pspnet_101_cityscapes, pspnet_101_voc12

# load any one of the three pretrained models
model = pspnet_50_ADE_20K()        # pretrained on the ADE20K dataset
# model = pspnet_101_cityscapes()  # pretrained on the Cityscapes dataset
# model = pspnet_101_voc12()       # pretrained on the Pascal VOC 2012 dataset

out = model.predict_segmentation(
    inp="input_image.jpg",
    out_fname="out.png"
)
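predict_segmentation writes a colored segmentation image to out_fname and also returns the prediction. A minimal sketch of inspecting it, assuming the returned value is a 2D NumPy array of per-pixel class indices:

import numpy as np

print(out.shape)       # (output_height, output_width)
print(np.unique(out))  # the class indices present in the prediction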

Preparing the data for training

You need to create two folders:

  • Images Folder - for all the training images
  • Annotations Folder - for the corresponding ground truth segmentation images

The filenames of the annotation images should be the same as the filenames of the RGB images.

The size of each annotation image should be the same as the size of the corresponding RGB image.

For each pixel in the RGB image, the class label of that pixel is stored as the value of the blue channel of the corresponding pixel in the annotation image.

Example code to generate annotation images:

import cv2
import numpy as np

# create a 30x30 annotation image with every pixel labeled as class 0
ann_img = np.zeros((30, 30, 3)).astype('uint8')
ann_img[3, 4] = 1  # set the label of the pixel at row 3, column 4 to class 1

cv2.imwrite("ann_1.png", ann_img)

Only use lossless formats such as bmp or png for the annotation images; lossy compression (e.g. jpg) would corrupt the class labels.
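As a sanity check, you can read an annotation image back and list the class labels it contains (a minimal sketch using the ann_1.png generated above):

import cv2
import numpy as np

ann = cv2.imread("ann_1.png")  # OpenCV loads images in BGR channel order
labels = ann[:, :, 0]          # the blue channel holds the class labels
print(np.unique(labels))       # e.g. [0 1]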

Download the sample prepared dataset

Download and extract the following:

https://drive.google.com/file/d/0B0d9ZiqAgFkiOHR1NTJhWVJMNEU/view?usp=sharing

You will get a folder named dataset1/

Using the Python module

You can import keras_segmentation in your Python script and use the API:

from keras_segmentation.models.unet import vgg_unet

model = vgg_unet(n_classes=51, input_height=416, input_width=608)

model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/vgg_unet_1", epochs=5
)

out = model.predict_segmentation(
    inp="dataset1/images_prepped_test/0016E5_07965.png",
    out_fname="/tmp/out.png"
)

import matplotlib.pyplot as plt
plt.imshow(out)

# evaluating the model
print(model.evaluate_segmentation(inp_images_dir="dataset1/images_prepped_test/", annotations_dir="dataset1/annotations_prepped_test/"))

Usage via command line

You can also use the tool from the command line.

Visualizing the prepared data

You can also visualize your prepared annotations to verify the prepared data.

To verify the dataset:

python -m keras_segmentation verify_dataset \
 --images_path="dataset1/images_prepped_train/" \
 --segs_path="dataset1/annotations_prepped_train/" \
 --n_classes=50

To visualize the dataset:

python -m keras_segmentation visualize_dataset \
 --images_path="dataset1/images_prepped_train/" \
 --segs_path="dataset1/annotations_prepped_train/" \
 --n_classes=50

Training the Model

To train the model, run the following command:

python -m keras_segmentation train \
 --checkpoints_path="path_to_checkpoints" \
 --train_images="dataset1/images_prepped_train/" \
 --train_annotations="dataset1/annotations_prepped_train/" \
 --val_images="dataset1/images_prepped_test/" \
 --val_annotations="dataset1/annotations_prepped_test/" \
 --n_classes=50 \
 --input_height=320 \
 --input_width=640 \
 --model_name="vgg_unet"

Choose model_name from the table of supported models above.

Getting the predictions

To get the predictions of a trained model:

python -m keras_segmentation predict \
 --checkpoints_path="path_to_checkpoints" \
 --input_path="dataset1/images_prepped_test/" \
 --output_path="path_to_predictions"

Video inference

To get predictions for a video:

python -m keras_segmentation predict_video \
 --checkpoints_path="path_to_checkpoints" \
 --input="path_to_video" \
 --output_file="path_for_save_inferenced_video" \
 --display

To make predictions using your webcam, omit the --input argument or pass your device number, e.g. --input 0.
--display opens a window showing the predicted video; remove this argument when using a headless system.
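For example, to run inference on webcam device 0 with a display window (an illustrative combination of the flags above):

python -m keras_segmentation predict_video \
 --checkpoints_path="path_to_checkpoints" \
 --input 0 \
 --display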

Model Evaluation

To get the IoU scores:

python -m keras_segmentation evaluate_model \
 --checkpoints_path="path_to_checkpoints" \
 --images_path="dataset1/images_prepped_test/" \
 --segs_path="dataset1/annotations_prepped_test/"

Fine-tuning from an existing segmentation model

The following example shows how to fine-tune a pre-trained model to a new task with 51 classes.

from keras_segmentation.models.model_utils import transfer_weights
from keras_segmentation.pretrained import pspnet_50_ADE_20K
from keras_segmentation.models.pspnet import pspnet_50

pretrained_model = pspnet_50_ADE_20K()

new_model = pspnet_50(n_classes=51)

# transfer weights from the pre-trained model to your model
transfer_weights(pretrained_model, new_model)

new_model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/pspnet_50_1", epochs=5
)
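Note that the pre-trained model and the new model have different numbers of classes, so transfer_weights can be expected to copy only the layers whose weight shapes match, leaving the new output layer to be trained from scratch.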

Knowledge distillation for compressing the model

The following example shows how to transfer knowledge from a larger (and more accurate) model to a smaller model. In most cases, the smaller model trained via knowledge distillation is more accurate than the same model trained with vanilla supervised learning.

from keras_segmentation.predict import model_from_checkpoint_path
from keras_segmentation.models.unet import unet_mini
from keras_segmentation.model_compression import perform_distilation

model_large = model_from_checkpoint_path("/checkpoints/path/of/trained/model")
model_small = unet_mini(n_classes=51, input_height=300, input_width=400)

perform_distilation(data_path="/path/to/large_image_set/", checkpoints_path="path/to/save/checkpoints",
    teacher_model=model_large, student_model=model_small, distilation_loss='kl', feats_distilation_loss='pa')

Adding a custom augmentation function to training

The following example shows how to define a custom augmentation function for training.

from keras_segmentation.models.unet import vgg_unet
from imgaug import augmenters as iaa

def custom_augmentation():
    return iaa.Sequential(
        [
            # apply the following augmenters to most images
            iaa.Fliplr(0.5),  # horizontally flip 50% of all images
            iaa.Flipud(0.5),  # vertically flip 50% of all images
        ])

model = vgg_unet(n_classes=51, input_height=416, input_width=608)

model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/vgg_unet_1", epochs=5,
    do_augment=True,  # enable augmentation
    custom_augmentation=custom_augmentation  # sets the augmentation function to use
)
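Note that this example requires the imgaug package (pip install imgaug).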

Custom number of input channels

The following example shows how to set the number of input channels.

from keras_segmentation.models.unet import vgg_unet

model = vgg_unet(n_classes=51, input_height=416, input_width=608,
                 channels=1  # sets the number of input channels
                 )

model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/vgg_unet_1", epochs=5,
    read_image_type=0  # sets how OpenCV will read the images:
                       # cv2.IMREAD_COLOR = 1 (3 channels, BGR),
                       # cv2.IMREAD_GRAYSCALE = 0,
                       # cv2.IMREAD_UNCHANGED = -1 (4 channels, e.g. BGRA)
)

Custom preprocessing

The following example shows how to set a custom image preprocessing function.

from keras_segmentation.models.unet import vgg_unet

def image_preprocessing(image):
    return image + 1

model = vgg_unet(n_classes=51, input_height=416, input_width=608)

model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/vgg_unet_1", epochs=5,
    preprocessing=image_preprocessing  # sets the preprocessing function
)

Custom callbacks

The following example shows how to set custom callbacks for the model training.

from keras_segmentation.models.unet import vgg_unet
from keras.callbacks import ModelCheckpoint, EarlyStopping

model = vgg_unet(n_classes=51, input_height=416, input_width=608)

# When using custom callbacks, the default checkpoint saver is removed
callbacks = [
    ModelCheckpoint(
        filepath="checkpoints/" + model.name + ".{epoch:05d}",
        save_weights_only=True,
        verbose=True
    ),
    EarlyStopping()
]

model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/vgg_unet_1", epochs=5,
    callbacks=callbacks
)

Multiple image inputs

The following example shows how to provide additional image inputs to a model.

from keras_segmentation.models.unet import vgg_unet

model = vgg_unet(n_classes=51, input_height=416, input_width=608)

model.train(
    train_images="dataset1/images_prepped_train/",
    train_annotations="dataset1/annotations_prepped_train/",
    checkpoints_path="/tmp/vgg_unet_1", epochs=5,
    other_inputs_paths=[
        "/path/to/other/directory"
    ],

    # preprocessing can be a list with a different function for each input ...
    preprocessing=[lambda x: x+1, lambda x: x+2, lambda x: x+3],
    # ... or a single function applied to every input:
    # preprocessing=lambda x: x+1,
)

Projects using keras-segmentation

Here are a few projects which are using our library:

If you use our code in a publicly available project, please add the link here (by posting an issue or creating a PR).
