Handwritten Character Recognition with Very Small Datasets (TextCaps)
This repository contains the code for TextCaps, introduced in the paper TextCaps: Handwritten Character Recognition with Very Small Datasets (WACV 2019).
Authors
Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Jathushan Rajasegaran, Suranga Seneviratne, Ranga Rodrigo
Citation
If you find TextCaps useful in your research, please consider citing:
@inproceedings{jayasundara2019textcaps,
title={TextCaps: Handwritten Character Recognition With Very Small Datasets},
author={Jayasundara, Vinoj and Jayasekara, Sandaru and Jayasekara, Hirunima and Rajasegaran, Jathushan and Seneviratne, Suranga and Rodrigo, Ranga},
booktitle={2019 IEEE Winter Conference on Applications of Computer Vision (WACV)},
pages={254--262},
year={2019},
month={Jan},
organization={IEEE}
}
Contents
- Introduction
- Usage
- Results on MNIST, EMNIST and F-MNIST
- Loss Function Analysis
- Follow-up Projects
- Credits
- Contact
Introduction
Many localized languages struggle to reap the benefits of recent advances in character recognition systems due to the lack of substantial amounts of labeled training data. This stems from the difficulty of generating large amounts of labeled data for such languages, and from the inability of deep learning techniques to learn properly from a small number of training samples. We solve this problem by generating new training samples from the existing ones, with realistic augmentations that reflect actual variations in human handwriting, by adding random controlled noise to their corresponding instantiation parameters. With a mere 200 training samples per class, our results surpass existing character recognition results on the EMNIST-letters dataset, while matching existing results on three other datasets: EMNIST-balanced, EMNIST-digits, and MNIST. Our system is useful for character recognition in localized languages that lack much labeled training data, and even in related, more general contexts such as object recognition.
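At the heart of this data generation technique is perturbing the instantiation parameter vector produced by a class capsule and decoding each perturbed copy back into an image. Below is a minimal, self-contained sketch of the idea; the 16-dimensional vector matches a typical CapsNet class capsule, but the decoder and the noise range here are placeholders standing in for the retrained decoder and the controlled-noise settings, so this is illustrative rather than the repository's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the real system, inst_params comes from the trained CapsNet M1
# and decode is its (re-trained) reconstruction network.
inst_params = rng.normal(size=(16,))               # one class capsule's output
decode = lambda z: z @ rng.normal(size=(16, 784))  # placeholder decoder

# Add random controlled noise to the instantiation parameters, then decode
# each perturbed vector into a new 28x28 training image.
noise = rng.uniform(-0.1, 0.1, size=(10, 16))      # 10 variants, assumed noise range
new_images = decode(inst_params[None, :] + noise).reshape(10, 28, 28)
```

Because each noise draw yields a distinct perturbed vector, repeating this step grows the training set by any desired amount.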
Our system comprises five steps as follows:
(a) Initial Training of the CapsNet model M1
(b) Generating instantiation parameters and reconstructed images
(c) Applying the decoder re-training technique
(d) New image data generation technique
(e) Training the CapsNet model M2 afresh with the new dataset
Figure 1: The overall methodology of the TextCaps system
Usage
- Install the packages in `requirements.txt`, along with required dependencies such as cuDNN:

  `pip install -r requirements.txt`
- Clone this repo:

  `git clone https://github.com/vinoj/TextCaps.git`
- Download and extract the dataset.
- The following command trains a fresh CapsNet M1, as illustrated in step (a):

  `python textcaps_emnist_bal.py --cnt 200`

  The `cnt` parameter specifies the number of training samples to be used per class. Any other custom dataset can also be used; see the subsampling sketch below.
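To prepare a custom dataset for a given `--cnt` value, all that is needed is a per-class subsample of the training set. Here is a minimal sketch of that selection, assuming `x_train`/`y_train` are NumPy arrays with integer labels; the function name and variables are illustrative, not taken from the repository:

```python
import numpy as np

def subsample_per_class(x_train, y_train, cnt=200, seed=0):
    """Keep at most `cnt` randomly chosen training samples per class."""
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(y_train):
        idx = np.flatnonzero(y_train == c)  # indices of samples in class c
        keep.append(rng.choice(idx, size=min(cnt, idx.size), replace=False))
    keep = np.concatenate(keep)
    return x_train[keep], y_train[keep]
```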
- The following command generates new images, as illustrated in steps (b)-(d):

  `python textcaps_emnist_bal.py -dg --save_dir emnist_bal_200/ -w emnist_bal_200/trained_model.h5 --samples_to_generate 10`

  Any number of new data samples can be generated by adjusting the `samples_to_generate` parameter.
Results on MNIST, EMNIST and F-MNIST
The table below shows the results of TextCaps on the MNIST, EMNIST and F-MNIST datasets. We include the results obtained with the full training sets, as well as with only 200 training samples per class. In both cases, we used the full test sets.
| Dataset | Full train set | 200 samples/class |
|---|---|---|
| EMNIST-Letters | 95.36 ± 0.30% | 92.79 ± 0.30% |
| EMNIST-Balanced | 90.46 ± 0.22% | 87.82 ± 0.25% |
| EMNIST-Digits | 99.79 ± 0.11% | 98.96 ± 0.22% |
| MNIST | 99.71 ± 0.18% | 98.68 ± 0.30% |
| Fashion MNIST | 93.71 ± 0.64% | 85.36 ± 0.79% |
Our system can generate an arbitrary number of distinct images, as illustrated below.
Figure 2: New data generated with the proposed system
CapsNet performance improves drastically when the network is trained afresh with the newly generated dataset. The following figure compares the performance of CapsNets trained on the original dataset with those trained on the augmented dataset, which contains only 0.5% additional data generated by our system.
Figure 3: Comparing the results with and without the proposed system
Loss Function Analysis
We investigate how the choice of reconstruction loss function affects reconstruction quality in a capsule network, in order to identify a reconstruction loss well suited to the TextCaps model.
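For reference, the PSNR metric plotted in Figure 4 scores a reconstruction against the original image via their mean squared error. A minimal implementation, assuming pixel values scaled to [0, 1]:

```python
import numpy as np

def psnr(img_true, img_pred, max_val=1.0):
    """Peak signal-to-noise ratio, in dB, between an image and its reconstruction."""
    mse = np.mean((np.asarray(img_true) - np.asarray(img_pred)) ** 2)
    # A perfect reconstruction (mse == 0) yields infinite PSNR.
    return 10.0 * np.log10(max_val ** 2 / mse)
```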
Figure 4: Change in PSNR for different reconstruction loss functions
Thus, for practical use, we suggest using L1 with DSSIM or BCE.
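As one concrete way to realize the suggested combination, the sketch below pairs an L1 term with DSSIM in TensorFlow/Keras; the equal weighting `alpha=0.5` and the [0, 1] pixel range are our assumptions, not values prescribed by the paper.

```python
import tensorflow as tf

def l1_dssim_loss(y_true, y_pred, alpha=0.5):
    """Weighted sum of L1 reconstruction error and DSSIM.

    Expects batches of images shaped (batch, height, width, channels)
    with pixel values in [0, 1].
    """
    l1 = tf.reduce_mean(tf.abs(y_true - y_pred))                       # L1 term
    ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))  # mean SSIM
    dssim = (1.0 - ssim) / 2.0                                         # structural dissimilarity
    return alpha * l1 + (1.0 - alpha) * dssim
```

Swapping the DSSIM term for a pixel-wise binary cross-entropy gives the alternative L1 + BCE combination.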
Follow-up Projects
- DeepCaps: Going Deeper with Capsule Networks
- TreeCaps: Tree-Structured Capsule Networks for Program Source Code Processing
Credits
We used this as the base CapsNet implementation, and we thank and credit its contributors.
Contact
[email protected]
Discussions, suggestions and questions are welcome!