Image Classification with CIFAR-10 dataset
In this notebook, I am going to classify images from the CIFAR-10 dataset, which consists of airplanes, dogs, cats, and other objects. The images are preprocessed first, then a convolutional neural network is trained on all the samples. The images need to be normalized and the labels need to be one-hot encoded. Some more interesting datasets can be found here
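As a quick preview, here is a minimal sketch of those two preprocessing helpers (the notebook's own implementations may differ in detail):

```python
import numpy as np

def normalize(x):
    # Scale raw pixel values from the 0-255 range down to the 0-1 range.
    return x / 255.0

def one_hot_encode(labels, num_classes=10):
    # Turn a list of integer class labels (0-9) into one-hot row vectors.
    return np.eye(num_classes)[labels]
```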
Some of the code and descriptions in this notebook are borrowed from this repo provided by Udacity's Deep Learning Nanodegree program. This notebook has been reproduced and decorated with richer descriptions after completing Udacity's project.
Contents
1. Get the Data
2. Understanding the dataset
3. Hands-on experience implementing normalize and one-hot encoding function
4. TensorFlow Basics
5. Model Architecture and construction (using different types of APIs: tf.nn, tf.layers, tf.contrib)
6. Training the model (how to feed and evaluate a TensorFlow graph)
7. Prediction
Please open up the Jupyter notebook to see the full descriptions.
2. Understanding the dataset
Each original batch of data is a (10000 x 3072) matrix stored in a numpy array, where the number of rows, (10000), indicates the number of samples. As stated on the CIFAR-10/CIFAR-100 dataset page, each row vector of size (3072) represents a color image of 32x32 pixels.
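For reference, a single batch can be loaded from the pickled python version of the dataset roughly as follows; the folder name `cifar-10-batches-py` is the default directory the archive extracts to, and the helper name is only illustrative:

```python
import pickle

def load_cfar10_batch(cifar10_dataset_folder_path, batch_id):
    # Each data_batch_<i> file is a pickled dict with 'data' and 'labels' keys.
    with open(cifar10_dataset_folder_path + '/data_batch_' + str(batch_id), mode='rb') as file:
        batch = pickle.load(file, encoding='latin1')

    features = batch['data']    # numpy array of shape (10000, 3072)
    labels = batch['labels']    # list of 10000 integer labels in the range 0-9
    return features, labels

features, labels = load_cfar10_batch('cifar-10-batches-py', 1)
print(features.shape)           # (10000, 3072)
```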
Since this project is going to use a CNN for the classification task, the row vector, (3072), is not an appropriate form of image data to feed into the model. In order to feed image data into a CNN model, the tensor representing an image should have the shape (width x height x num_channel) or (num_channel x width x height).
Which form to use is your choice (check out the TensorFlow conv2d documentation). In this particular project, I am going to use the first one, since it matches the default data format ('NHWC') of TensorFlow's convolution operations.
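For example, here is a small sketch of what the default `NHWC` layout looks like when calling `tf.nn.conv2d` (TensorFlow 1.x style; the filter values are just placeholders):

```python
import tensorflow as tf

# With the default data_format='NHWC', the input tensor is laid out as
# [batch, height, width, channels], i.e. (None, 32, 32, 3) for CIFAR-10.
images = tf.placeholder(tf.float32, shape=(None, 32, 32, 3))
filters = tf.Variable(tf.random_normal((3, 3, 3, 64)))  # [height, width, in_channels, out_channels]
conv = tf.nn.conv2d(images, filters, strides=[1, 1, 1, 1], padding='SAME')
```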
The row vector (3072) has exactly the same number of elements, since 32*32*3 == 3072. Reshaping the row vector into that image tensor requires two steps: the first uses numpy's reshape function, and the second uses numpy's transpose function, as sketched below.
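A minimal sketch of those two steps on a whole batch, assuming `features` is the (10000 x 3072) array loaded above:

```python
import numpy as np

# The raw CIFAR-10 row layout is channel-major: 1024 red values, then
# 1024 green, then 1024 blue. First recover (num_channel, height, width),
# then move the channel axis to the end to get (height, width, num_channel).
images = features.reshape((len(features), 3, 32, 32)).transpose(0, 2, 3, 1)

print(images.shape)   # (10000, 32, 32, 3)
```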
5. Model Architecture
The entire model consists of 14 layers in total. The list below shows the layers and the additional techniques applied to build the model; a condensed code sketch follows below.
- Convolution with 64 different filters of size (3x3)
- Max Pooling by 2
- ReLU activation function
- Batch Normalization
- Convolution with 128 different filters of size (3x3)
- Max Pooling by 2
- ReLU activation function
- Batch Normalization
- Convolution with 256 different filters of size (3x3)
- Max Pooling by 2
- ReLU activation function
- Batch Normalization
- Convolution with 512 different filters of size (3x3)
- Max Pooling by 2
- ReLU activation function
- Batch Normalization
- Flattening the 3-D output of the last convolutional operation
- Fully Connected Layer with 128 units
- Dropout
- Batch Normalization
- Fully Connected Layer with 256 units
- Dropout
- Batch Normalization
- Fully Connected Layer with 512 units
- Dropout
- Batch Normalization
- Fully Connected Layer with 1024 units
- Dropout
- Batch Normalization
- Fully Connected Layer with 10 units (number of image classes)
The image below describes how the conceptual convolving operation differs from the TensorFlow implementation when you use the [Channel x Width x Height] tensor format.
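Below is a condensed sketch of that 14-layer stack using the tf.layers API (TensorFlow 1.x); the exact initializers, strides, and batch-norm training flags are assumptions rather than the notebook's verbatim code:

```python
import tensorflow as tf

def conv_net(x, keep_prob):
    # x: (batch, 32, 32, 3) float tensor; keep_prob: dropout keep probability.
    net = x

    # Four blocks of convolution -> max pooling -> ReLU -> batch normalization.
    for num_filters in (64, 128, 256, 512):
        net = tf.layers.conv2d(net, num_filters, (3, 3), padding='same')
        net = tf.layers.max_pooling2d(net, pool_size=(2, 2), strides=(2, 2))
        net = tf.nn.relu(net)
        net = tf.layers.batch_normalization(net)

    # Flatten the 3-D feature maps into one vector per sample.
    net = tf.layers.flatten(net)

    # Four fully connected blocks with dropout and batch normalization.
    for num_units in (128, 256, 512, 1024):
        net = tf.layers.dense(net, num_units, activation=tf.nn.relu)
        net = tf.nn.dropout(net, keep_prob)
        net = tf.layers.batch_normalization(net)

    # Final fully connected layer with one unit per CIFAR-10 class.
    logits = tf.layers.dense(net, 10)
    return logits
```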
6. Training the model
The model achieves over 75% accuracy in 10 epochs, training over the 5 data batches.
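Here is a minimal sketch of the training loop, assuming the `conv_net` function sketched above, a hypothetical `load_preprocessed_batch` helper that returns normalized features and one-hot labels for one of the 5 training batches, and validation data available as `valid_features` / `valid_labels`:

```python
import tensorflow as tf

tf.reset_default_graph()
x = tf.placeholder(tf.float32, (None, 32, 32, 3), name='x')
y = tf.placeholder(tf.float32, (None, 10), name='y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

logits = conv_net(x, keep_prob)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

epochs, batch_size, keep_probability = 10, 128, 0.7   # assumed hyperparameters

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        for batch_id in range(1, 6):                              # the 5 training batches
            features, labels = load_preprocessed_batch(batch_id)  # hypothetical helper
            for start in range(0, len(features), batch_size):
                feed = {x: features[start:start + batch_size],
                        y: labels[start:start + batch_size],
                        keep_prob: keep_probability}
                sess.run(optimizer, feed_dict=feed)
        valid_acc = sess.run(accuracy, feed_dict={x: valid_features,
                                                  y: valid_labels,
                                                  keep_prob: 1.0})
        print('Epoch {:>2}: validation accuracy {:.4f}'.format(epoch + 1, valid_acc))
```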