Transfer Learning of a CNN Model Using Google Colab - Part III

Nir Barazida
5 min read · Sep 1, 2020


In this blog post, I will explore how to perform transfer learning on a CNN image-recognition model (VGG-19) using Google Colab. The task covers binary and multi-class classification of leaf images.

In Parts I and II, I showed how to connect Google Colab to Drive, clone the GitHub repository, and load the dataset into the runtime. We performed preprocessing on the dataset, created a model based on VGG-19, trained it, and fine-tuned the weights. Last, we saved the model, loaded it, and performed a prediction.

In this section, I will show how to create a multi-class classifier using VGG-19 as a base model. I will build on the previous parts, so if you missed them or can't follow some of the steps here, please go back and read Parts I and II.

4. Create a Multi-Class Model

We will first encode the string labels of the multi-class dataset as integers.
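The notebook code is embedded in the original post; here is a minimal sketch of this step with scikit-learn's `LabelEncoder`, assuming the raw string labels live in an array named `labels_multi` (a hypothetical name):

```python
from sklearn.preprocessing import LabelEncoder

# `labels_multi` is assumed to hold the raw string label of every image,
# e.g. 'Apple___healthy', 'Tomato___Leaf_Mold', ...
le_multi = LabelEncoder()
labels_encoded = le_multi.fit_transform(labels_multi)

print('The 15 Label Encoder classes are: \n', le_multi.classes_)
print('Sample of the Multi class dataset labels:', labels_encoded)
```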

The 15 Label Encoder classes are: 
['Apple___healthy' 'Cherry_(including_sour)___Powdery_mildew' 'Cherry_(including_sour)___healthy' 'Corn_(maize)___Common_rust_' 'Corn_(maize)___Northern_Leaf_Blight' 'Corn_(maize)___healthy' 'Pepper,_bell___Bacterial_spot' 'Pepper,_bell___healthy' 'Tomato___Bacterial_spot' 'Tomato___Early_blight' 'Tomato___Leaf_Mold' 'Tomato___Septoria_leaf_spot' 'Tomato___Spider_mites Two-spotted_spider_mite' 'Tomato___Target_Spot' 'Tomato___healthy']
Sample of the Multi class dataset labels: [ 3 3 3 ... 14 14 14]

Next, we split the data into train, validation, and test sets.
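A sketch of the split, assuming the image array is named `images`; the 10%/20% fractions are inferred from the set sizes printed below:

```python
from sklearn.model_selection import train_test_split

# Hold out 10% of the data for testing, then 20% of the remainder for
# validation, stratifying so every class keeps its relative frequency.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels_encoded, test_size=0.1, stratify=labels_encoded)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.2, stratify=y_train)
```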

The train categories distribution is:
{0.0: 1184, 1.0: 758, 2.0: 615, 3.0: 858, 4.0: 709, 5.0: 837, 6.0: 718, 7.0: 1064, 8.0: 1531, 9.0: 720, 10.0: 686, 11.0: 1275, 12.0: 1206, 13.0: 1011, 14.0: 1145}
The test categories distribution is:
{0.0: 165, 1.0: 105, 2.0: 85, 3.0: 119, 4.0: 99, 5.0: 116, 6.0: 100, 7.0: 148, 8.0: 213, 9.0: 100, 10.0: 95, 11.0: 177, 12.0: 168, 13.0: 140, 14.0: 159}
The validation categories distribution is:
{0.0: 296, 1.0: 189, 2.0: 154, 3.0: 215, 4.0: 177, 5.0: 209, 6.0: 179, 7.0: 266, 8.0: 383, 9.0: 180, 10.0: 171, 11.0: 319, 12.0: 302, 13.0: 253, 14.0: 287}

Transform the label array to a vector

Because this is a multi-class model, the output will be a vector of N class probabilities, one per class. Thus, the dimension of the label array must be (sample size, number of classes). To achieve this we will use One-Hot Encoding (OHE); for more information about one-hot encoding, please see this link.
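The post doesn't show which OHE implementation was used; a sketch with Keras' `to_categorical`, which produces the required shape, could be:

```python
from tensorflow.keras.utils import to_categorical

# Turn each integer label into a one-hot vector of length 15,
# e.g. 3 -> [0, 0, 0, 1, 0, ..., 0]
num_classes = len(le_multi.classes_)
y_train = to_categorical(y_train, num_classes)
y_val = to_categorical(y_val, num_classes)
y_test = to_categorical(y_test, num_classes)

print('Train set:\n', (X_train.shape, y_train.shape))
print('Validation set:\n', (X_val.shape, y_val.shape))
print('Test set:\n', (X_test.shape, y_test.shape))
```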

Train set:
((14317, 224, 224, 3), (14317, 15))

Validation set:
((3580, 224, 224, 3), (3580, 15))

Test set:
((1989, 224, 224, 3), (1989, 15))

Load base model
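As in the previous parts, we load VGG-19 pre-trained on ImageNet, without its fully-connected top, and freeze its weights:

```python
from tensorflow.keras.applications import VGG19

# VGG-19 pre-trained on ImageNet, without its fully-connected top;
# with a 224x224 input the convolutional output is (7, 7, 512).
base_model = VGG19(include_top=False, weights='imagenet',
                   input_shape=(224, 224, 3))
base_model.trainable = False  # freeze the base for the first training stage
```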

Create a model
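A sketch that reproduces the summary below: the frozen base is wrapped in a `Sequential` model together with a rescaling layer and a flatten layer:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Rescale pixel values from [0, 255] to [0, 1], run them through the
# frozen VGG-19 base, and flatten the (7, 7, 512) output to one vector.
model = tf.keras.Sequential([
    layers.experimental.preprocessing.Rescaling(
        1. / 255, input_shape=(224, 224, 3)),  # layers.Rescaling in newer TF
    base_model,
    layers.Flatten(),
])
model.summary()
```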

Model: "sequential" _________________________________________________________________ Layer (type)                 Output Shape              Param #    ================================================================= rescaling (Rescaling)        (None, 224, 224, 3)       0          _________________________________________________________________ vgg19 (Functional)           (None, 7, 7, 512)         20024384   _________________________________________________________________ flatten (Flatten)            (None, 25088)             0          ================================================================= Total params: 20,024,384 
Trainable params: 0
Non-trainable params: 20,024,384 _________________________________________________________________

Model output layer:

  • As mentioned above, this is a multi-class model, so the output shall be a vector of shape (1, number of classes) with the probability that the image belongs to each of the classes.
  • Thus, the output size will be `len(le_multi.classes_)` and the activation function will be `softmax`, which normalizes the output vector so that it sums to 1, as in the sketch below.
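A sketch of the head that matches the summary further down; the dropout rate is an assumption:

```python
from tensorflow.keras import layers

# Classification head: two fully-connected blocks with dropout, then a
# softmax layer with one unit per class.
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))  # the 0.5 rate is an assumption
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(len(le_multi.classes_), activation='softmax'))

print('Model input shape: \n', model.input_shape)
print('Model output shape: \n', model.output_shape)
```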
Model input shape: 
(None, 224, 224, 3)
Model output shape:
(None, 15)

Model summary:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
rescaling (Rescaling)        (None, 224, 224, 3)       0
_________________________________________________________________
vgg19 (Functional)           (None, 7, 7, 512)         20024384
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
dense (Dense)                (None, 256)               6422784
_________________________________________________________________
dropout (Dropout)            (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_2 (Dense)              (None, 15)                1935
=================================================================
Total params: 26,481,999
Trainable params: 6,457,615
Non-trainable params: 20,024,384
_________________________________________________________________

Train the model

We will change the loss function to `categorical_crossentropy` because this is a multi-class model.
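A sketch of the compile-and-fit step; the optimizer, batch size, and epoch count are assumptions:

```python
# Categorical cross-entropy matches the one-hot encoded labels.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=10, batch_size=32)
```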

Model Performance
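We evaluate on the held-out test set (a batch size of 64 would explain the 32 steps shown below, but that is an inference):

```python
# Accuracy on data the model has never seen.
loss, acc = model.evaluate(X_test, y_test, batch_size=64)
print(f'Model accuracy on test set: {acc * 100:.1f}%')
```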

32/32 [==============================] - 5s 143ms/step - 
Model accuracy on test set: 84.0%

Note:
The model accuracy reached 84% on the test set. This is a fairly low result, and the training curves suggest that with more epochs the model would have reached better performance. For this blog post, though, it will be enough.

Fine-Tuning the Model
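To fine-tune, we un-freeze the base model and re-compile with a much lower learning rate so the pre-trained weights are only nudged; the exact rate here is an assumption. Un-freezing all 16 convolutional layers (kernel + bias each) plus the 3 dense layers of the head gives the 38 trainable variables printed below:

```python
import tensorflow as tf

# Make the VGG-19 base trainable; re-compiling is required for the
# trainable flag to take effect.
base_model.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

print('The number of trainable variables in the model is: ',
      len(model.trainable_variables))

# ...then train again for 10 epochs with model.fit, as before.
```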

The number of trainable variables in the model is:  38

Model Performance with Fine-Tuning

32/32 [==============================] - 4s 135ms/step
Model accuracy on test set: 98.7%

We can see how important it is to fine-tune the model: with only 10 epochs, we reached 98.7% accuracy on the test set.

Save the Label Encoder
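A sketch with `pickle`; the file name is an assumption:

```python
import pickle

# Persist the fitted label encoder so predicted class indices can be
# mapped back to class names after the model is re-loaded.
with open('label_encoder_multi.pkl', 'wb') as f:
    pickle.dump(le_multi, f)
```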

Load the Model and Make a Prediction
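A sketch of inference on a single image; the file paths and the `image` array (shape (224, 224, 3)) are assumptions:

```python
import pickle
import numpy as np
import tensorflow as tf

# Re-load the trained model and the label encoder saved above.
model = tf.keras.models.load_model('multi_class_model.h5')
with open('label_encoder_multi.pkl', 'rb') as f:
    le_multi = pickle.load(f)

# Add a batch dimension, predict, and map the most probable class
# index back to its original string label.
probs = model.predict(image[np.newaxis, ...])
pred_label = le_multi.inverse_transform([np.argmax(probs)])[0]
print('Model prediction for the image is: ', pred_label)
```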

Model prediction for the image is:  Tomato___Leaf_Mold

This is it for this blog post.
I hope you had a good time reading it and got valuable insights out of it.
You are more than welcome to visit my GitHub repository for more projects and resources.

I would like to thank my Data Science mentors at Israel Tech Challenge (ITC) for sharing their endless knowledge of the world of Data Science, and especially Morris Alper for advising on the writing of this blog post.
