Copyright Information
Laboratory 2: Computer Vision
Part 1: MNIST Digit Classification
In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous MNIST dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.
First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
We'll also install Comet. If you followed the instructions from Lab 1, you should have your Comet account set up. Enter your API key below.
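As a rough sketch, the Comet setup might look like the following, assuming the comet_ml package installs cleanly and that comet_ml.init accepts an API key argument; the key itself is a placeholder you paste in:

```python
# Sketch of Comet setup; COMET_API_KEY is a placeholder for your own key.
import comet_ml

COMET_API_KEY = ""  # paste your Comet API key here
comet_ml.init(api_key=COMET_API_KEY)
```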
1.1 MNIST dataset
Let's download and load the dataset and display a few random samples from it:
Our training set is made up of 28x28 grayscale images of handwritten digits.
Let's visualize what some of these images and their corresponding training labels look like.
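A minimal sketch of loading and previewing MNIST is below; the course repository may provide its own loading and plotting utilities, and the variable names train_images, train_labels, test_images, and test_labels simply match those used later in the lab:

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Load MNIST: 60,000 training and 10,000 test images of 28x28 grayscale digits.
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

# Scale pixel values to [0, 1] and add a channel dimension for the CNN later on.
train_images = (np.expand_dims(train_images, axis=-1) / 255.0).astype(np.float32)
test_images = (np.expand_dims(test_images, axis=-1) / 255.0).astype(np.float32)

# Display a few random samples with their training labels.
plt.figure(figsize=(8, 2))
for i, idx in enumerate(np.random.choice(len(train_images), 4, replace=False)):
    plt.subplot(1, 4, i + 1)
    plt.imshow(np.squeeze(train_images[idx]), cmap="gray")
    plt.title(str(train_labels[idx]))
    plt.axis("off")
plt.show()
```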
1.2 Neural Network for Handwritten Digit Classification
We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:
Fully connected neural network architecture
To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the Sequential class. Note how we first use a Flatten layer, which flattens the input so that it can be fed into the model.
In this next block, you'll define the fully connected layers of this simple network.
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.
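One possible completion of that block is sketched below; the helper name build_fc_model is an assumption, and the 128-unit hidden layer matches the architecture described next:

```python
def build_fc_model():
    fc_model = tf.keras.Sequential([
        # Flatten the 28x28 image into a 784-dimensional vector.
        tf.keras.layers.Flatten(),
        # Hidden fully connected layer with 128 units and ReLU activation.
        tf.keras.layers.Dense(128, activation="relu"),
        # Output layer: 10 units with softmax to produce class probabilities.
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    return fc_model

model = build_fc_model()
```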
Let's take a step back and think about the network we've just created. The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a 2D array (28 x 28 pixels) to a 1D array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are fully connected neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.
That defines our fully connected model!
Compile the model
Before training the model, we need to define a few more settings. These are added during the model's compile step:
Loss function — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
Optimizer — This defines how the model is updated based on the data it sees and its loss function.
Metrics — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the accuracy, the fraction of the images that are correctly classified.
We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the cross entropy loss.
You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
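A sketch of the compile step under those settings (SGD with a learning rate of 0.1, cross entropy loss, and accuracy as the metric; since the labels are integer class indices, sparse categorical cross entropy is the matching Keras loss):

```python
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
    loss="sparse_categorical_crossentropy",  # labels are integer digits 0-9
    metrics=["accuracy"],
)
```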
Train the model
We're now ready to train our model, which will involve feeding the training data (train_images and train_labels) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.
In Lab 1, we saw how we can use GradientTape to optimize losses and train models with stochastic gradient descent. After defining the model settings in the compile step, we can also accomplish training by calling the fit method on an instance of the Model class. We will use this to train our fully connected model.
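The training call might look like the following; the batch size of 64 and five epochs are example values you are encouraged to vary:

```python
BATCH_SIZE = 64
EPOCHS = 5

# Train the fully connected model on the MNIST training set.
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
```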
As the model trains, the loss and accuracy metrics are displayed. With five epochs and the learning rate defined above, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.
Evaluate accuracy on the test dataset
Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the test_images array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the test_labels array.
Use the evaluate method to evaluate the model on the test dataset!
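For example, a minimal sketch using the test arrays defined earlier:

```python
# Evaluate the trained model on the held-out test set.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print("Test accuracy:", test_acc)
```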
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting, when a machine learning model performs worse on new data than on its training data.
What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...
1.3 Convolutional Neural Network (CNN) for handwritten digit classification
As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:
Define the CNN model
We'll use the same training and test datasets as before, and proceed similarly to what we did for our fully connected network to define and train our new CNN model. To do this, we will explore two layers we have not encountered before: you can use keras.layers.Conv2D to define convolutional layers and keras.layers.MaxPool2D to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
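A sketch of what the CNN definition might look like is below. The filter counts and kernel sizes here are illustrative placeholders; use the exact values shown in the architecture figure above.

```python
def build_cnn_model():
    cnn_model = tf.keras.Sequential([
        # First convolutional block (filter count and kernel size are placeholders).
        tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation="relu"),
        tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
        # Second convolutional block.
        tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation="relu"),
        tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
        # Fully connected classifier head, ending in a softmax over the 10 digits.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    return cnn_model

cnn_model = build_cnn_model()
```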
Train and test the CNN model
Now, as before, we can define the loss function, optimizer, and metrics through the compile method. Compile the CNN model with an optimizer and learning rate of your choice:
As was the case with the fully connected model, we can train our CNN using the fit method via the Keras API.
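For instance, compiling with the Adam optimizer at a learning rate of 1e-3 (one reasonable starting point among many) and training with fit might look like:

```python
cnn_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # one possible choice
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
cnn_model.fit(train_images, train_labels, batch_size=64, epochs=5)
```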
Great! Now that we've trained the model, let's evaluate it on the test dataset using the evaluate method:
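As before, a minimal sketch:

```python
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print("Test accuracy:", test_acc)
```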
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model?
Feel free to click the Comet links to investigate the training/accuracy curves for your model.
Make predictions with the CNN model
With the model trained, we can use it to make predictions about some images. The predict function call generates the output predictions given a set of input samples.
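A sketch of that call on the test set:

```python
# Compute a 10-class probability distribution for every test image.
predictions = cnn_model.predict(test_images)
```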
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits.
Let's look at the digit that has the highest confidence for the first image in the test dataset:
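Using NumPy's argmax to pull out the most confident class, this might look like:

```python
# Index of the largest probability = the model's most confident digit class.
predicted_digit = np.argmax(predictions[0])
print(predicted_digit)
```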
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
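A rough sketch of such a plot for a single test image is below; the helper name plot_prediction is an assumption, and the course code provides more polished plotting utilities:

```python
def plot_prediction(i, predictions, labels, images):
    # Left panel: the image, titled with the predicted digit and its confidence.
    predicted_label = np.argmax(predictions[i])
    color = "blue" if predicted_label == labels[i] else "grey"

    plt.figure(figsize=(6, 3))
    plt.subplot(1, 2, 1)
    plt.imshow(np.squeeze(images[i]), cmap="gray")
    plt.title(f"{predicted_label} ({100 * np.max(predictions[i]):.0f}%)", color=color)
    plt.axis("off")

    # Right panel: histogram of the predicted probabilities over the 10 digits.
    plt.subplot(1, 2, 2)
    plt.bar(range(10), predictions[i])
    plt.xticks(range(10))
    plt.ylim([0, 1])
    plt.show()

plot_prediction(0, predictions, test_labels, test_images)
```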
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
1.4 Training the model 2.0
Earlier in the lab, we used the fit function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, though, it abstracts away many details of the training loop, so we have less control over how the model is trained; that finer control can be useful in other contexts.
As an alternative to this, we can use the tf.GradientTape class to record differentiation operations during training, and then call the tf.GradientTape.gradient function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here.
We'll use this framework to train our cnn_model using stochastic gradient descent.
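A minimal sketch of such a training loop is below, assuming the build_cnn_model helper and the training arrays from earlier; the batch size, epoch count, and learning rate are example values:

```python
batch_size = 64
epochs = 2
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

cnn_model = build_cnn_model()  # re-initialize the model before training

# Batch the training data so each gradient step sees a small random subset.
dataset = (
    tf.data.Dataset.from_tensor_slices((train_images, train_labels))
    .shuffle(10000)
    .batch(batch_size)
)

for epoch in range(epochs):
    for images, labels in dataset:
        with tf.GradientTape() as tape:
            # Forward pass: compute class probabilities and the loss.
            probs = cnn_model(images, training=True)
            loss = loss_fn(labels, probs)
        # Backward pass: compute gradients and take one SGD step.
        grads = tape.gradient(loss, cnn_model.trainable_variables)
        optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
    print(f"Epoch {epoch + 1}: last batch loss = {loss.numpy():.4f}")
```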
1.5 Conclusion
In this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias.