Laboratory 2: Computer Vision
Part 1: MNIST Digit Classification
In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous MNIST dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.
First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
We'll also install Comet. If you followed the instructions from Lab 1, you should have your Comet account set up. Enter your API key below.
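If you need a starting point, a minimal sketch of the Comet setup might look like this (the project name below is just a placeholder):

```python
import comet_ml

# Prompts for (or reads) your Comet API key; the project name is a placeholder
comet_ml.init(project_name="6s191-lab2")
```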
1.1 MNIST dataset
Let's download and load the dataset and display a few random samples from it:
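A minimal sketch of downloading MNIST with torchvision (the data path is a placeholder):

```python
import torch
from torchvision import datasets, transforms

# Convert PIL images to float tensors with values in [0, 1]
transform = transforms.ToTensor()

# Download the training and test splits (the root path is a placeholder)
train_dataset = datasets.MNIST(root="./data", train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root="./data", train=False, download=True, transform=transform)
```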
The MNIST dataset object in PyTorch is not a simple tensor or array. It's a `Dataset` object that returns samples (image-label pairs) one at a time when indexed. In a later section of this lab, we will define a handy `DataLoader` to process the data in batches.
Our training set is made up of 28x28 grayscale images of handwritten digits.
Let's visualize what some of these images and their corresponding training labels look like.
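One way to do this, as a sketch using matplotlib and the `train_dataset` defined above:

```python
import random
import matplotlib.pyplot as plt

# Plot a 3x3 grid of random training samples with their labels
indices = random.sample(range(len(train_dataset)), 9)
plt.figure(figsize=(6, 6))
for i, idx in enumerate(indices):
    image, label = train_dataset[idx]  # image: (1, 28, 28) tensor
    plt.subplot(3, 3, i + 1)
    plt.imshow(image.squeeze(), cmap="gray")  # drop the channel dimension
    plt.title(label)
    plt.axis("off")
plt.show()
```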
1.2 Neural Network for Handwritten Digit Classification
We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:
Fully connected neural network architecture
To define the architecture of this first fully connected neural network, we'll once again use the `torch.nn` modules, defining the model using `nn.Sequential`. Note how we first use a `nn.Flatten` layer, which flattens the input so that it can be fed into the model.
In this next block, you'll define the fully connected layers of this simple network.
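If you get stuck, a minimal sketch of the completed model might look like the following; the 128-unit hidden layer matches the description below, while the choice of ReLU activation is one common option:

```python
import torch.nn as nn

# Two-layer fully connected model; outputs one raw score (logit) per digit class
model = nn.Sequential(
    nn.Flatten(),             # (batch, 1, 28, 28) -> (batch, 784)
    nn.Linear(28 * 28, 128),  # first fully connected layer, 128 neurons
    nn.ReLU(),
    nn.Linear(128, 10),       # one output per digit class
)
```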
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.
Let's take a step back and think about the network we've just created. The first layer in this network, `nn.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two `nn.Linear` layers. These are fully connected neural layers. The first `nn.Linear` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) returns one score per digit class; passed through a softmax, these scores become a probability distribution that sums to 1, where each entry indicates the probability that the current image belongs to one of the handwritten digit classes.
That defines our fully connected model!
Embracing subclassing in PyTorch
Recall that in Lab 1, we explored creating more flexible models by subclassing `nn.Module`. This technique of defining models is more commonly used in PyTorch. We will practice using this approach of subclassing to define our models for the rest of the lab.
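For example, the same fully connected architecture expressed by subclassing `nn.Module` (a sketch; the class name is our own):

```python
import torch.nn as nn

class FullyConnectedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        # Flatten the image, then apply the two fully connected layers
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        return self.fc2(x)

model = FullyConnectedModel()
```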
Model Metrics and Training Parameters
Before training the model, we need to define components that govern its performance and guide its learning process. These include the loss function, optimizer, and evaluation metrics:
Loss function — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
Optimizer — This defines how the model is updated based on the data it sees and its loss function.
Metrics — Here we can define metrics that we want to use to monitor the training and testing steps. In this example, we'll define and take a look at the accuracy, the fraction of the images that are correctly classified.
We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the cross entropy loss.
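As a concrete sketch, assuming the model defined above is named `model`:

```python
import torch.nn as nn
import torch.optim as optim

# Cross entropy loss in PyTorch expects raw logits and integer class labels
loss_function = nn.CrossEntropyLoss()

# SGD with the learning rate from the text; try e.g. optim.Adam as an experiment
optimizer = optim.SGD(model.parameters(), lr=1e-1)
```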
You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
Train the model
We're now ready to train our model, which will involve feeding the training data (`train_dataset`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. The dataset consists of (image, label) tuples that we will iteratively access in batches.
In Lab 1, we saw how we can use the `.backward()` method to optimize losses and train models with stochastic gradient descent. In this section, we will define a function that uses `.backward()` and `optimizer.step()` to train the model, automatically updating the model parameters (weights and biases) as we saw in Lab 1.
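A minimal sketch of such a training function, assuming the `loss_function` and `optimizer` defined above (the function name and signature are our own):

```python
def train(model, dataloader, loss_function, optimizer):
    # One pass (epoch) over the training data
    model.train()
    total_loss, correct = 0.0, 0
    for images, labels in dataloader:
        optimizer.zero_grad()                 # clear gradients from the previous batch
        logits = model(images)                # forward pass
        loss = loss_function(logits, labels)
        loss.backward()                       # backpropagate to compute gradients
        optimizer.step()                      # update weights and biases
        total_loss += loss.item() * images.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
    n = len(dataloader.dataset)
    return total_loss / n, correct / n        # average loss and accuracy
```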
Recall, we mentioned in Section 1.1 that the MNIST dataset can be accessed iteratively in batches. Here, we will define a PyTorch `DataLoader` that will enable us to do that.
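For example (batch size 64 is a common default, not a requirement):

```python
from torch.utils.data import DataLoader

# Shuffle the training data each epoch; keep the test set in a fixed order
trainset_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
testset_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
```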
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.
Evaluate accuracy on the test dataset
Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, iterating over the `testset_loader` allows us to access our test images and test labels. To evaluate accuracy, we can check whether the model's predictions match the labels from this loader.
Since we have now trained the model, we will put it in evaluation mode (`model.eval()`) before running it on the test dataset.
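A sketch of such an evaluation function, mirroring the `train` function above (names are our own):

```python
import torch

def evaluate(model, dataloader, loss_function):
    # Eval mode disables training-specific layers like dropout, and
    # no_grad() skips the gradient bookkeeping we don't need at test time
    model.eval()
    total_loss, correct = 0.0, 0
    with torch.no_grad():
        for images, labels in dataloader:
            logits = model(images)
            total_loss += loss_function(logits, labels).item() * images.size(0)
            correct += (logits.argmax(dim=1) == labels).sum().item()
    n = len(dataloader.dataset)
    return total_loss / n, correct / n

test_loss, test_acc = evaluate(model, testset_loader, loss_function)
```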
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting, when a machine learning model performs worse on new data than on its training data.
What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...
1.3 Convolutional Neural Network (CNN) for handwritten digit classification
As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:
Define the CNN model
We'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use `nn.Conv2d` to define convolutional layers and `nn.MaxPool2d` to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model. You can decide to use `nn.Sequential` or to subclass `nn.Module` based on your preference.
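If you choose the subclassing route, a sketch might look like the following; the filter counts and kernel sizes here are placeholders, so substitute the values from the architecture figure above:

```python
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 24, kernel_size=3)   # 28x28 -> 26x26, 24 filters
        self.pool1 = nn.MaxPool2d(2)                   # 26x26 -> 13x13
        self.conv2 = nn.Conv2d(24, 36, kernel_size=3)  # 13x13 -> 11x11, 36 filters
        self.pool2 = nn.MaxPool2d(2)                   # 11x11 -> 5x5
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(36 * 5 * 5, 128)
        self.fc2 = nn.Linear(128, 10)                  # one logit per digit class
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.pool1(self.relu(self.conv1(x)))
        x = self.pool2(self.relu(self.conv2(x)))
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        return self.fc2(x)

cnn_model = CNN()
```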
Train and test the CNN model
Earlier in the lab, we defined a `train` function. The body of the function is quite useful because it gives us control over the training loop, and records differentiation operations during training by computing the gradients using `loss.backward()`. You may recall seeing this in Lab 1 Part 1.
We'll use this same framework to train our `cnn_model` using stochastic gradient descent. You are free to implement the following parts with or without the `train` and `evaluate` functions we defined above. What is most important is understanding how to manipulate the bodies of those functions to train and test models.
As we've done above, we can define the loss function, optimizer, and calculate the accuracy of the model. Define an optimizer and learning rate of choice. Feel free to modify as you see fit to optimize your model's performance.
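For example, a sketch that reuses the `train` function and `trainset_loader` from earlier (the learning rate and epoch count are just starting points, not prescribed values):

```python
import torch.nn as nn
import torch.optim as optim

cnn_loss_function = nn.CrossEntropyLoss()
cnn_optimizer = optim.SGD(cnn_model.parameters(), lr=1e-2)

epochs = 5  # a placeholder; tune as you experiment
for epoch in range(epochs):
    train_loss, train_acc = train(cnn_model, trainset_loader, cnn_loss_function, cnn_optimizer)
    print(f"epoch {epoch + 1}: loss={train_loss:.4f}, accuracy={train_acc:.4f}")
```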
Evaluate the CNN Model
Now that we've trained the model, let's evaluate it on the test dataset.
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model?
Feel free to click the Comet links to investigate the training/accuracy curves for your model.
Make predictions with the CNN model
With the model trained, we can use it to make predictions about some images.
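A minimal sketch of generating predictions for a batch of test images (the variable names are our own):

```python
import torch

# Run the trained CNN on one batch of test images and convert the
# raw logits into a probability distribution with softmax
cnn_model.eval()
images, labels = next(iter(testset_loader))
with torch.no_grad():
    logits = cnn_model(images)
    predictions = torch.softmax(logits, dim=1)  # (batch, 10) probabilities
```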
With this function call, the model has predicted the label of the first image in the testing set. Let's take a look at the prediction:
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a distribution over the 10 digit classes. Thus, these numbers describe the model's predicted likelihood that the image corresponds to each of the 10 different digits.
Let's look at the digit that has the highest likelihood for the first image in the test dataset:
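Assuming the `predictions` tensor from the sketch above, this is a simple argmax:

```python
# The index of the largest probability is the model's most likely digit
predicted_digit = predictions[0].argmax().item()
print(predicted_digit)
```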
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits.
Recall that in PyTorch the MNIST dataset is typically accessed using a DataLoader to iterate through the test set in smaller, manageable batches. By appending the predictions, test labels, and test images from each batch, we can gradually accumulate all of the data needed for visualization into single tensors.
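One way to do this accumulation, as a sketch built on the `testset_loader` and `cnn_model` named above:

```python
import torch

# Collect per-batch results, then concatenate into single tensors for plotting
all_predictions, all_labels, all_images = [], [], []
cnn_model.eval()
with torch.no_grad():
    for images, labels in testset_loader:
        all_predictions.append(torch.softmax(cnn_model(images), dim=1))
        all_labels.append(labels)
        all_images.append(images)

all_predictions = torch.cat(all_predictions)
all_labels = torch.cat(all_labels)
all_images = torch.cat(all_images)
```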
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
1.5 Conclusion
In this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias.