Path: blob/main/C2 - Advanced Learning Algorithms/week2/C2W2A1/C2_W2_Assignment.ipynb
Practice Lab: Neural Networks for Handwritten Digit Recognition, Multiclass
In this exercise, you will use a neural network to recognize the hand-written digits 0-9.
Outline
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
numpy is the fundamental package for scientific computing with Python.
matplotlib is a popular library to plot graphs in Python.
tensorflow is a popular platform for machine learning.
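For reference, an import cell along these lines would cover these packages (a sketch; the lab's actual imports may differ slightly):

import numpy as np                                # scientific computing
import matplotlib.pyplot as plt                   # plotting
import tensorflow as tf                           # machine learning platform
from tensorflow.keras.models import Sequential    # model container used later in this lab
from tensorflow.keras.layers import Dense         # fully-connected layer used later in this lab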

3 - Softmax Function
A multiclass neural network generates N outputs. One output is selected as the predicted answer. In the output layer, a vector z is generated by a linear function and fed into a softmax function. The softmax function converts z into a probability distribution as described below. After applying softmax, each output will be between 0 and 1 and the outputs will sum to 1, so they can be interpreted as probabilities. The larger inputs to the softmax will correspond to larger output probabilities.
The softmax function can be written:
$$a_j = \frac{e^{z_j}}{\sum_{k=1}^{N} e^{z_k}}$$
Where $z = \mathbf{w} \cdot \mathbf{x} + b$ and $N$ is the number of features/categories in the output layer.
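A minimal NumPy implementation of my_softmax might look like the following sketch; with z = np.array([1., 2., 3., 4.]) it reproduces the values shown below (up to print precision):

def my_softmax(z):
    # Softmax converts a vector of values to a probability distribution.
    ez = np.exp(z)          # element-wise exponential
    a = ez / np.sum(ez)     # normalize so the outputs sum to one
    return a

z = np.array([1., 2., 3., 4.])
print(f"my_softmax(z):         {my_softmax(z)}")
print(f"tensorflow softmax(z): {tf.nn.softmax(z).numpy()}")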
my_softmax(z): [0.03 0.09 0.24 0.64]
tensorflow softmax(z): [0.03 0.09 0.24 0.64]
All tests passed.
Below, vary the values of the z inputs. Note in particular how the exponential in the numerator magnifies small differences in the values. Note as well that the output values sum to one.
4 - Neural Networks
In last week's assignment, you implemented a neural network to do binary classification. This week you will extend that to multiclass classification. This will utilize the softmax activation.
4.1 Problem Statement
In this exercise, you will use a neural network to recognize ten handwritten digits, 0-9. This is a multiclass classification task where one of n choices is selected. Automated handwritten digit recognition is widely used today - from recognizing zip codes (postal codes) on mail envelopes to recognizing amounts written on bank checks.
4.2 Dataset
You will start by loading the dataset for this task.
The load_data() function shown below loads the data into variables X and y.
The data set contains 5000 training examples of handwritten digits.
Each training example is a 20-pixel x 20-pixel grayscale image of the digit.
Each pixel is represented by a floating-point number indicating the grayscale intensity at that location.
The 20 by 20 grid of pixels is “unrolled” into a 400-dimensional vector.
Each training example becomes a single row in our data matrix X.
This gives us a 5000 x 400 matrix X where every row is a training example of a handwritten digit image.
The second part of the training set is a 5000 x 1 dimensional vector y that contains labels for the training set:
y = 0 if the image is of the digit 0, y = 4 if the image is of the digit 4, and so on.
This is a subset of the MNIST handwritten digit dataset (http://yann.lecun.com/exdb/mnist/)
4.2.1 View the variables
Let's get more familiar with your dataset.
A good place to start is to print out each variable and see what it contains.
The code below prints the first element in the variables X and y.
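For example (assuming load_data() comes from the lab's helper utilities and y has shape (5000, 1)):

X, y = load_data()
print('The first element of X is: ', X[0])     # a 400-element vector of pixel intensities
print('The first element of y is: ', y[0, 0])  # the label, a digit 0-9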
4.2.2 Check the dimensions of your variables
Another way to get familiar with your data is to view its dimensions. Please print the shape of X and y and see how many training examples you have in your dataset.
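For example:

print('The shape of X is: ' + str(X.shape))   # expect (5000, 400)
print('The shape of y is: ' + str(y.shape))   # expect (5000, 1)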
4.2.3 Visualizing the Data
You will begin by visualizing a subset of the training set.
In the cell below, the code randomly selects 64 rows from X, maps each row back to a 20 pixel by 20 pixel grayscale image and displays the images together. The label for each image is displayed above the image.
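A sketch of such a visualization (the transpose when reshaping is an assumption about how the pixels were unrolled):

m, n = X.shape
fig, axes = plt.subplots(8, 8, figsize=(8, 8))   # 8x8 grid to display 64 images
fig.tight_layout(pad=0.1)

for ax in axes.flat:
    # Select a random row, reshape it back to a 20x20 image, and display it.
    random_index = np.random.randint(m)
    X_random_reshaped = X[random_index].reshape((20, 20)).T
    ax.imshow(X_random_reshaped, cmap='gray')
    ax.set_title(y[random_index, 0])             # label displayed above the image
    ax.set_axis_off()
plt.show()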
4.3 Model representation
The neural network you will use in this assignment is shown in the figure below.
This has two dense layers with ReLU activations followed by an output layer with a linear activation.
Recall that our inputs are pixel values of digit images.
Since the images are of size $20\times20$, this gives us $400$ inputs.
The parameters have dimensions that are sized for a neural network with $25$ units in layer 1, $15$ units in layer 2 and $10$ output units in layer 3, one for each digit.
Recall that the dimensions of these parameters is determined as follows:
If the network has $s_{in}$ units in a layer and $s_{out}$ units in the next layer, then
$W$ will be of dimension $s_{in} \times s_{out}$.
$b$ will be a vector with $s_{out}$ elements.
Therefore, the shapes of $W$ and $b$ are:
layer1: The shape of W1 is (400, 25) and the shape of b1 is (25,)
layer2: The shape of W2 is (25, 15) and the shape of b2 is (15,)
layer3: The shape of W3 is (15, 10) and the shape of b3 is (10,)
Note: The bias vector b could be represented as a 1-D (n,) or 2-D (n,1) array. Tensorflow utilizes a 1-D representation and this lab will maintain that convention.
Tensorflow models are built layer by layer. A layer's input dimensions ($s_{in}$ above) are calculated for you. You specify a layer's output dimensions and this determines the next layer's input dimension. The input dimension of the first layer is derived from the size of the input data specified in the model.fit statement below.
Note: It is also possible to add an input layer that specifies the input dimension of the first layer. For example:
tf.keras.Input(shape=(400,)), #specify input shape
We will include that here to illuminate some model sizing.
4.5 Softmax placement
As described in the lecture and the optional softmax lab, numerical stability is improved if the softmax is grouped with the loss function rather than the output layer during training. This has implications when building the model and when using the model.
Building:
The final Dense layer should use a 'linear' activation. This is effectively no activation.
The model.compile statement will indicate this by including from_logits=True:
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
This does not impact the form of the target. In the case of SparseCategoricalCrossentropy, the target is the expected digit, 0-9.
Using the model:
The outputs are not probabilities. If output probabilities are desired, apply a softmax function.
Exercise 2
Below, use a Keras Sequential model and Dense layers with ReLU activations to construct the three-layer network described above.
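One possible construction consistent with the layer sizes above (the layer and model names here are illustrative, not required):

model = Sequential(
    [
        tf.keras.Input(shape=(400,)),               # specify input shape
        Dense(25, activation='relu', name='L1'),    # layer 1: 25 units, ReLU
        Dense(15, activation='relu', name='L2'),    # layer 2: 15 units, ReLU
        Dense(10, activation='linear', name='L3'),  # output layer: linear; softmax is folded into the loss
    ], name='my_model'
)
model.summary()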
All tests passed!
The parameter counts shown in the summary correspond to the number of elements in the weight and bias arrays as shown below.
Let's further examine the weights to verify that tensorflow produced the same dimensions as we calculated above.
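For example, one way to inspect the shapes (assuming the three Dense layers of the sketch above):

[layer1, layer2, layer3] = model.layers
W1, b1 = layer1.get_weights()
W2, b2 = layer2.get_weights()
W3, b3 = layer3.get_weights()
print(f"W1 shape = {W1.shape}, b1 shape = {b1.shape}")   # (400, 25), (25,)
print(f"W2 shape = {W2.shape}, b2 shape = {b2.shape}")   # (25, 15), (15,)
print(f"W3 shape = {W3.shape}, b3 shape = {b3.shape}")   # (15, 10), (10,)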
Expected Output
The following code:
defines a loss function, SparseCategoricalCrossentropy, and indicates the softmax should be included with the loss calculation by adding from_logits=True
defines an optimizer. A popular choice is Adaptive Moment (Adam), which was described in lecture.
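Concretely, the compile and fit calls might look like this (the Adam learning rate of 0.001 is an assumption):

model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # softmax folded into the loss
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),               # Adam optimizer
)

history = model.fit(X, y, epochs=100)   # train for 100 passes over the data set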
Epochs and batches
In the fit statement above, the number of epochs was set to 100. This specifies that the entire data set should be applied during training 100 times. During training, you see output describing the progress of training that looks like this:
Epoch 1/100
157/157 [==============================] - ... - loss: ...
The first line, Epoch 1/100, describes which epoch the model is currently running. For efficiency, the training data set is broken into 'batches'. The default size of a batch in Tensorflow is 32. There are 5000 examples in our data set, or roughly 157 batches. The notation on the 2nd line, 157/157 [====, describes which batch has been executed.
Loss (cost)
In Course 1, we learned to track the progress of gradient descent by monitoring the cost. Ideally, the cost will decrease as the number of iterations of the algorithm increases. Tensorflow refers to the cost as loss. Above, you saw the loss displayed each epoch as model.fit was executing. The .fit method returns a variety of metrics, including the loss. This is captured in the history variable above, which can be used to examine the loss in a plot as shown below.
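For instance (a sketch; the lab may wrap this in a helper function):

plt.plot(history.history['loss'], label='loss')   # one loss value per epoch
plt.xlabel('epoch')
plt.ylabel('loss (cost)')
plt.legend()
plt.show()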
Prediction
To make a prediction, use Keras predict. Below, X[1015] contains an image of a two.
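For example (reshaping the single image to the (1, 400) batch shape that predict expects):

image_of_two = X[1015]
prediction = model.predict(image_of_two.reshape(1, 400))   # outputs are logits, not probabilities
print(f"predicting a Two: \n{prediction}")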
The largest output is prediction[2], indicating the predicted digit is a '2'. If the problem only requires a selection, that is sufficient; use NumPy argmax to select it. If the problem requires a probability, a softmax is required.
To return an integer representing the predicted target, you want the index of the largest probability. This is accomplished with the NumPy argmax function. Both steps are shown in the sketch below.
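prediction_p = tf.nn.softmax(prediction)   # convert logits to probabilities
yhat = np.argmax(prediction_p)             # index of the largest probability is the predicted digit
print(f"predicted digit: {yhat}")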
Let's compare the predictions vs the labels for a random sample of 64 digits. This takes a moment to run.
Let's look at some of the errors.
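A sketch of how the errors could be located (since softmax is monotonic, argmax over the logits matches argmax over the probabilities; y is assumed to have shape (5000, 1)):

yhat = np.argmax(model.predict(X), axis=1)   # predicted digit for every example
errors = np.where(yhat != y[:, 0])[0]        # indices where prediction and label disagree
print(f"{len(errors)} errors out of {len(X)} images")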
Note: increasing the number of training epochs can eliminate the errors on this data set.
Congratulations!
You have successfully built and utilized a neural network to do multiclass classification.