Tutorial 7: Classification (Part II)
Lecture and Tutorial Learning Goals:
After completing this week's lecture and tutorial work, you will be able to:
Describe what a test data set is and how it is used in classification.
Using R, evaluate classification accuracy using a test data set and appropriate metrics.
Execute cross-validation in R to choose the number of neighbours, $K$.
Identify when it is necessary to scale variables before classification, and do this using R.
In a data set with more than 2 attributes, perform K-nearest neighbour classification in R using the `tidymodels` package to predict the class of a test data set.
Describe the advantages and disadvantages of the K-nearest neighbour classification algorithm.
Handwritten Digit Classification using R
Source: https://media.giphy.com/media/UwrdbvJz1CNck/giphy.gif
MNIST is a computer vision dataset that consists of images of handwritten digits like these:
It also includes labels for each image, telling us which digit it is. For example, the labels for the above images are 5, 0, 4, and 1.
In this tutorial, we’re going to train a classifier to look at images and predict what digits they are. Our goal isn’t to train a really elaborate model that achieves state-of-the-art performance, but rather to dip a toe into using classification with pixelated images. As such, we’re going to keep working with the simple K-nearest neighbour classifier we have been exploring in the last two weeks.
Using image data for classification
As mentioned earlier, every MNIST data point has two parts: an image of a handwritten digit and a corresponding label. Both the training set and test set contain images and their corresponding labels.
Each image is 28 pixels by 28 pixels. We can interpret this as a big matrix of numbers:
We can flatten this matrix into a vector of 28x28 = 784 numbers and give it a class label (here 1 for the number one). It doesn’t matter how we flatten the array, as long as we’re consistent between images. From this perspective, the MNIST images are just a bunch of points in a 784-dimensional vector space, with a very rich structure.
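To make the flattening step concrete, here is a minimal sketch (not part of the tutorial code; `img_matrix` is a made-up stand-in for a real MNIST image):

```r
# A stand-in 28 x 28 matrix of pixel intensities (a real image would come
# from the MNIST data).
img_matrix <- matrix(runif(28 * 28), nrow = 28, ncol = 28)

# Flatten row by row; any consistent order works, as long as every image
# uses the same one.
img_vector <- as.vector(t(img_matrix))
length(img_vector)  # 784
```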
We do this for every image of the digits we have, creating a data table like the one shown below that we can use for classification. Note that, like any other classification problem we have seen before, we need many observations for each class. This problem is also a bit different from the first classification problem we encountered (the Wisconsin breast cancer data set), in that we have more than two classes (here we have 10 classes, one for each digit from 0 to 9).
This information is taken from: https://tensorflow.rstudio.com/tensorflow/articles/tutorial_mnist_beginners.html
Question 1.0 Multiple Choice:
{points: 1}
How many rows and columns does the array of an image have?
A. 784 columns and 1 row
B. 28 columns and 1 row
C. 18 columns and 18 rows
D. 28 columns and 28 rows
Assign your answer to an object called `answer1.0`. Make sure the correct answer is an uppercase letter, and surround your answer with quotation marks (e.g. `"F"`).
Question 1.1 Multiple Choice:
{points: 1}
Once we linearize the array, how many rows represent a single number?
A. 28
B. 784
C. 1
D. 18
Assign your answer to an object called `answer1.1`. Make sure the correct answer is an uppercase letter, and surround your answer with quotation marks (e.g. `"F"`).
2. Exploring the Data
Before we move on to the modeling component, it is always important to take a look at our data and understand the problem and the structure of the data well. We can start by loading the images and looking at the first rows of the data set. You can load the data set by running the cell below.
Look at the first 6 rows of `training_data`. What do you notice?
There are no class labels! This data set has already been split into the X's (which you loaded above) and the labels. In addition, there is an extra "X" column which represents the row number (1, 2, 3...). Keep this in mind for now because we will remove it later on. Now, let's load the labels.
Look at the first 6 labels of `training_labels` using the `head()` function.
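In code, that is simply (assuming the loading cell above has been run):

```r
head(training_labels)  # first 6 class labels
```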
Question 2.0
{points: 1}
How many rows does the training data set have? Note that each row is a different handwritten number (as used in the postal code system).

Use `nrow()`. Note that the testing data set should have fewer rows than the training data set.

Assign your answer to an object called `number_of_rows`. Make sure your answer is numeric, so it should not be surrounded by quotation marks.
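As a minimal sketch, this could be computed along these lines:

```r
number_of_rows <- nrow(training_data)  # count of training observations
number_of_rows
```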
Question 2.1
{points: 1}
For multi-class classification with K-nn, it is important for the classes to have about the same number of observations in each class. For example, if 90% of our training set observations were labelled as 2's, then K-nn classification would predict 2 almost every time, and we would get an accuracy score of 90% even though our classifier wasn't really doing a great job.

Use the `group_by` and `summarize` functions to get the counts for each group and see if the data set is balanced across the classes (i.e., has roughly equal numbers of observations for each class). Name the output `counts`. `counts` should be a data frame with 2 columns, `y` and `n` (the column `n` should hold the count of observations for each class group).
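One possible shape for this computation is sketched below (assuming the tidyverse is loaded and the label column in `training_labels` is named `y`, as the question states):

```r
# Count how many observations fall into each digit class.
counts <- training_labels %>%
  group_by(y) %>%
  summarize(n = n())
counts
```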
Question 2.2 True or False:
{points: 1}
The classes are not balanced. Some of them are many times larger or smaller than others.
Assign your answer to an object called `answer2.2`. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. `"true"` or `"false"`).
To view an image in the notebook, you can use the `show_digit` function (we gave you the code for this function in the first code cell of the notebook; all you have to do to use it is run the cell below). The `show_digit` function takes the row from the data set whose image you want to produce, which you can obtain using the `slice` function.

The code we provide below will show you the image for the observation in the 200th row of the training data set.
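The exact cell lives in the notebook, but it could look something like this sketch (it assumes `show_digit` accepts a one-row slice of the pixel data):

```r
training_data %>%
  slice(200) %>%   # grab the 200th observation
  show_digit()     # helper defined in the first code cell of the notebook
```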
Question 2.3
{points: 3}
Show the image for row 102.
If you are unsure what number the plot is depicting (because the handwriting is messy), you can use `slice` to get the label from `training_labels`:
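For example, for row 200 (a sketch; swap in the row you need):

```r
slice(training_labels, 200)  # the label for the 200th observation
```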
Question 2.4
{points: 1}
What is the class label for row 102?
Assign your answer to an object called `label_102`.
3. Splitting the Data
Question 3.0
{points: 1}
Currently, the data set and labels are split. The `tidymodels` package needs the data and labels to be married together. Use the `bind_cols` function to marry the data sets to their respective labels. Call the training data with its respective labels `training_set`, and the testing data with its respective labels `testing_set`. Even though the data set has been split for you already, remember that you need both a training and a testing data set when designing a K-nn classification model.

Also, remember in Section 2 that we told you to keep something in mind? To remind you: there is an extra "X" column on the far left of the `training_set` which represents the row numbers (1, 2, 3, etc.). This column should not be used for training, so let's remove it from the data set.

Hint: You can remove columns in a data set using the `select` function by putting a negative sign in front of the column you want to exclude (e.g. `-X`).
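A minimal sketch of this step, assuming the testing objects are named `testing_data` and `testing_labels` (check the loading cell for the actual names):

```r
# Marry the pixel data to its labels, then drop the row-number column "X".
training_set <- bind_cols(training_data, training_labels) %>%
  select(-X)

testing_set <- bind_cols(testing_data, testing_labels) %>%
  select(-X)  # assumes the testing data carries the same "X" column

# Depending on how the labels were read in, you may also need to convert
# the label to a factor for classification, e.g.
# training_set <- mutate(training_set, y = as_factor(y))
```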
Question 3.1
{points: 1}
We have already split the data into two data sets, one for training purposes and one for testing purposes. Is it important to split the data into a training and testing data set when designing a K-nn classification model?

A. Yes, because we can't evaluate how well our model performs using data that has already been used to train it

B. Yes, because it is important to have a 75/25 split of the data for K-nn classification models

C. No, because K-nn classification models only require the training data set

D. No, because after splitting, we don't even standardize the testing data set, so there's no point in even having a testing set
Assign your answer to an object called `answer3.1`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).
Which $K$ should we use?
As you learned from the worksheet, we can use cross-validation on the training data set to select which $K$ is most optimal for K-nn classification on our data set.
Question 3.2
{points: 1}
To get all the marks in this question, you will have to:

Create a recipe that uses all predictors, and a model specification with tuning on the number of neighbours, $K$
Perform a 5-fold cross-validation on the training set
Create a workflow analysis with your recipe and model specification, and specify that the tuning should try 10 values of $K$
Collect the metrics from the workflow analysis
Plot $K$ vs the accuracy
Assign this plot to an object called `cross_val_plot`
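As a hedged sketch (not the official solution), the pieces could fit together roughly as follows; names like `knn_recipe` and `knn_results` are made up, `tidymodels` is assumed to be loaded, and the label column is assumed to be `y`:

```r
# Recipe using all pixel columns as predictors. The pixels already share a
# common 0-255 scale, so no scaling steps are added here.
knn_recipe <- recipe(y ~ ., data = training_set)

# K-nn model specification with the number of neighbours marked for tuning.
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = tune()) %>%
  set_engine("kknn") %>%
  set_mode("classification")

# 5-fold cross-validation splits of the training set.
knn_vfold <- vfold_cv(training_set, v = 5, strata = y)

# Workflow analysis: tune over 10 candidate values of K, then collect metrics.
knn_results <- workflow() %>%
  add_recipe(knn_recipe) %>%
  add_model(knn_spec) %>%
  tune_grid(resamples = knn_vfold, grid = 10) %>%
  collect_metrics()

# Plot K vs the cross-validated accuracy estimate.
cross_val_plot <- knn_results %>%
  filter(.metric == "accuracy") %>%
  ggplot(aes(x = neighbors, y = mean)) +
  geom_point() +
  geom_line() +
  labs(x = "Number of neighbours (K)", y = "Accuracy estimate")
cross_val_plot
```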
Question 3.3 Multiple Choice:
{points: 1}
Based on the plot from Question 3.2, what is the best value of $K$?
A. 3
B. 5
C. 2
D. 7
Assign your answer to an object called `answer3.3`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).
4. Let's build our model
Question 4.0
{points: 1}
Now that we have explored our data, separated it into training and testing sets (technically done for you), and applied cross-validation to choose the best $K$, we can build our final model.

First, build your model specification with the best value for $K$. Assign your answer to an object called `mnist_spec`.

Then, pass the model specification and the training data set to the `fit()` function. Assign your answer to an object called `mnist_fit`.
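A sketch under the same assumptions as above (replace `best_k` with the value you chose in Question 3.3):

```r
best_k <- 3  # placeholder; use the K you read off cross_val_plot

mnist_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = best_k) %>%
  set_engine("kknn") %>%
  set_mode("classification")

# Fit the specification on the training set (label column assumed to be y).
mnist_fit <- mnist_spec %>%
  fit(y ~ ., data = training_set)
```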
Question 4.1
{points: 1}
Use your final model to predict on the test dataset and report the accuracy of this prediction.
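One hedged way to do this (the `y` label column and object names are assumptions; `y` must be a factor for the metrics to compute):

```r
# Predict the class of each test observation.
mnist_prediction <- predict(mnist_fit, testing_set)

# Line predictions up against the true labels and pull out the accuracy.
mnist_metrics <- bind_cols(mnist_prediction, testing_set) %>%
  metrics(truth = y, estimate = .pred_class) %>%
  filter(.metric == "accuracy")
mnist_metrics
```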
Question 4.2
{points: 3}
Print out 3 images and true labels from the test set that were predicted correctly. First, create a data frame with the sequence of `mnist_prediction` using `seq()`, the prediction class from `mnist_prediction`, and the label from the `testing_set`. You can select a specific column in a data set using `$`; for example, `dataset$column`.

To find predictions and labels that match between `mnist_prediction` and `testing_set`, use the `filter` function. Sample images using the `sample_n` function.

Finally, use the `show_digit` function we gave you above to print out the images. Scaffolding has been provided for you.
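A hedged sketch of one approach (the name `matches` is made up, and the real scaffolding in the notebook may differ):

```r
# Line up each prediction with its true label and its row position.
matches <- tibble(
  row_index = seq(nrow(mnist_prediction)),
  predicted = mnist_prediction$.pred_class,
  actual    = testing_set$y
) %>%
  filter(predicted == actual) %>%  # keep only correct predictions
  sample_n(3)                      # pick 3 at random

matches  # shows the true labels of the sampled rows

# Display each sampled image with the notebook's show_digit helper.
for (i in matches$row_index) {
  show_digit(slice(testing_data, i))
}
```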
Question 4.3
{points: 3}
Print out 3 images and true labels from the test set that were NOT predicted correctly. For the incorrectly labelled images also print out the predicted labels. If you need help, refer to the instructions in Question 4.2.
Similar to the previous question, use the `show_digit` function we gave you above to print out the images.
Question 4.4 True or False:
{points: 1}
The above images were predicted incorrectly due to messy handwriting. For example, the second image is illegible and actually looks like the letter "U".
Assign your answer to an object called `answer4.4`. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. `"true"` or `"false"`).
Question 4.5
{points: 1}
Looking at the plot from Question 3.2, the accuracy was about 80%. This means that about 20% of the time, the classifier makes a mistake. Could you imagine all the deliveries that would be messed up if, say, Canada Post were to use this classifier? It would be a PR nightmare!

Which of the following is the best way to improve the classifier's accuracy?
A. Manually find and remove the messy handwriting from the dataset
B. Include more data, specifically, messy handwriting examples
C. Specify in the model specification to try more values of $K$ when tuning
D. None of the above
Assign your answer to an object called `answer4.5`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).