GitHub Repository: UBC-DSCI/dsci-100-assets
Path: blob/master/2020-fall/materials/tutorial_07/tutorial_07.ipynb
Kernel: R

Tutorial 7: Classification (Part II)

Lecture and Tutorial Learning Goals:

After completing this week's lecture and tutorial work, you will be able to:

  • Describe what a test data set is and how it is used in classification.

  • Using R, evaluate classification accuracy using a test data set and appropriate metrics.

  • Execute cross-validation in R to choose the number of neighbours.

  • Identify when it is necessary to scale variables before classification, and do this using R.

  • In a dataset with > 2 attributes, perform k-nearest neighbour classification in R using the tidymodels package to predict the class of a test dataset.

  • Describe advantages and disadvantages of the k-nearest neighbour classification algorithm.

Handwritten Digit Classification using R

Source: https://media.giphy.com/media/UwrdbvJz1CNck/giphy.gif

MNIST is a computer vision dataset that consists of images of handwritten digits like these:

It also includes labels for each image, telling us which digit it is. For example, the labels for the above images are 5, 0, 4, and 1.

In this tutorial, we’re going to train a classifier to look at images and predict what digits they are. Our goal isn’t to train a really elaborate model that achieves state-of-the-art performance, but rather to dip a toe into using classification with pixelated images. As such, we’re going to keep working with the simple K-nearest neighbour classifier we have been exploring in the last two weeks.

Using image data for classification

As mentioned earlier, every MNIST data point has two parts: an image of a handwritten digit and a corresponding label. Both the training set and test set contain images and their corresponding labels.

Each image is 28 pixels by 28 pixels. We can interpret this as a big matrix of numbers:

We can flatten this matrix into a vector of 28x28 = 784 numbers and give it a class label (here 1 for the number one). It doesn’t matter how we flatten the array, as long as we’re consistent between images. From this perspective, the MNIST images are just a bunch of points in a 784-dimensional vector space, with a very rich structure.
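To make the flattening concrete, here is a tiny illustration on a made-up 3 × 3 matrix (a stand-in for the real 28 × 28 images, not part of the assignment): R's as.vector() reads a matrix column by column, which is one consistent choice of order.

# Toy example: flatten a 3x3 "image" into a 9-element vector.
# as.vector() reads column by column; any order works if used consistently.
img <- matrix(1:9, nrow = 3)
img            # the "image" as a matrix
as.vector(img) # the flattened representation: 1 2 3 ... 9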

We do this for every image of the digits we have, creating a data table like the one shown below that we can use for classification. Note that, as in any other classification problem we have seen before, we need many observations for each class. This problem also differs from the first classification problem we encountered (the Wisconsin breast cancer data set) in that we have more than two classes (here we have 10 classes, one for each digit from 0 to 9).

This information is taken from: https://tensorflow.rstudio.com/tensorflow/articles/tutorial_mnist_beginners.html

###
### Run this cell before continuing.
###

library(repr)
library(tidyverse)
library(tidymodels)
source('tests_tutorial_07.R')
source("cleanup_tutorial_07.R")

# functions needed to work with images
# code below sourced from: https://gist.github.com/daviddalpiaz/ae62ae5ccd0bada4b9acd6dbc9008706
# helper function for visualization
show_digit <- function(arr784, col = gray(12:1 / 12), ...) {
  image(matrix(as.matrix(arr784[-785]), nrow = 28)[, 28:1], col = col, ...)
}

Question 1.0 Multiple Choice:
{points: 1}

How many rows and columns does the array of an image have?

A. 784 columns and 1 row

B. 28 columns and 1 row

C. 18 columns and 18 rows

D. 28 columns and 28 rows

Assign your answer to an object called answer1.0. Make sure the correct answer is an uppercase letter and to surround your answer with quotation marks (e.g. "F").

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_1.0()

Question 1.1 Multiple Choice:
{points: 1}

Once we linearize the array, how many rows represent a number?

A. 28

B. 784

C. 1

D. 18

Assign your answer to an object called answer1.1. Make sure the correct answer is an uppercase letter and to surround your answer with quotation marks (e.g. "F").

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_1.1()

2. Exploring the Data

Before we move on to the modeling component, it is always important to take a look at our data and to understand the problem and the structure of the data well. We start this part by loading the images and taking a look at the first rows of the dataset. You can load the data set by running the cell below.

# Load images.
# Run this cell.
training_data <- read_csv('data/mnist_train_small.csv')
testing_data <- read_csv('data/mnist_test_small.csv')

Look at the first 6 rows of training_data. What do you notice?

head(training_data)
dim(training_data)

There are no class labels! This data set has already been split into the X's (which you loaded above) and the labels. In addition, there is an extra "X" column which represents the row number (1, 2, 3...). Keep this in mind for now because we will remove it later on. Now, let's load the labels.

# Run this cell.
training_labels <- read_csv('data/mnist_train_label_small.csv')['y'] %>%
  mutate(y = as_factor(y))
testing_labels <- read_csv('data/mnist_test_label_small.csv')['y'] %>%
  mutate(y = as_factor(y))

Look at the first 6 labels of training_labels using the head() function.

# Use this cell to view the first 6 labels.
# Run this cell.
head(training_labels)
head(testing_labels)

Question 2.0
{points: 1}

How many rows does the training data set have? Note, each row is a different number in the postal code system.

Use nrow(). Note, the testing data set should have fewer rows than the training data set.

Assign your answer to an object called number_of_rows. Make sure your answer is a numeric and so it should not be surrounded by quotation marks.

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
number_of_rows
test_2.0()

Question 2.1
{points: 1}

For multi-class classification with k-nn, it is important for the classes to have about the same number of observations in each class. For example, if 90% of our training set observations were labeled as 2's, then k-nn classification would predict 2 almost every time, and we would get an accuracy score of 90% even though our classifier wasn't really doing a great job.

Use the group_by and summarize functions to get the counts for each group and see whether the data set is balanced across the classes (i.e., has roughly equal numbers of observations for each class). Name the output counts. counts should be a data frame with 2 columns, y and n (the column n should hold the count of observations for each class group).
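As a generic illustration of the pattern (on a made-up tibble, not the MNIST labels; this is not the answer cell):

# Illustration only: count how many rows fall into each class of a toy tibble.
toy <- tibble(y = as_factor(c(0, 0, 1, 1, 1, 2)))
toy %>%
  group_by(y) %>%
  summarize(n = n())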

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
counts
test_2.1()

Question 2.2 True or False:
{points: 1}

The classes are not balanced. Some of them are many times larger or smaller than others.

Assign your answer to an object called answer2.2. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. "true" or "false")

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_2.2()

To view an image in the notebook, you can use the show_digit function (we gave you the code for this function in the first code cell of the notebook; all you have to do to use it is run the cell below). The show_digit function takes the row from the dataset whose image you want to produce, which you can obtain using the slice function.

The code we provide below will show you the image for the observation in the 200th row from the training data set.

# Run this cell to get the image for the 200th row from the training data set.
options(repr.plot.height = 5, repr.plot.width = 5)
show_digit(slice(training_data, 200))

Question 2.3
{points: 3}

Show the image for row 102.

options(repr.plot.height = 5, repr.plot.width = 5)
# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer

If you are unsure as to what number the plot is depicting (because the handwriting is messy) you can use slice to get the label from the training_labels:

# Run this cell to get the training label for the 200th row.
training_labels %>% slice(200)

Question 2.4
{points: 1}

What is the class label for row 102?

Assign your answer to an object called label_102.

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
label_102
test_2.4()

3. Splitting the Data

Question 3.0
{points: 1}

Currently, the data set and labels are split. The tidymodels package needs the data and labels to be married together. Use the bind_cols function to marry the data sets to their respective labels. Call the training data with its respective labels as training_set and the testing data with its respective labels as testing_set. Even though the data set has been split for you already, remember you need to have a training and testing data set when designing a k-nn classification model.

Also, remember in Section 2 that we told you to keep something in mind? To remind you, there is an extra "X" column on the far left which represents the row numbers (1, 2, 3, etc.) in the training_set. This column should not be used for training. Therefore, let's remove this column from the data set.

Hint: You can remove columns in a dataset using the select function and by putting a negative sign in front of the column you want to exclude (e.g. -X).
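As a generic illustration on made-up tibbles (not the MNIST objects), the pattern looks like this:

# Illustration only: glue a label column onto predictors, then drop X.
toy_data   <- tibble(X = 1:3, pixel1 = c(10, 20, 30))
toy_labels <- tibble(y = as_factor(c("a", "b", "a")))
bind_cols(toy_data, toy_labels) %>%
  select(-X)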

# Set the seed. Don't remove this!
set.seed(9999)

#... <- bind_cols(..., ...) %>% # for the training data
#    select(...)
#... <- bind_cols(..., ...)     # for the testing data

# your code here
fail() # No Answer - remove if you provide an answer
test_3.0()

Question 3.1
{points: 1}

We have already split the data into two datasets, one for training purposes and one for testing purposes. Is it important to split the data into a training and testing dataset when designing a knn classification model?

A. Yes because we can't evaluate how good our model is performing with data that has already been used to train our model

B. Yes because it is important to have a 75/25 split of the data for knn classification models

C. No because knn classification models only require the training dataset

D. No because after splitting, we don't even standardize the testing dataset so there's no point in even having a testing set

Assign your answer to an objected called answer3.1. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_3.1()

Which $k$ should we use?

As you learned from the worksheet, we can use cross-validation on the training data set to select the $k$ that is optimal for k-nn classification on our data set.

Question 3.2
{points: 1}

To get all the marks in this question, you will have to do the following (a generic sketch of the whole pattern appears after the list):

  • Create a recipe that uses all predictors and a model specification with tuning on the number of neighbours

  • Perform a 5-fold cross-validation on the training set

  • Create a workflow analysis with your recipe and model specification, and specify that the tuning should try 10 values of $K$

  • Collect the metrics from the workflow analysis

  • Plot $K$ vs. the accuracy

    • Assign this plot to an object called cross_val_plot
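For reference, here is a minimal sketch of this tidymodels pattern, assuming a training_set whose label column is y and the kknn engine used in the worksheets; every object name except cross_val_plot is a placeholder, not a required answer name.

# Sketch only: recipe + tunable k-nn spec + 5-fold CV + workflow + plot.
knn_recipe <- recipe(y ~ ., data = training_set)

knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = tune()) %>%
  set_engine("kknn") %>%
  set_mode("classification")

knn_folds <- vfold_cv(training_set, v = 5, strata = y)

knn_results <- workflow() %>%
  add_recipe(knn_recipe) %>%
  add_model(knn_spec) %>%
  tune_grid(resamples = knn_folds, grid = 10) %>%
  collect_metrics()

accuracies <- knn_results %>%
  filter(.metric == "accuracy")

cross_val_plot <- ggplot(accuracies, aes(x = neighbors, y = mean)) +
  geom_point() +
  geom_line() +
  labs(x = "Number of neighbours (K)", y = "Accuracy estimate")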

# Set the seed. Don't remove this!
set.seed(1234)
options(repr.plot.height = 5, repr.plot.width = 6)

# your code here
fail() # No Answer - remove if you provide an answer
test_3.2()

Question 3.3 Multiple Choice:
{points: 1}

Based on the plot from Question 3.2, what is the best value of $K$?

A. 3

B. 5

C. 2

D. 7

Assign your answer to an object called answer3.3. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_3.3()

4. Let's build our model

Question 4.0
{points: 1}

Now that we have explored our data, separated the data into training and testing sets (this was technically done for you), and applied cross-validation to choose the best $k$, we can build our final model.

First, build your model specification with the best value for $K$. Assign your answer to an object called mnist_spec.

Then, pass the model specification and the training data set to the fit() function. Assign your answer to an object called mnist_fit.
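A minimal sketch of this pattern, assuming (purely for illustration) that the best $K$ was 3; substitute the value you chose in Question 3.3:

# Sketch only: replace 3 with the best K from your cross-validation plot.
mnist_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 3) %>%
  set_engine("kknn") %>%
  set_mode("classification")

mnist_fit <- mnist_spec %>%
  fit(y ~ ., data = training_set)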

# Set the seed. Don't remove this!
set.seed(9999)

# your code here
fail() # No Answer - remove if you provide an answer
test_4.0()

Question 4.1
{points: 1}

Use your final model to predict on the test dataset and report the accuracy of this prediction.
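A minimal sketch of the predict-then-score pattern (mnist_fit is assumed from Question 4.0; .pred_class is the column parsnip's predict() returns for classification):

# Sketch only: predict on the test set, attach the true labels,
# then compute accuracy (and other metrics) with yardstick's metrics().
mnist_prediction <- predict(mnist_fit, testing_set) %>%
  bind_cols(testing_set)

mnist_prediction %>%
  metrics(truth = y, estimate = .pred_class)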

# Set the seed. Don't remove this!
set.seed(9999)

# your code here
fail() # No Answer - remove if you provide an answer
test_4.1()

Question 4.2
{points: 3}

Print out 3 images and true labels from the test set that were predicted correctly. First, create a data frame containing: a row index (a sequence built with seq()), the predicted class from mnist_prediction, and the true label from testing_set. You can select a specific column in a data set using $, for example dataset$column.

To find predictions and labels that match between mnist_prediction and testing_set, use the filter function. Sample 3 images using the sample_n function.

Finally, use the show_digit function we gave you above to print out the images. Scaffolding has been provided for you.
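One hedged way to build the matching table (assuming mnist_prediction from Question 4.1 holds a .pred_class column):

# Sketch only: pair each test row's index with its predicted and true labels,
# keep the rows where the two agree, then sample 3 of them to display.
correct_rows <- tibble(
  index     = seq(1, nrow(testing_set)),
  predicted = mnist_prediction$.pred_class,
  actual    = testing_set$y
) %>%
  filter(predicted == actual) %>%
  sample_n(3)

# Each image can then be shown as in the scaffolding below, e.g.:
# show_digit(slice(testing_data, correct_rows$index[1]))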

# Set the seed. Don't remove this!
set.seed(1000)

# show_digit(slice(..., ...[1, ...]))
# show_digit(slice(..., ...[2, ...]))
# show_digit(slice(..., ...[3, ...]))
options(repr.plot.height = 5, repr.plot.width = 5)

# your code here
fail() # No Answer - remove if you provide an answer

Question 4.3
{points: 3}

Print out 3 images and true labels from the test set that were NOT predicted correctly. For the incorrectly labelled images also print out the predicted labels. If you need help, refer to the instructions in Question 4.2.

Similar to the previous question, use the show_digit function we gave you above to print out the images.

# Set the seed. Don't remove this!
set.seed(3500)
options(repr.plot.height = 5, repr.plot.width = 5)

# your code here
fail() # No Answer - remove if you provide an answer

Question 4.4 True or False:
{points: 1}

The above images were predicted incorrectly due to messy handwriting. For example, the second image is illegible and actually looks like the letter "U".

Assign your answer to an object called answer4.4. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. "true" or "false").

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_4.4()

Question 4.5
{points: 1}

Looking at the plot from Question 3.2, the accuracy was about 80%. This means that about 20% of the time, the classifier makes a mistake. Could you imagine all the deliveries that would be messed up if, say, Canada Post were to use this classifier? It would be a PR nightmare!

Which of the following is the best way to improve the classifier's accuracy?

A. Manually find and remove the messy handwriting from the dataset

B. Include more data, specifically, messy handwriting examples

C. Specify in the model specification to try more values of $K$ when tuning

D. None of the above

Assign your answer to an object called answer4.5. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_4.5()
source("cleanup_tutorial_07.R")