UBC-DSCI
GitHub Repository: UBC-DSCI/dsci-100-assets
Path: blob/master/2021-fall/materials/tutorial_09/tutorial_09.ipynb
Kernel: R

Tutorial 9: Regression Continued

Regression learning objectives:

  • Recognize situations where a simple regression analysis would be appropriate for making predictions.

  • Explain the $k$-nearest neighbour ($k$-nn) regression algorithm and describe how it differs from $k$-nn classification.

  • Interpret the output of a $k$-nn regression.

  • In a dataset with two variables, perform $k$-nearest neighbour regression in R using tidymodels to predict the values for a test dataset.

  • Using R, execute cross-validation to choose the number of neighbours.

  • Using R, evaluate $k$-nn regression prediction accuracy using a test data set and an appropriate metric (e.g., root mean square prediction error).

  • In a dataset with > 2 variables, perform $k$-nn regression in R using tidymodels to predict the values for a test dataset.

  • In the context of $k$-nn regression, compare and contrast goodness of fit and prediction properties (namely RMSE vs RMSPE).

  • Describe advantages and disadvantages of the $k$-nearest neighbour regression approach.

  • Perform ordinary least squares regression in R using tidymodels to predict the values for a test dataset.

  • Compare and contrast predictions obtained from $k$-nearest neighbour regression to those obtained using simple ordinary least squares regression from the same dataset.

  • In R, overlay the ordinary least squares regression lines from geom_smooth on a single plot.

### Run this cell before continuing.
library(tidyverse)
library(testthat)
library(digest)
library(repr)
library(tidymodels)
library(GGally)
library(ISLR)
options(repr.matrix.max.rows = 6)
source("tests_tutorial_09.R")
source("cleanup_tutorial_09.R")

1. Predicting credit card balance

Source: https://media.giphy.com/media/LCdPNT81vlv3y/giphy-downsized-large.gif

In this tutorial we will work with a simulated data set that contains information we can use to build a model to predict a customer's credit card balance. A bank might use such a model to predict which customers are likely to be the most profitable to lend to (for example, customers who carry a balance but do not default).

Specifically, we wish to build a model to predict credit card balance (Balance column) based on income (Income column) and credit rating (Rating column).

We access this data set from an R data package that we loaded at the beginning of this tutorial, ISLR. Loading that package gives access to a variety of data sets, including the Credit data set that we will be working with.

Credit

Question 1.1
{points: 1}

Select only the columns of data we are interested in using for our prediction (both the predictors and the response variable) and use the as_tibble function to convert it to a tibble (it is currently a base R data frame). Name the modified data frame credit (using a lowercase c).

Note: We could alternatively just leave these variables in and use our recipe formula below to specify our predictors and response. But for this worksheet, let's select the relevant columns first.
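
One possible sketch of this step (assuming the Credit data frame from ISLR and the column names given in the prompt; your own pipeline may differ):

credit <- Credit %>%
    select(Balance, Income, Rating) %>%  # response plus the two predictors
    as_tibble()                          # convert the base R data frame to a tibble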

# your code here
fail() # No Answer - remove if you provide an answer
credit
test_1.1()

Question 1.2
{points: 1}

Before we perform exploratory data analysis, we should create our training and testing data sets. First, split the credit data set. Use 60% of the data for training and set the variable we want to predict (Balance) as the strata argument. Assign your answer to an object called credit_split.

Assign your training data set to an object called credit_training and your testing data set to an object called credit_testing.
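
A minimal sketch using rsample's initial_split (the proportion and strata choice follow the prompt; treat this as one possible phrasing, not the required one):

credit_split <- initial_split(credit, prop = 0.60, strata = Balance)
credit_training <- training(credit_split)
credit_testing <- testing(credit_split)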

set.seed(2000)
# your code here
fail() # No Answer - remove if you provide an answer
test_1.2()

Question 1.3
{points: 1}

Using only the observations in the training data set, create a ggpairs scatterplot of all the columns we are interested in including in our model. Name the plot object credit_eda.
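
A minimal sketch using GGally's ggpairs (plotting only the training data, as the prompt requires):

credit_eda <- ggpairs(credit_training)  # pairwise plots of all selected columns
credit_eda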

# your code here
fail() # No Answer - remove if you provide an answer
credit_eda
test_1.3()

Question 1.4 Multiple Choice:
{points: 1}

Looking at the ggpairs plot above, which of the following statements is incorrect?

A. There is a strong positive relationship between the response variable (Balance) and the Rating predictor

B. There is a strong positive relationship between the two predictors (Income and Rating)

C. There is a strong positive relationship between the response variable (Balance) and the Income predictor

D. None of the above statements are incorrect

Assign your answer to an object called answer1.4. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# your code here
fail() # No Answer - remove if you provide an answer
answer1.4
test_1.4()

Question 1.5
{points: 1}

Now that we have our training data, we will fit a linear regression model.

  • Create and assign your linear regression model specification to an object called lm_spec.

  • Create a recipe for the model. Assign your answer to an object called credit_recipe.
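
A sketch of what the specification and recipe might look like (formula and engine choices follow the prompt; this is one possible phrasing, not the only acceptable one):

# linear regression model specification using the "lm" engine
lm_spec <- linear_reg() %>%
    set_engine("lm") %>%
    set_mode("regression")

# recipe: predict Balance from Income and Rating using the training data
credit_recipe <- recipe(Balance ~ Income + Rating, data = credit_training)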

set.seed(2020) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
print(lm_spec)
print(credit_recipe)
test_1.5()

Question 1.6
{points: 1}

Now that we have our model specification and recipe, let's put them together in a workflow, and fit our simple linear regression model. Assign the fit to an object called credit_fit.
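
One possible workflow sketch (assuming the lm_spec and credit_recipe objects from Question 1.5):

credit_fit <- workflow() %>%
    add_recipe(credit_recipe) %>%
    add_model(lm_spec) %>%
    fit(data = credit_training)  # fit the simple linear regression on the training set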

set.seed(2020) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
credit_fit
test_1.6()

Question 1.7 Multiple Choice:
{points: 1}

Looking at the intercept and the coefficients for each of the predictors above, which of the following mathematical equations is correct for your prediction model?

A. $credit\: card\: balance = -531.116 - 7.960*income + 3.985*credit\: card\: rating$

B. $credit\: card\: balance = -531.116 + 3.985*income - 7.960*credit\: card\: rating$

C. $credit\: card\: balance = 531.116 - 7.960*income - 3.985*credit\: card\: rating$

D. $credit\: card\: balance = 531.116 - 3.985*income + 7.960*credit\: card\: rating$

Assign your answer to an object called answer1.7. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# your code here
fail() # No Answer - remove if you provide an answer
answer1.7
test_1.7()

Question 1.8
{points: 1}

Calculate the $RMSE$ to assess goodness of fit on credit_fit (remember this is how well it predicts on the training data used to fit the model). Return a single numerical value named lm_rmse.

set.seed(2020) # DO NOT REMOVE
#... <- credit_fit %>%
#    predict(...) %>%
#    bind_cols(...) %>%
#    ...(truth = ..., estimate = ...) %>%
#    filter(.metric == ...) %>%
#    select(...) %>%
#    pull()
# your code here
fail() # No Answer - remove if you provide an answer
lm_rmse
test_1.8()

Question 1.9
{points: 1}

Calculate the $RMSPE$ using the test data. Return a single numerical value named lm_rmspe.
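
A hedged sketch of computing $RMSPE$ on the held-out test set (it mirrors the scaffold from Question 1.8, but uses credit_testing instead of the training data):

lm_rmspe <- credit_fit %>%
    predict(credit_testing) %>%            # predictions on data the model never saw
    bind_cols(credit_testing) %>%
    metrics(truth = Balance, estimate = .pred) %>%
    filter(.metric == "rmse") %>%
    select(.estimate) %>%
    pull()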

set.seed(2020) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
lm_rmspe
test_1.9()

Question 1.9.1
{points: 3}

Redo this analysis using $k$-nn regression instead of linear regression. Use set.seed(2000) at the beginning of this code cell to make it reproducible. Use the same predictors and train-test data split as you used for linear regression, and use 5-fold cross-validation to choose $k$ from the range 1-20. Remember to scale and centre your predictors using your training data, and to apply that same standardization to your test data! Assign a single numeric value for the $RMSPE$ of your $k$-nn model as your answer, and name it knn_rmspe.
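
If you are unsure where to start, here is a rough sketch of one possible cross-validation pipeline (object names such as knn_results and best_k are illustrative only; adapt everything to your own code):

# recipe with centring and scaling learned from the training data
knn_recipe <- recipe(Balance ~ Income + Rating, data = credit_training) %>%
    step_scale(all_predictors()) %>%
    step_center(all_predictors())

# k-nn regression specification with the number of neighbours left to be tuned
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = tune()) %>%
    set_engine("kknn") %>%
    set_mode("regression")

credit_vfold <- vfold_cv(credit_training, v = 5, strata = Balance)

knn_results <- workflow() %>%
    add_recipe(knn_recipe) %>%
    add_model(knn_spec) %>%
    tune_grid(resamples = credit_vfold, grid = tibble(neighbors = 1:20)) %>%
    collect_metrics()

# choose the k with the lowest cross-validated RMSE
best_k <- knn_results %>%
    filter(.metric == "rmse") %>%
    arrange(mean) %>%
    slice(1) %>%
    pull(neighbors)

# refit on the full training set with the chosen k
knn_fit <- workflow() %>%
    add_recipe(knn_recipe) %>%
    add_model(nearest_neighbor(weight_func = "rectangular", neighbors = best_k) %>%
                  set_engine("kknn") %>%
                  set_mode("regression")) %>%
    fit(data = credit_training)

# RMSPE on the test set, as in Question 1.9
knn_rmspe <- knn_fit %>%
    predict(credit_testing) %>%
    bind_cols(credit_testing) %>%
    metrics(truth = Balance, estimate = .pred) %>%
    filter(.metric == "rmse") %>%
    pull(.estimate)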

set.seed(2000) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
knn_rmspe

Question 1.9.2
{points: 3}

Discuss which model, linear regression versus $k$-nn regression, gives better predictions and why you think that might be happening.

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

2. Ames Housing Prices

Source: https://media.giphy.com/media/xUPGGuzpmG3jfeYWIg/giphy.gif

If we take a look at the Business Insider report What do millennials want in a home?, we can see that millennials like newer houses that have their own defined spaces. Today we are going to look at housing data to understand how the sale price of a house is determined. Finding highly detailed housing data that includes final sale prices is very hard; however, researchers from Truman State University have studied and made available a data set containing multiple variables for the city of Ames, Iowa. The data set describes the sale of individual residential properties in Ames, Iowa from 2006 to 2010. You can read more about the data set here. Today we will be looking at 5 different variables to predict the sale price of a house. These variables are:

  • Lot Area: lot_area

  • Year Built: year_built

  • Basement Square Footage: bsmt_sf

  • First Floor Square Footage: first_sf

  • Second Floor Square Footage: second_sf

First, load the data with the script given below.

# run this cell
ames_data <- read_csv('data/ames.csv', col_types = cols()) %>%
    select(lot_area = Lot.Area,
           year_built = Year.Built,
           bsmt_sf = Total.Bsmt.SF,
           first_sf = `X1st.Flr.SF`,
           second_sf = `X2nd.Flr.SF`,
           sale_price = SalePrice) %>%
    filter(!is.na(bsmt_sf))
ames_data

Question 2.1
{points: 3}

Split the data into a training data set and a testing data set, based on a 70%-30% train-test split. Remember that we want to predict sale_price based on all of the other variables.

Assign the objects to ames_split, ames_training, and ames_testing, respectively.

Use 2019 as your seed for the split.

set.seed(2019) # DO NOT CHANGE!
# your code here
fail() # No Answer - remove if you provide an answer
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create objects named ames_split, ames_training and ames_testing', {
    expect_true(exists("ames_split"))
    expect_true(exists("ames_training"))
    expect_true(exists("ames_testing"))
})

Question 2.2
{points: 3}

Let's start by exploring the training data. Use the ggpairs() function from the GGally package to explore the relationships between the different variables.

Assign your plot object to a variable named answer2.2.

set.seed(2020) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
answer2.2
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create a plot named answer2.2', {
    expect_true(exists("answer2.2"))
})

Question 2.3 Multiple Choice:
{points: 1}

Now that we have seen all the relationships between the variables, which of the following variables would not be a strong predictor for sale_price?

A. bsmt_sf

B. year_built

C. first_sf

D. lot_area

E. second_sf

F. It isn't clear from these plots

Assign your answer to an object called answer2.3. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# your code here
fail() # No Answer - remove if you provide an answer
answer2.3
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create an object called answer2.3', {
    expect_true(exists('answer2.3'))
})

Question 2.4 - Linear Regression
{points: 3}

Fit a linear regression model with tidymodels to ames_training, using all of the other variables in the data set as predictors.

  • create a model specification called lm_spec

  • create a recipe called ames_recipe

  • create a workflow with your model spec and recipe, and then create the model fit and name it ames_fit
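
One way this could look (a sketch only; the formula sale_price ~ . uses every other column of ames_training as a predictor):

lm_spec <- linear_reg() %>%
    set_engine("lm") %>%
    set_mode("regression")

ames_recipe <- recipe(sale_price ~ ., data = ames_training)

ames_fit <- workflow() %>%
    add_recipe(ames_recipe) %>%
    add_model(lm_spec) %>%
    fit(data = ames_training)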

set.seed(2020) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
ames_fit
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create an object named lm_spec', {
    expect_true(exists("lm_spec"))
})
test_that('Did not create an object named ames_recipe', {
    expect_true(exists("ames_recipe"))
})
test_that('Did not create an object named ames_fit', {
    expect_true(exists("ames_fit"))
})

Question 2.5 True or False:
{points: 1}

Aside from the intercept, all of the variables have a positive relationship with sale_price. This can be interpreted to mean that as the values of the variables decrease, the prices of the houses increase.

Assign your answer to an object called answer2.5. Make sure your answer is in lowercase letters and is surrounded by quotation marks (e.g. "true" or "false").

# your code here
fail() # No Answer - remove if you provide an answer
answer2.5
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create an object named answer2.5', {
    expect_true(exists("answer2.5"))
})
# run this cell
ames_fit$fit$fit$fit$coefficients

Question 2.6
{points: 3}

Looking at the coefficients and intercept produced by the code cell above, write down the equation for the linear model.

Make sure to use correct math typesetting syntax (i.e., surround your answer with dollar signs, e.g., $a = b$).
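
For example, a generic template (with placeholder coefficients $\beta_0, \dots, \beta_5$ standing in for your fitted values) could be written as:

$sale\_price = \beta_0 + \beta_1 \cdot lot\_area + \beta_2 \cdot year\_built + \beta_3 \cdot bsmt\_sf + \beta_4 \cdot first\_sf + \beta_5 \cdot second\_sf$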

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

Question 2.7 Multiple Choice:
{points: 1}

Why can we not easily visualize the model above as a line or a plane in a single plot?

A. This is not true, we can actually easily visualize the model

B. The intercept is much larger (6 digits) than the coefficients (single/double digits)

C. There are more than 2 predictors

D. None of the above

Assign your answer to an object called answer2.7. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# your code here
fail() # No Answer - remove if you provide an answer
answer2.7
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create an object named answer2.7', {
    expect_true(exists("answer2.7"))
})

Question 2.8
{points: 3}

We need to evaluate how well our model is doing. For this question, calculate the $RMSPE$ (a single numerical value) of the linear regression model using the test data set and assign it to an object named ames_rmspe.

set.seed(2020) # DO NOT REMOVE
# your code here
fail() # No Answer - remove if you provide an answer
ames_rmspe
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create an object named ames_rmspe', {
    expect_true(exists("ames_rmspe"))
})

Question 2.9 Multiple Choice:
{points: 1}

Which of the following statements is incorrect?

A. $RMSE$ is a measure of goodness of fit

B. $RMSE$ measures how well the model predicts on data it was trained with

C. $RMSPE$ measures how well the model predicts on data it was not trained with

D. $RMSPE$ measures how well the model predicts on data it was trained with

Assign your answer to an object called answer2.9. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").

# your code here
fail() # No Answer - remove if you provide an answer
answer2.9
# We check that you've created objects with the right names below
# But all other tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.
test_that('Did not create an object named answer2.9', {
    expect_true(exists("answer2.9"))
})
source("cleanup_tutorial_09.R")