Path: blob/master/2021-fall/materials/tutorial_09/tutorial_09.ipynb
Tutorial 9: Regression Continued
Regression learning objectives:
- Recognize situations where a simple regression analysis would be appropriate for making predictions.
- Explain the k-nearest neighbour (k-nn) regression algorithm and describe how it differs from k-nn classification.
- Interpret the output of a k-nn regression.
- In a dataset with two variables, perform k-nearest neighbour regression in R using tidymodels to predict the values for a test dataset.
- Using R, execute cross-validation to choose the number of neighbours.
- Using R, evaluate k-nn regression prediction accuracy using a test data set and an appropriate metric (e.g., root mean square prediction error).
- In a dataset with > 2 variables, perform k-nn regression in R using tidymodels to predict the values for a test dataset.
- In the context of k-nn regression, compare and contrast goodness of fit and prediction properties (namely RMSE vs RMSPE).
- Describe the advantages and disadvantages of the k-nearest neighbour regression approach.
- Perform ordinary least squares regression in R using tidymodels to predict the values for a test dataset.
- Compare and contrast predictions obtained from k-nearest neighbour regression to those obtained using simple ordinary least squares regression from the same dataset.
- In R, overlay the ordinary least squares regression lines from geom_smooth on a single plot.
Predicting credit card balance
In this worksheet we will work with a simulated data set that contains information that we can use to create a model to predict customer credit card balance. A bank might use such information to predict which customers might be the most profitable to lend to (customers who carry a balance but do not default, for example).
Specifically, we wish to build a model to predict credit card balance (Balance column) based on income (Income column) and credit rating (Rating column).
We access this data set from an R data package that we loaded at the beginning of the worksheet, ISLR. Loading that package gives access to a variety of data sets, including the Credit data set that we will be working with.
Question 1.1
{points: 1}
Select only the columns of data we are interested in using for our prediction (both the predictors and the response variable) and use the as_tibble function to convert it to a tibble (it is currently a base R data frame). Name the modified data frame credit (using a lowercase c).
Note: We could alternatively leave these variables in and use our recipe formula below to specify our predictors and response. But for this worksheet, let's select the relevant columns first.
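A minimal sketch of one way this selection could look, assuming the ISLR and tidyverse packages have already been loaded as described above:

```r
# Keep only the response (Balance) and the two predictors, then
# convert the base R data frame into a tibble
credit <- Credit |>
  select(Balance, Income, Rating) |>
  as_tibble()
```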
Question 1.2
{points: 1}
Before we perform exploratory data analysis, we should create our training and testing data sets. First, split the credit data set, using 60% of the data for training, and set the variable we want to predict as the strata argument. Assign your answer to an object called credit_split.
Assign your training data set to an object called credit_training and your testing data set to an object called credit_testing.
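One possible shape for this split, assuming tidymodels is loaded (the seed value shown is illustrative; use whatever the worksheet specifies):

```r
set.seed(2000)  # illustrative seed for reproducibility

# 60% of rows go to training, stratified on the response
credit_split <- initial_split(credit, prop = 0.6, strata = Balance)
credit_training <- training(credit_split)
credit_testing <- testing(credit_split)
```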
Question 1.3
{points: 1}
Using only the observations in the training data set, create a ggpairs scatterplot of all the columns we are interested in including in our model. Name the plot object credit_eda.
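A sketch of the pairwise plot, assuming the GGally package is available:

```r
library(GGally)

# Pairwise scatterplots, densities, and correlations for the
# training data only (never explore the test set)
credit_eda <- credit_training |>
  ggpairs()
```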
Question 1.4 Multiple Choice:
{points: 1}
Looking at the ggpairs plot above, which of the following statements is incorrect?
A. There is a strong positive relationship between the response variable (Balance) and the Rating predictor
B. There is a strong positive relationship between the two predictors (Income and Rating)
C. There is a strong positive relationship between the response variable (Balance) and the Income predictor
D. None of the above statements are incorrect
Assign your answer to an object called answer1.4. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 1.5
{points: 1}
Now that we have our training data, we will fit a linear regression model.
- Create and assign your linear regression model specification to an object called lm_spec.
- Create a recipe for the model. Assign your answer to an object called credit_recipe.
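The two pieces above could be sketched like this with tidymodels:

```r
# Ordinary least squares specification via the "lm" engine
lm_spec <- linear_reg() |>
  set_engine("lm") |>
  set_mode("regression")

# Recipe: predict Balance from Income and Rating
credit_recipe <- recipe(Balance ~ Income + Rating, data = credit_training)
```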
Question 1.6
{points: 1}
Now that we have our model specification and recipe, let's put them together in a workflow, and fit our simple linear regression model. Assign the fit to an object called credit_fit.
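A sketch of combining the spec and recipe into a fitted workflow:

```r
# Bundle recipe + model spec, then fit on the training data
credit_fit <- workflow() |>
  add_recipe(credit_recipe) |>
  add_model(lm_spec) |>
  fit(data = credit_training)
```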
Question 1.7 Multiple Choice:
{points: 1}
Looking at the slopes/coefficients above from each of the predictors, which of the following mathematical equations is correct for your prediction model?
A.
B.
C.
D.
Assign your answer to an object called answer1.7. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 1.8
{points: 1}
Calculate the RMSE to assess goodness of fit on credit_fit (remember this is how well it predicts on the training data used to fit the model). Return a single numerical value named lm_rmse.
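One way this calculation could be sketched, predicting back onto the training data and pulling out the rmse row of the metrics table:

```r
lm_rmse <- credit_fit |>
  predict(credit_training) |>
  bind_cols(credit_training) |>        # put .pred next to the true Balance
  metrics(truth = Balance, estimate = .pred) |>
  filter(.metric == "rmse") |>
  pull(.estimate)                      # single numeric value
```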
Question 1.9
{points: 1}
Calculate the RMSPE using the test data. Return a single numerical value named lm_rmspe.
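The same pattern as the RMSE calculation, swapped to the test set, gives the RMSPE (note that yardstick still labels the metric "rmse"; calling it RMSPE reflects which data it is computed on):

```r
lm_rmspe <- credit_fit |>
  predict(credit_testing) |>
  bind_cols(credit_testing) |>         # evaluate on data the model never saw
  metrics(truth = Balance, estimate = .pred) |>
  filter(.metric == "rmse") |>
  pull(.estimate)
```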
Question 1.9.1
{points: 3}
Redo this analysis using k-nn regression instead of linear regression. Use set.seed(2000) at the beginning of this code cell to make it reproducible. Use the same predictors and train-test data split as you used for linear regression, and use 5-fold cross-validation to choose K from the range 1-20. Remember to scale and shift your predictors on your training data, and to apply that same standardization to your test data! Assign a single numeric value for the RMSPE of your k-nn model as your answer, and name it knn_rmspe.
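A sketch of the tuning portion, assuming tidymodels (with the kknn engine) is available; doing the standardization inside the recipe is what guarantees the same centering and scaling learned from the training data gets applied to the test data:

```r
set.seed(2000)

# Tunable k-nn regression spec
knn_spec <- nearest_neighbor(weight_func = "rectangular",
                             neighbors = tune()) |>
  set_engine("kknn") |>
  set_mode("regression")

# Standardize predictors as a preprocessing step
knn_recipe <- recipe(Balance ~ Income + Rating, data = credit_training) |>
  step_scale(all_predictors()) |>
  step_center(all_predictors())

# 5-fold cross-validation over K = 1..20
knn_results <- workflow() |>
  add_recipe(knn_recipe) |>
  add_model(knn_spec) |>
  tune_grid(resamples = vfold_cv(credit_training, v = 5, strata = Balance),
            grid = tibble(neighbors = 1:20)) |>
  collect_metrics()

# From here: pick the neighbors value with the lowest mean rmse,
# refit a workflow with that K, then predict on credit_testing
# to compute knn_rmspe
```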
Question 1.9.2
{points: 3}
Discuss which model, linear regression or k-nn regression, gives better predictions and why you think that might be happening.
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
2. Ames Housing Prices
If we take a look at the Business Insider report What do millennials want in a home?, we can see that millennials like newer houses that have their own defined spaces. Today we are going to be looking at housing data to understand how the sale price of a house is determined. Finding highly detailed housing data with final sale prices is very hard; however, researchers from Truman State University have studied and made available a dataset containing multiple variables for the city of Ames, Iowa. The data set describes the sale of individual residential properties in Ames, Iowa from 2006 to 2010. You can read more about the data set here. Today we will be looking at 5 different variables to predict the sale price of a house. These variables are:
- Lot Area: lot_area
- Year Built: year_built
- Basement Square Footage: bsmt_sf
- First Floor Square Footage: first_sf
- Second Floor Square Footage: second_sf
First, load the data with the script given below.
Question 2.1
{points: 3}
Split the data into a train dataset and a test dataset, based on a 70%-30% train-test split. Use set.seed(2019) for the split. Remember that we want to predict the sale_price based on all of the other variables.
Assign the objects to ames_split, ames_training, and ames_testing, respectively.
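A sketch of the split, where the name ames_data for the loaded data frame is an assumption (use whatever name the loading script above actually creates):

```r
set.seed(2019)

# ames_data is a placeholder name for the data frame loaded above
ames_split <- initial_split(ames_data, prop = 0.7, strata = sale_price)
ames_training <- training(ames_split)
ames_testing <- testing(ames_split)
```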
Question 2.2
{points: 3}
Let's start by exploring the training data. Use the ggpairs() function from the GGally package to explore the relationships between the different variables.
Assign your plot object to a variable named answer2.2.
Question 2.3 Multiple Choice:
{points: 1}
Now that we have seen all the relationships between the variables, which of the following variables would not be a strong predictor for sale_price?
A. bsmt_sf
B. year_built
C. first_sf
D. lot_area
E. second_sf
F. It isn't clear from these plots
Assign your answer to an object called answer2.3. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 2.4 - Linear Regression
{points: 3}
Fit a linear regression model using tidymodels with ames_training using all the variables in the data set.
- Create a model specification called lm_spec.
- Create a recipe called ames_recipe.
- Create a workflow with your model spec and recipe, and then create the model fit and name it ames_fit.
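The three steps above could be sketched as follows; the formula shorthand sale_price ~ . uses every remaining column as a predictor:

```r
lm_spec <- linear_reg() |>
  set_engine("lm") |>
  set_mode("regression")

# "." means: use all other variables as predictors
ames_recipe <- recipe(sale_price ~ ., data = ames_training)

ames_fit <- workflow() |>
  add_recipe(ames_recipe) |>
  add_model(lm_spec) |>
  fit(data = ames_training)
```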
Question 2.5 True or False:
{points: 1}
Aside from the intercept, all the variables have a positive relationship with sale_price. This can be interpreted as: as the values of the variables decrease, the prices of the houses increase.
Assign your answer to an object called answer2.5. Make sure your answer is in lowercase letters and is surrounded by quotation marks (e.g. "true" or "false").
Question 2.6
{points: 3}
Looking at the coefficients and intercept produced from the cell block above, write down the equation for the linear model.
Make sure to use correct math typesetting syntax (i.e., surround your answer with dollar signs, $ $).
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
Question 2.7 Multiple Choice:
{points: 1}
Why can we not easily visualize the model above as a line or a plane in a single plot?
A. This is not true, we can actually easily visualize the model
B. The intercept is much larger (6 digits) than the coefficients (single/double digits)
C. There are more than 2 predictors
D. None of the above
Assign your answer to an object called answer2.7. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 2.8
{points: 3}
We need to evaluate how well our model is doing. For this question, calculate the RMSPE (a single numerical value) of the linear regression model using the test data set and assign it to an object named ames_rmspe.
Question 2.9 Multiple Choice:
{points: 1}
Which of the following statements is incorrect?
A. RMSE is a measure of goodness of fit
B. RMSE measures how well the model predicts on data it was trained with
C. RMSPE measures how well the model predicts on data it was not trained with
D. RMSPE measures how well the model predicts on data it was trained with
Assign your answer to an object called answer2.9. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").