Train-test Split and Cross-Validation Lab
Authors: Joseph Nelson (DC), Kiefer Katovich (SF)
Review of train/test validation methods
We've discussed overfitting, underfitting, and how to validate the "generalizability" of your models by testing them on unseen data.
In this lab you'll practice two related validation methods:
train/test split
k-fold cross-validation
Train/test split and k-fold cross-validation both serve two useful purposes:
We prevent overfitting by not training the model on all of the data, and
We retain held-out data on which to evaluate our model.
In the case of cross-validation, the model fitting and evaluation is performed multiple times on different train/test splits of the data.
Ultimately we can use the train/test validation framework to compare multiple models on the same dataset. This could be comparisons of two linear models, or of completely different models on the same data.
Instructions
For your independent practice, fit three different models on the Boston housing data. For example, you could pick three different subsets of variables, one or more polynomial models, or any other model that you like.
Start with train/test split validation:
Fix a train/test split of the data
Train each of your models on the training data
Evaluate each of the models on the test data
Rank the models by how well they score on the testing data set.
Then try K-Fold cross-validation:
Perform a k-fold cross validation and use the cross-validation scores to compare your models. Did this change your rankings?
Try a few different K-splits of the data for the same models.
If you're interested, try a variety of response variables. We start with MEDV (the .target attribute from the dataset load method).
1. Clean up any data problems
Load the Boston housing data. Fix any problems, if applicable.
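A minimal loading sketch, assuming you work in pandas. Note that load_boston only exists in older scikit-learn releases (it was removed in version 1.2); with a newer version, read the data from a CSV instead.

```python
# Sketch: load the Boston housing data into a DataFrame and check for
# missing values or other obvious problems. `load_boston` is only
# available in older scikit-learn versions.
import pandas as pd
from sklearn.datasets import load_boston

boston = load_boston()
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df["MEDV"] = boston.target  # response: median home value

print(df.isnull().sum())  # missing values per column
print(df.describe())      # ranges and summary statistics
```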
2. Select 3-4 variables from your dataset to perform a 50/50 train/test split on
Use sklearn.
Score and plot your predictions.
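One possible sketch, assuming the df DataFrame from step 1 and an arbitrary choice of RM, LSTAT, and PTRATIO as predictors:

```python
# 50/50 train/test split on a few hand-picked predictors (the choice of
# RM, LSTAT, and PTRATIO is arbitrary; substitute your own variables).
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = df[["RM", "LSTAT", "PTRATIO"]]
y = df["MEDV"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))

# Plot predictions against actual values on the held-out test set.
plt.scatter(y_test, model.predict(X_test), alpha=0.5)
plt.xlabel("Actual MEDV")
plt.ylabel("Predicted MEDV")
plt.show()
```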
3. Try 70/30 and 90/10
Score and plot.
How do your metrics change?
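A sketch of the comparison, reusing the X, y, and imports from the previous step:

```python
# Score the same model under 50/50, 70/30, and 90/10 splits and compare.
for test_size in (0.5, 0.3, 0.1):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=42)
    r2 = LinearRegression().fit(X_train, y_train).score(X_test, y_test)
    print(f"test_size={test_size}: test R^2 = {r2:.3f}")
```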
4. Try K-fold cross-validation with K between 5 and 10 for your regression.
What seems optimal?
How do your scores change?
What is the variance of the scores like?
Try different folds to get a sense of how this impacts your score.
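A sketch using cross_val_score, again assuming the X and y defined in step 2:

```python
# Mean and spread of cross-validated R^2 for K = 5 through 10.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

for k in range(5, 11):
    scores = cross_val_score(LinearRegression(), X, y, cv=k, scoring="r2")
    print(f"K={k}: mean R^2 = {scores.mean():.3f}, std = {scores.std():.3f}")
```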
5. [Bonus] Optimize the score
Can you optimize your R^2 by selecting the best features and validating the model using either train/test split or K-Folds?
Your code will need to iterate through the different combinations of predictors, cross-validate the current model parameterization, and determine which set of features performed best.
The number of K-folds is up to you.
Hint: the itertools package is useful for combinations and permutations.
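One possible brute-force sketch, assuming the df DataFrame from step 1; the subset sizes searched and the choice of 5 folds are arbitrary:

```python
# Search over feature subsets, scoring each by 5-fold cross-validation.
from itertools import combinations

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

feature_names = [c for c in df.columns if c != "MEDV"]
best_score, best_features = float("-inf"), None

for r in range(1, 5):  # subsets of 1 to 4 predictors
    for subset in combinations(feature_names, r):
        scores = cross_val_score(
            LinearRegression(), df[list(subset)], df["MEDV"], cv=5)
        if scores.mean() > best_score:
            best_score, best_features = scores.mean(), subset

print("Best features:", best_features)
print("Best mean R^2:", round(best_score, 3))
```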
5.1 Can you explain what could be wrong with this approach?
6. [Bonus] Explore another target variable and practice patsy formulas
Can you find another response variable, given a combination of predictors, that can be predicted accurately through the exploration of different predictors in this dataset?
Try out using patsy to construct your target and predictor matrices from formula strings.
Tip: Check out pairplots, coefficients, and Pearson correlation scores.
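A short sketch of the patsy workflow; the response (NOX) and the predictors on the right-hand side of the formula are arbitrary examples, and df is the DataFrame from step 1:

```python
# Build target and predictor matrices from a formula string, then
# cross-validate. patsy adds an intercept column, so fit_intercept=False.
import patsy
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

y, X = patsy.dmatrices("NOX ~ INDUS + DIS + AGE", data=df,
                       return_type="dataframe")
scores = cross_val_score(LinearRegression(fit_intercept=False),
                         X, y.values.ravel(), cv=5)
print("Mean cross-validated R^2:", round(scores.mean(), 3))
```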