Path: blob/master/lessons/lesson_06/code/starter-code/starter-code-7.ipynb
Class 7 - Starter code
Create sample data and fit a model
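A minimal sketch of this step, using scikit-learn's make_regression to fabricate data and ordinary least squares to fit it (the sample data used in class may differ):

```python
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error

# Fabricate a small regression data set: 100 samples, one feature, some noise
X, y = datasets.make_regression(n_samples=100, n_features=1, noise=10.0, random_state=42)

# Fit an ordinary least squares model to the sample data
lm = linear_model.LinearRegression()
lm.fit(X, y)

print('Coefficient:', lm.coef_)
print('MSE:', mean_squared_error(y, lm.predict(X)))
```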
Cross validation
Intro to cross-validation with the bike-share data from last time. We will be modeling casual ridership.
#### Create dummy variables and set outcome (dependent) variable
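A sketch of this step, assuming the bike-share data is in a file named bikeshare.csv with the standard UCI bike-share column names (weathersit, temp, hum, casual); adjust the names to your data:

```python
import pandas as pd

# Assumed file name and columns from the UCI bike-share data set
bikeshare = pd.read_csv('bikeshare.csv')

# Dummy-code the categorical weather situation column
weather = pd.get_dummies(bikeshare['weathersit'], prefix='weather')

# Join the continuous features with the dummies, dropping one dummy
# column to avoid perfect collinearity
modeldata = bikeshare[['temp', 'hum']].join(weather.iloc[:, :-1])

# Outcome (dependent) variable: casual ridership
y = bikeshare['casual']
```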
Create a cross-validation with 5 folds
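One way to sketch the 5-fold loop with scikit-learn's KFold (shown with the current sklearn.model_selection API), reusing modeldata and y from above:

```python
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

kf = KFold(n_splits=5, shuffle=True, random_state=42)
mse_values = []

# Fit a model on each training fold and score it on the held-out fold
for train_index, test_index in kf.split(modeldata):
    lm = linear_model.LinearRegression()
    lm.fit(modeldata.iloc[train_index], y.iloc[train_index])
    preds = lm.predict(modeldata.iloc[test_index])
    mse_values.append(mean_squared_error(y.iloc[test_index], preds))

print('MSE for each fold:', mse_values)
print('Mean of MSE across folds:', np.mean(mse_values))
```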
Check
While the cross-validated approach here generated more overall error, which of the two approaches would predict new data more accurately: the single model or the cross-validated, averaged one? Why?
Answer:
There are ways to improve our model with regularization.
Let's check out the effects on MSE and R2
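For example, a sketch comparing plain OLS against Lasso (L1) and Ridge (L2) on the same data; the alpha value here is an arbitrary placeholder:

```python
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score

# Compare plain OLS against L1 (Lasso) and L2 (Ridge) regularization
for model in [linear_model.LinearRegression(),
              linear_model.Lasso(alpha=0.1),
              linear_model.Ridge(alpha=0.1)]:
    model.fit(modeldata, y)
    preds = model.predict(modeldata)
    print(type(model).__name__,
          '- MSE:', mean_squared_error(y, preds),
          'R2:', r2_score(y, preds))
```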
Figuring out the alphas can be done by "hand"
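A sketch of the by-hand loop over candidate alphas (the range is illustrative):

```python
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error

# Try each alpha "by hand" and watch the error change
alphas = np.logspace(-10, 10, 21)
for a in alphas:
    ridge = linear_model.Ridge(alpha=a)
    ridge.fit(modeldata, y)
    print('Alpha:', a, '- MSE:', mean_squared_error(y, ridge.predict(modeldata)))
```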
Or we can use grid search to make this faster
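The same search, handed to GridSearchCV with 5-fold cross-validation (a sketch; Ridge and the alpha range are carried over from above):

```python
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import GridSearchCV

# Search the same alpha range automatically, with 5-fold cross-validation
grid = GridSearchCV(
    estimator=linear_model.Ridge(),
    param_grid={'alpha': np.logspace(-10, 10, 21)},
    scoring='neg_mean_squared_error',
    cv=5,
)
grid.fit(modeldata, y)
```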
Best score
Mean squared error comes back negative here (scikit-learn maximizes scores, so it reports neg_mean_squared_error), so let's flip the sign to make it positive.
Shows which grid search setup worked best.
Shows all the grid pairings and their performances.
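Putting those three steps together, continuing from the grid object above (cv_results_ is the current name for what older scikit-learn versions exposed as grid_scores_):

```python
# Best score: scikit-learn maximizes, so MSE is reported negative; flip the sign
print('Best MSE:', -grid.best_score_)

# The grid search setup that worked best
print('Best estimator:', grid.best_estimator_)

# All the grid pairings and their performances
for params, mean_score in zip(grid.cv_results_['params'],
                              grid.cv_results_['mean_test_score']):
    print(params, -mean_score)
```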
Gradient Descent
### Bonus: implement a stopping point, similar to what n_iter would do in gradient descent, for when we've reached "good enough"
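A minimal sketch of that idea: gradient descent for simple linear regression with an epsilon-based stopping point. All names and values here are illustrative, not the lesson's solution:

```python
import numpy as np

def gradient_descent(x, y, learning_rate=0.1, epsilon=1e-6, max_iter=1000):
    """Fit y ~ b0 + b1*x by gradient descent, stopping early once the
    improvement in MSE drops below epsilon ("good enough")."""
    b0, b1 = 0.0, 0.0
    prev_mse = float('inf')
    for i in range(max_iter):
        error = (b0 + b1 * x) - y
        mse = np.mean(error ** 2)
        if prev_mse - mse < epsilon:  # the stopping point
            break
        prev_mse = mse
        # Step both coefficients down the gradient of the MSE
        b0 -= learning_rate * 2 * np.mean(error)
        b1 -= learning_rate * 2 * np.mean(error * x)
    return b0, b1, i

# Toy data: y = 2 + 3x plus noise
x = np.random.rand(100)
y = 2 + 3 * x + np.random.randn(100) * 0.1
print(gradient_descent(x, y))
```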
## Demo: Application of Gradient Descent
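The demo's estimator is scikit-learn's SGDRegressor; an untuned run on the bike-share model data might look like this sketch (reusing modeldata and y from earlier):

```python
from sklearn import linear_model
from sklearn.model_selection import cross_val_score

# Untuned stochastic gradient descent, scored the same way as OLS above
sgd = linear_model.SGDRegressor()
scores = cross_val_score(sgd, modeldata, y,
                         scoring='neg_mean_squared_error', cv=5)
print('Untuned SGD mean MSE:', -scores.mean())
```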
### Check: Untuned, how well did gradient descent perform compared to OLS?
Answer:
# Independent Practice: Bike data revisited
There are tons of ways to approach a regression problem. The regularization techniques appended to ordinary least squares optimize the size of the coefficients to best account for error. Gradient descent also introduces a learning rate (how aggressively we solve the problem), epsilon (at what point we say the error margin is acceptable), and iterations (when should we stop, no matter what?).
For this deliverable, our goals are to:
implement the gradient descent approach to our bike-share modeling problem,
show how gradient descent solves and optimizes the solution,
demonstrate the grid_search module!
While exploring the gradient descent regressor object, you'll build a grid search using the stochastic gradient descent estimator for the bike-share data set. Continue with either the model you evaluated last class or the simpler one from today. In particular, be sure to implement the "param_grid" in the grid search to get answers for the following questions (a starter sketch appears after the bonus question below):
With a set of alpha values between 10^-10 and 10^-1, how does the mean squared error change?
Based on the data, we know when to properly use L1 vs. L2 regularization. By using a grid search with l1_ratio values between 0 and 1 (in increments of 0.05), does that statement hold true? If not, did gradient descent have enough iterations?
How do these results change when you alter the learning rate (eta0)?
Bonus: After finishing this exercise, can you see the advantages and disadvantages of using gradient descent?
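A starter sketch for the grid search described above; the parameter ranges follow the questions, and everything else is a placeholder to adapt:

```python
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import GridSearchCV

param_grid = {
    'alpha': np.logspace(-10, -1, 10),       # alphas from 10^-10 to 10^-1
    'l1_ratio': np.arange(0.0, 1.05, 0.05),  # pure L2 (0) through pure L1 (1)
    'eta0': [0.001, 0.01, 0.1],              # learning rates to compare
}

# penalty='elasticnet' is required for l1_ratio to have any effect
grid = GridSearchCV(
    estimator=linear_model.SGDRegressor(penalty='elasticnet'),
    param_grid=param_grid,
    scoring='neg_mean_squared_error',
    cv=5,
)
grid.fit(modeldata, y)

print('Best MSE:', -grid.best_score_)
print('Best parameters:', grid.best_params_)
```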