Lesson 6 - Starter Code
Explore our mammals dataset
Let's check out a scatter plot of body weight and brain weight.
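A minimal loading-and-plotting sketch, assuming the mammals data ships with the lesson as a CSV with columns bodywt and brainwt (the file name msleep.csv here is an assumption):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('msleep.csv')   # hypothetical file name; use the lesson's data file
df.plot(kind='scatter', x='bodywt', y='brainwt')
plt.show()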
Guided Practice: Using Seaborn to generate single variable linear model plots (15 mins)
Update and complete the code below to use lmplot and display correlations between body weight and two dependent variables: sleep_rem and awake.
Complete below for sleep_rem and awake as a y, with variables you've already used as x.
File "<ipython-input-1-9015c725455f>", line 1
x =
^
SyntaxError: invalid syntax
Introduction: Single Regression Analysis in statsmodels & scikit (10 mins)
Use Statsmodels to make the prediction
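A minimal statsmodels sketch, assuming the mammals data frame df from above (statsmodels drops missing rows by default):

import statsmodels.formula.api as smf

results = smf.ols('brainwt ~ bodywt', data=df).fit()
print(results.summary())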
Repeat in scikit-learn with handy plotting
When modeling with sklearn, you'll use the following base principles (see the sketch after this list):
All sklearn estimators (modeling classes) derive from a common base estimator. This allows you to easily swap estimators without changing much code.
All estimators take a matrix, X, either sparse or dense.
Many estimators also take a vector, y, when working on a supervised machine learning problem. Regressions are supervised learning because we already have examples of y given X.
All estimators have parameters that can be set, allowing customization and finer control over the learning process. The available parameters depend on each estimator's algorithm.
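A minimal sketch of that interface with LinearRegression, assuming df from above (rows with missing values are dropped first, since sklearn won't accept NaNs):

from sklearn import linear_model

data = df.dropna(subset=['bodywt', 'brainwt'])
X = data[['bodywt']]                   # feature matrix (2-D)
y = data['brainwt']                    # target vector
lm = linear_model.LinearRegression()   # estimator parameters could be set here
lm.fit(X, y)
print(lm.coef_, lm.intercept_, lm.score(X, y))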
Demo: Significance is Key (20 mins)
What does our output tell us?
Our output tells us that:
The relationship between bodywt and brainwt isn't random (the p-value approaches 0).
The model explains roughly 87% of the variance in the dataset (the largest errors occur at the largest brain and body sizes).
With this current model, brainwt is roughly bodywt * 0.00096395.
The residuals, or errors in the prediction, are not normally distributed, with outliers on the right. A better fit will have approximately normally distributed errors (see the residual check below).
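A quick way to inspect the residuals, assuming the fitted lm and the X, y from the cell above:

import matplotlib.pyplot as plt

residuals = y - lm.predict(X)
residuals.hist(bins=30)   # right-skewed here, not normal
plt.show()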
Evaluating Fit, Evaluating Sense
Although we know there is a better solution to the model, we should first sanity-check a few other things. For example, given this model, what is an animal's brainwt if its bodywt is 0?
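One way to check, assuming the fitted lm from the sklearn cell above:

print(lm.intercept_)   # the model's predicted brainwt when bodywt is 0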
Interpretation?
Answer:
Guided Practice: Using the LinearRegression object (15 mins)
We learned earlier that the data in its current state does not allow for the best linear regression fit.
With a partner, generate two more models using the log-transformed data to see how this transform changes the model's performance.
Complete the following code to update X and y to match the log-transformed data.
Complete the loop by setting the list to contain one True and one False.
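A possible completion, assuming numpy log transforms and that the list toggles the estimator's fit_intercept parameter (the missing starter cell's intent is an assumption); data and linear_model come from the earlier cell:

import numpy as np

X = np.log(data[['bodywt']])    # log-transformed feature
y = np.log(data['brainwt'])     # log-transformed target
for fit_intercept in [True, False]:
    lm = linear_model.LinearRegression(fit_intercept=fit_intercept)
    lm.fit(X, y)
    print(fit_intercept, lm.score(X, y))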
Which model performed the best? The worst? Why?
Answer:
Advanced Methods!
We will go over different estimators in detail in the future, but check them out in the docs if you're curious (and finish a little early).
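For instance, because all estimators share the same fit/score interface, swapping one in takes almost no code (Ridge and Lasso here are illustrative picks, not part of the lesson):

from sklearn import linear_model

for est in [linear_model.Ridge(), linear_model.Lasso()]:
    est.fit(X, y)
    print(type(est).__name__, est.score(X, y))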
Introduction: Multiple Regression Analysis using Citi Bike data (10 minutes)
In the previous example, one variable explained the variance of another; however, more often than not, we will need multiple variables.
For example, a house's price may be best measured by square feet, but a lot of other variables play a vital role: bedrooms, bathrooms, location, appliances, etc.
For a linear regression, we want these variables to be largely independent of each other, but all of them should help explain the Y variable.
We'll work with bikeshare data to showcase what this means and to explain a concept called multicollinearity.
What is Multicollinearity?
With the bike share data, let's compare three variables: actual temperature, "feel" temperature, and guest ridership.
Our data is already normalized between 0 and 1, so we'll start off with the correlations and modeling.
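A sketch of that first step, assuming the bikeshare data uses the standard column names temp (actual temperature), atemp ("feel" temperature), and casual (guest riders); the file name bikeshare.csv and those column names are assumptions:

bike = pd.read_csv('bikeshare.csv')   # hypothetical file name
print(bike[['temp', 'atemp', 'casual']].corr())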
What does the correlation matrix explain?
Answer:
We can measure this effect in the coefficients:
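For example, fitting with both temperature columns (same assumed names as above):

from sklearn import linear_model

lm = linear_model.LinearRegression()
lm.fit(bike[['temp', 'atemp']], bike['casual'])
print(lm.coef_)   # unstable, offsetting coefficients from the collinear pair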
Interpretation?
Answer:
What happens if we use a second variable that isn't highly correlated with temperature, like humidity?
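A sketch, assuming the humidity column is named hum:

lm.fit(bike[['temp', 'hum']], bike['casual'])
print(lm.coef_)   # more stable coefficients from uncorrelated features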
Guided Practice: Multicollinearity with dummy variables (15 mins)
A similar effect can come from a feature set that forms a singular matrix, i.e., one whose columns have an exact linear relationship (for example, a full set of dummy columns whose values sum to 1 in every row).
Run through the following code on your own.
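A sketch standing in for that code, assuming a categorical weather column named weathersit (lm comes from the cell above):

weather = pd.get_dummies(bike['weathersit'], prefix='weather', dtype=float)
lm.fit(weather, bike['casual'])              # all dummy columns: every row sums to 1
print(lm.coef_)
lm.fit(weather.iloc[:, 1:], bike['casual'])  # drop one reference column
print(lm.coef_)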
What happens to the coefficients when you include all weather situations instead of just including all except one?
Similar in Statsmodels
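The same comparison through statsmodels (again a sketch with the assumed names):

import statsmodels.api as sm

model = sm.OLS(bike['casual'], sm.add_constant(weather)).fit()
print(model.summary())   # note the multicollinearity warning in the summary notes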
What's the interpretation? Do you want to keep all your dummy variables or drop one? Why?
Answer:
Guided Practice: Combining non-correlated features into a better model (15 mins)
With a partner, complete this code together and visualize the correlations of all the numerical features built into the dataset (a sketch follows the list below).
We want to:
Add the three significant weather situations into our current model.
Find two more features that are not correlated with current features, but could be strong indicators for predicting guest riders.
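One possible completion (the feature picks here are illustrative assumptions, not the lesson's answer); lm and weather come from the cells above:

print(bike.corr(numeric_only=True))   # inspect correlations among numeric features

X = pd.concat([bike[['temp', 'hum']], weather.iloc[:, 1:]], axis=1)
lm.fit(X, bike['casual'])
print(lm.score(X, bike['casual']))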
Independent Practice: Building models for other y variables (25 minutes)
We've completed a model together that explains casual guest riders. Now it's your turn to build another model, using a different y variable: registered riders (a starting sketch follows the list below).
Pay attention to:
the distribution of riders (should we rescale the data?)
checking correlations with variables and registered riders
having a feature space (our matrix) with low multicollinearity
model complexity vs explanation of variance: at what point do features in a model stop improving r-squared?
the linear assumption -- given all feature values being 0, should we have no ridership? negative ridership? positive ridership?
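A possible starting point, assuming the column name registered:

y = bike['registered']
print(y.describe())   # check the distribution before deciding whether to rescale
print(bike.corr(numeric_only=True)['registered'].sort_values())   # correlation check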
Bonus
Which variables would make sense to dummy (because they are categorical, not continuous)?
What features might explain ridership but aren't included in the data set?
Is there a way to build these using pandas and the features available?
Outcomes
If your model at least improves upon the original model and the explanatory effects (coefficients) make sense, consider this a complete task.