Path: blob/master/07_linear_models/01_linear_regression_intro.ipynb
Linear Regression - Introduction
Linear regression relates a continuous response (dependent) variable to one or more predictors (features, independent variables), using the assumption that the relationship is linear in nature:
The relationship between each feature and the response is a straight line when we keep other features constant.
The slope of this line does not depend on the values of the other variables.
The effects of each variable on the response are additive (but we can include new variables that represent the interaction of two variables).
In other words, the model assumes that the response variable can be explained or predicted by a linear combination of the features, except for random deviations from this linear relationship.
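In standard notation, with $k$ features $x_1, \dots, x_k$ (generic symbols for illustration, not tied to the notebook's variable names), this linear combination reads:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + \epsilon$$

where $\epsilon$ captures the random deviations from the linear relationship.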
Imports & Settings
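The original import cell is not reproduced here; a minimal set of imports that would support the sketches below might look like this (the plotting style and seed value are assumptions):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

sns.set_style('whitegrid')   # plotting style is an arbitrary choice
np.random.seed(42)           # seed chosen for reproducibility, not the notebook's original value
```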
Simple Regression
Generate random data
Our linear model with a single independent variable assumes the following form:

$$y = \beta_0 + \beta_1 x + \epsilon$$

The error term $\epsilon$ accounts for the deviations or errors that we will encounter when our data do not actually fit a straight line. When $\epsilon$ materializes, that is, when we run a model of this type on actual data, the errors are called residuals.
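A sketch of a data-generating process for this model; the intercept, slope, noise level, and sample size are illustrative assumptions rather than the notebook's original values:

```python
import numpy as np

# Assumed true parameters of the simulated linear relationship
beta_0, beta_1 = 50, 2           # intercept and slope
n = 100                          # number of observations

x = np.linspace(-5, 50, n)                       # single feature
epsilon = np.random.normal(scale=20, size=n)     # random deviations
y = beta_0 + beta_1 * x + epsilon                # response per the model above
```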
Estimate a simple regression with statsmodels
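A sketch of the estimation step, assuming the arrays `x` and `y` from the data-generation sketch above; `add_constant` appends the intercept column that `OLS` does not include by default:

```python
import statsmodels.api as sm

X = sm.add_constant(x)        # design matrix with an intercept column
model = sm.OLS(y, X).fit()    # ordinary least squares fit
print(model.summary())        # regression summary discussed below
```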
The upper part of the summary displays the dataset characteristics, namely the estimation method, the number of observations and parameters, and indicates that standard error estimates do not account for heteroskedasticity.
The middle panel shows the coefficient values that closely reflect the artificial data generating process. We can confirm that the estimates displayed in the middle of the summary result can be obtained using the OLS formula derived previously:
Verify calculation
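A sketch of the closed-form check: for a single regressor, the OLS slope is the ratio of the sample covariance of `x` and `y` to the sample variance of `x`, and the intercept follows from the sample means (this reuses the `x`, `y`, and `model` names assumed above):

```python
import numpy as np

# Closed-form OLS estimates for a single regressor
beta_1_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # slope = cov(x, y) / var(x)
beta_0_hat = y.mean() - beta_1_hat * x.mean()         # intercept from the sample means

print(beta_0_hat, beta_1_hat)   # should match model.params from the summary
```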
Display model & residuals
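One way to visualize the fitted line and the residuals, assuming the same `x`, `y`, and `model` objects (the plot layout and styling are arbitrary):

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(ncols=2, figsize=(10, 4))

# Fitted regression line against the data
axes[0].scatter(x, y, s=10, label='data')
axes[0].plot(x, model.fittedvalues, c='k', label='fitted line')
axes[0].set_title('Model fit')
axes[0].legend()

# Residuals: observed minus fitted values
axes[1].scatter(x, model.resid, s=10)
axes[1].axhline(0, c='k', lw=1)
axes[1].set_title('Residuals')

plt.tight_layout()
plt.show()
```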
Multiple Regression
For two independent variables, the model simply changes as follows:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$$
Generate new random data
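A sketch of a two-feature data-generating process; the grid layout (25 × 25 = 625 points, matching the observation count mentioned below), the coefficients, and the noise level are assumptions:

```python
import numpy as np

# Two features on a regular grid (25 x 25 = 625 observations)
x1, x2 = np.meshgrid(np.linspace(-10, 10, 25), np.linspace(-10, 10, 25))
x1, x2 = x1.ravel(), x2.ravel()

# Assumed true coefficients plus random noise
y = 10 + 3 * x1 - 2 * x2 + np.random.normal(scale=5, size=x1.size)
```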
Estimate multiple regression model with statsmodels
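The estimation step mirrors the simple case, stacking both features into the design matrix (this uses the `x1`, `x2`, and `y` names assumed above; `multi_model` is likewise a name chosen here):

```python
import numpy as np
import statsmodels.api as sm

X = np.column_stack([x1, x2])    # two-feature design matrix
X = sm.add_constant(X)           # add the intercept column
multi_model = sm.OLS(y, X).fit()
print(multi_model.summary())     # summary panels discussed below
```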
The upper right part of the panel displays the goodness-of-fit measures just discussed, alongside the F-test, which rejects the hypothesis that all coefficients are jointly zero, that is, that the model is irrelevant. Similarly, the t-statistics indicate that the intercept and both slope coefficients are, unsurprisingly, highly significant.
The bottom part of the summary contains the residual diagnostics. The left panel displays skew and kurtosis, which are used to test the normality hypothesis. Both the Omnibus and the Jarque-Bera tests fail to reject the null hypothesis that the residuals are normally distributed. The Durbin-Watson statistic tests for serial correlation in the residuals; its value near 2 means that, given 2 parameters and 625 observations, we fail to reject the hypothesis of no serial correlation.
Lastly, the condition number provides evidence about multicollinearity: it is the ratio of the square roots of the largest and the smallest eigenvalue of the design matrix that contains the input data. A value above 30 suggests that the regression may have significant multicollinearity.
Verify computation
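A sketch of the matrix-form check: the OLS coefficient vector solves the normal equations, $\hat{\beta} = (X'X)^{-1}X'y$ (this reuses `X`, `y`, and `multi_model` from above):

```python
import numpy as np

# Solve the normal equations directly rather than inverting X'X explicitly
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

print(beta_hat)   # should match multi_model.params
```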
Save output as image
Display model & residuals
The following diagram illustrates the hyperplane fitted by the model to the randomly generated data points.
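A sketch of such a 3D visualization using matplotlib (the grid resolution, transparency, and labels are assumptions, and the variable names continue from the sketches above):

```python
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(projection='3d')

# Observed data points
ax.scatter(x1, x2, y, s=5, alpha=.5)

# Evaluate the fitted hyperplane on a grid spanning the feature ranges
g1, g2 = np.meshgrid(np.linspace(x1.min(), x1.max(), 25),
                     np.linspace(x2.min(), x2.max(), 25))
b0, b1, b2 = multi_model.params
ax.plot_surface(g1, g2, b0 + b1 * g1 + b2 * g2, alpha=.3)

ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$y$')
plt.show()
```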
Additional diagnostic tests
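The specific tests run in the notebook are not reproduced here; as a sketch, two standard residual diagnostics available in statsmodels could be applied as follows:

```python
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

# Breusch-Pagan test for heteroskedasticity of the residuals
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(multi_model.resid,
                                                    multi_model.model.exog)
print(f'Breusch-Pagan LM p-value: {lm_pval:.4f}')

# Durbin-Watson statistic for serial correlation (values near 2 indicate none)
print(f'Durbin-Watson: {durbin_watson(multi_model.resid):.4f}')
```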
Stochastic Gradient Descent Regression
The sklearn library includes an SGDRegressor model in its linear_model module. To learn the parameters of the same model using this method, we first need to standardize the data because the gradient is sensitive to the scale.
Prepare data
The gradient is sensitive to scale, and so is SGDRegressor: use StandardScaler or scale to adjust the features.
We use StandardScaler() for this purpose: it computes the mean and the standard deviation of each input variable during the fit step, and then subtracts the mean and divides by the standard deviation during the transform step; both steps can be conveniently combined in a single fit_transform() call:
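A sketch of the scaling step, assuming the `x1`, `x2`, and `y` arrays from the multiple-regression example; note that only the raw features are standardized, since the intercept is handled by the regressor itself:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

features = np.column_stack([x1, x2])        # raw features, without an intercept column

scaler = StandardScaler()
X_scaled = scaler.fit_transform(features)   # fit() learns mean/std, transform() standardizes
```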
Configure SGDRegressor
Then we instantiate the SGDRegressor using the default values except for a random_state setting to facilitate replication:
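A sketch of the configuration; apart from random_state (the value 42 is an arbitrary choice), the defaults are kept:

```python
from sklearn.linear_model import SGDRegressor

# Default squared-error loss and learning-rate schedule; fixed seed for replication
sgd = SGDRegressor(random_state=42)
```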
Fit Model
Now we can fit the sgd model, create the in-sample predictions for both the OLS and the sgd models, and compute the root mean squared error for each:
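A sketch of this final step, assuming `X_scaled`, `y`, and `sgd` from above; the OLS benchmark is re-estimated on the standardized features so the two sets of in-sample predictions are directly comparable:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error

# OLS benchmark on the same standardized features
ols_scaled = sm.OLS(y, sm.add_constant(X_scaled)).fit()

# Fit the SGD model and create in-sample predictions for both models
sgd.fit(X=X_scaled, y=y)
pred_sgd = sgd.predict(X_scaled)
pred_ols = ols_scaled.predict(sm.add_constant(X_scaled))

# Root mean squared error of each set of in-sample predictions
print('RMSE (SGD):', np.sqrt(mean_squared_error(y, pred_sgd)))
print('RMSE (OLS):', np.sqrt(mean_squared_error(y, pred_ols)))
```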
As expected, both models yield essentially the same result. We will now take on a more ambitious project, using linear regression to estimate a multi-factor asset pricing model.