Linear Regression
Definition
Linear regression is a statistical approach for modelling the relationship between a dependent variable and a given set of independent variables.
Simple Linear Regression
Simple linear regression is an approach for predicting a response using a single feature.
Why Linear Regression?
The goal is to find the parameters so that the model best fits the data.
How do we determine the best-fit line?
The line for which the error between the predicted values and the observed values is minimum is called the best-fit line or the regression line. These errors are also called residuals.
The residuals can be visualized as the vertical distances from the observed data points to the regression line.
Simple Linear Regression
Let's assume that the two variables are linearly related.
We want to find a linear function that predicts the response value (y) as accurately as possible as a function of the feature or independent variable (x).
x = [9, 10, 11, 12, 10, 9, 9, 10, 12, 11]
y = [10, 11, 14, 13, 15, 11, 12, 11, 13, 15]
Here x is the feature vector, i.e. x = [x_1, x_2, ..., x_n],
and y is the response vector, i.e. y = [y_1, y_2, ..., y_n],
for n observations (in the above example, n = 10).
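The notebook's original plotting cell is not reproduced here; a minimal sketch of the scatter plot, assuming matplotlib, might look like this:

```python
# Minimal sketch: visualize the example data as a scatter plot.
import matplotlib.pyplot as plt

x = [9, 10, 11, 12, 10, 9, 9, 10, 12, 11]
y = [10, 11, 14, 13, 15, 11, 12, 11, 13, 15]

plt.scatter(x, y)
plt.xlabel("x (feature)")
plt.ylabel("y (response)")
plt.title("Observed data")
plt.show()
```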
Now, the task is to find the line which best fits the above scatter plot, so that we can predict the response for any new feature value (i.e. a value of x not present in the dataset). This line is called the regression line.
The equation of the regression line is represented as:

h(x_i) = b_0 + b_1 * x_i

Here, h(x_i) represents the predicted response value for the i-th observation, and b_0 and b_1 are the regression coefficients, representing the y-intercept and the slope of the regression line respectively.
The coefficients can be estimated as:

b_1 = SS_xy / SS_xx
b_0 = ȳ - b_1 * x̄

where x̄ and ȳ are the means of x and y, SS_xy = Σ (x_i - x̄)(y_i - ȳ) is the sum of cross-deviations of y and x, and SS_xx = Σ (x_i - x̄)² is the sum of squared deviations of x.
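As a sketch, the coefficients for the example data can be computed directly from these formulas; NumPy is assumed here, and the variable names are illustrative:

```python
# Estimate b_0 and b_1 for the example data using the formulas above.
import numpy as np

x = np.array([9, 10, 11, 12, 10, 9, 9, 10, 12, 11])
y = np.array([10, 11, 14, 13, 15, 11, 12, 11, 13, 15])

x_mean, y_mean = x.mean(), y.mean()
ss_xy = np.sum((x - x_mean) * (y - y_mean))   # sum of cross-deviations of y and x
ss_xx = np.sum((x - x_mean) ** 2)             # sum of squared deviations of x

b_1 = ss_xy / ss_xx           # slope
b_0 = y_mean - b_1 * x_mean   # y-intercept
print(f"b_0 = {b_0:.3f}, b_1 = {b_1:.3f}")
```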
Multiple linear regression
Multiple linear regression attempts to model the relationship between two or more features and a response by fitting a linear equation to observed data.
It is essentially an extension of simple linear regression.
Consider a dataset with p features (or independent variables) and one response (or dependent variable), containing n rows/observations. The regression line for p features is represented as:

h(x_i) = b_0 + b_1 * x_i1 + b_2 * x_i2 + ... + b_p * x_ip

where h(x_i) is the predicted response value for the i-th observation and b_0, b_1, ..., b_p are the regression coefficients.
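A minimal sketch of fitting all p coefficients at once by ordinary least squares, using NumPy on a small synthetic dataset (the data below is made up purely for illustration and is not from the notebook):

```python
# Fit b_0, b_1, ..., b_p by least squares on a synthetic example.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 2
X = rng.normal(size=(n, p))            # n observations, p features
true_b = np.array([3.0, 1.5, -2.0])    # [b_0, b_1, b_2] used to generate y
y = true_b[0] + X @ true_b[1:] + rng.normal(scale=0.1, size=n)

# Prepend a column of ones so that b_0 is estimated along with b_1..b_p.
X_design = np.column_stack([np.ones(n), X])
b_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("estimated coefficients:", b_hat)
```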
Scikit-Learn
A machine learning library for the Python language.
It contains tools for machine learning algorithms and statistical modelling.
Installation
conda install scikit-learn
Let's build the regression model. First, let's try a model with only one variable: we want to predict the miles per gallon (mpg) by looking at the horsepower of a car.
Common evaluation metrics for linear regression are the mean squared error (MSE) and the R² score.
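The notebook's original code cells are not shown here. A minimal sketch of this step, assuming the classic Auto MPG data as bundled with seaborn (the 'horsepower' and 'mpg' column names and the train/test split are assumptions, so the exact numbers may differ from those reported below):

```python
# Sketch: simple linear regression predicting mpg from horsepower.
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Assumed dataset: seaborn's bundled Auto MPG data; drop rows with missing values.
df = sns.load_dataset("mpg").dropna(subset=["horsepower", "mpg"])

X = df[["horsepower"]]   # feature matrix, shape (n_samples, 1)
y = df["mpg"]            # response vector

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LinearRegression()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("intercept:", model.intercept_, "slope:", model.coef_)
print("MSE:", mean_squared_error(y_test, y_pred))
print("R^2:", r2_score(y_test, y_pred))
```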
Insights
The best possible R² score is 1.0. We get a model with a mean squared error of 28.66 and an R² of 0.59, which is not very good.
Let's add more variables to the model: weight and cylinders (multiple linear regression).
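Continuing from the previous sketch (df, y, and the imports are assumed to be defined in the earlier cell, and the 'weight' and 'cylinders' column names are assumptions), the multi-variable model might look like this:

```python
# Sketch: multiple linear regression with horsepower, weight and cylinders.
# Reuses df, y, train_test_split, LinearRegression and the metric functions
# from the previous cell.
X_multi = df[["horsepower", "weight", "cylinders"]]

X_train, X_test, y_train, y_test = train_test_split(
    X_multi, y, test_size=0.25, random_state=42
)

multi_model = LinearRegression().fit(X_train, y_train)
y_pred = multi_model.predict(X_test)
print("MSE:", mean_squared_error(y_test, y_pred))
print("R^2:", r2_score(y_test, y_pred))
```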
Insights
Now our model is better, with R² = 0.72.