Modelling using Linear Regression
Definition
Linear regression is a statistical approach for modelling the relationship between a dependent variable and a given set of independent variables.
Simple Linear Regression
Simple linear regression is an approach for predicting a response using a single feature.
WHY Linear Regression?
To find the parameters so that the model best fits the data.
Forecasting an effect
Determining a Trend
How do we determine the best fit line?
The line for which the error between the predicted values and the observed values is minimum is called the best fit line or the regression line. These errors are also called residuals.
The residuals can be visualized by the vertical lines from the observed data value to the regression line.
Simple Linear Regression
Let's assume that the two variables are linearly related.
Find a linear function that predicts the response value (y) as accurately as possible as a function of the feature or independent variable (x).
x = [9, 10, 11, 12, 10, 9, 9, 10, 12, 11]
y = [10, 11, 14, 13, 15, 11, 12, 11, 13, 15]
Consider x as the feature vector, i.e. x = [x_1, x_2, …, x_n], and y as the response vector, i.e. y = [y_1, y_2, …, y_n], for n observations (in the above example, n = 10).
Now, the task is to find a line which fits best in the above scatter plot, so that we can predict the response for any new feature value (i.e. a value of x not present in the dataset). This line is called the regression line.
The equation of the regression line is represented as:

h(x_i) = b_0 + b_1 * x_i

Here,
h(x_i) represents the predicted response value for the ith observation, and b_0 and b_1 are the regression coefficients, representing the y-intercept and slope of the regression line respectively.
The coefficients are estimated from the data as

b_1 = SS_xy / SS_xx
b_0 = ȳ − b_1 * x̄

where SS_xy = Σ (x_i − x̄)(y_i − ȳ) is the sum of cross-deviations of y and x, and SS_xx = Σ (x_i − x̄)² is the sum of squared deviations of x.
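As a minimal sketch, these closed-form estimates can be computed with NumPy on the x and y values above (the variable names are illustrative):

```python
import numpy as np

x = np.array([9, 10, 11, 12, 10, 9, 9, 10, 12, 11])
y = np.array([10, 11, 14, 13, 15, 11, 12, 11, 13, 15])

# Sum of cross-deviations of y and x, and sum of squared deviations of x
SS_xy = np.sum((x - x.mean()) * (y - y.mean()))
SS_xx = np.sum((x - x.mean()) ** 2)

# Regression coefficients: slope b_1 and intercept b_0
b_1 = SS_xy / SS_xx
b_0 = y.mean() - b_1 * x.mean()

print("b_0 =", b_0, "b_1 =", b_1)
```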
Multiple linear regression
Multiple linear regression attempts to model the relationship between two or more features and a response by fitting a linear equation to observed data.
Clearly, it is nothing but an extension of Simple linear regression.
Consider a dataset with p features (or independent variables) and one response (or dependent variable). Also, the dataset contains n rows/observations. The regression line for p features is represented as:

h(x_i) = b_0 + b_1 * x_i1 + b_2 * x_i2 + … + b_p * x_ip

where h(x_i) is the predicted response value for the ith observation and b_0, b_1, …, b_p are the regression coefficients.
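As a sketch, such a model can be fitted directly with NumPy's least-squares solver; the toy data below is made up purely for illustration:

```python
import numpy as np

# Toy data following y = 1 + 1*x1 + 2*x2 (made up for illustration)
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 5.0]])
y = np.array([6.0, 5.0, 12.0, 11.0, 16.0])

# Add a column of ones so the intercept b_0 is estimated too
X_design = np.column_stack([np.ones(len(X)), X])

# Solve the least-squares problem for [b_0, b_1, b_2]
coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("b_0, b_1, b_2 =", coeffs)
```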
Scikit-Learn
A library for machine learning for the Python language.
Contains tools for machine learning algorithms and statistical modelling.
Installation
conda install scikit-learn
To install the pandas-profiling module:
conda install -c conda-forge pandas-profiling
Modelling: the model is fitted on (X_train, y_train).
y_pred is obtained by applying the fitted model to X_test.
y_test holds the actual (observed) values, y_actual.
y_pred − y_actual gives the error (residual).
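A minimal sketch of this modelling workflow with scikit-learn; the feature matrix X and response y below are synthetic placeholders, and in the notebook they would come from the dataset being modelled:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Placeholder data: replace with the dataset's features and response
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Fit on the training set, predict on the test set
reg = LinearRegression()
reg.fit(X_train, y_train)
y_pred = reg.predict(X_test)

# Error (residuals) between predicted and actual values
errors = y_pred - y_test
print("Mean residual:", errors.mean())
```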
Feature Importance for selection
One of the assumptions of linear regression is that the independent variables need to be uncorrelated with each other. If these variables are correlated with each other, then we need to keep only one of them and drop the rest.
The correlation coefficient has values between -1 and 1
A value closer to 0 implies weaker correlation (exact 0 implying no correlation)
A value closer to 1 implies stronger positive correlation
A value closer to -1 implies stronger negative correlation
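As a sketch, the pairwise correlations can be inspected with pandas; the DataFrame and its column names below are assumptions for illustration, not the notebook's actual data:

```python
import pandas as pd

# Placeholder DataFrame; in the notebook this would be the loaded dataset
df = pd.DataFrame({
    "cylinders":    [8, 4, 6, 8, 4],
    "displacement": [350, 120, 200, 400, 110],
    "weight":       [3600, 2100, 2800, 4100, 2000],
    "mpg":          [15, 30, 22, 13, 32],
})

# Correlation matrix of all numeric columns (Pearson by default)
print(df.corr())

# Correlation of each feature with the response, sorted
print(df.corr()["mpg"].sort_values(ascending=False))
```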
We see that all the above features are highly correlated with the response. We can consider all of the above features.
Correlation of Selected features with each other
RFE (Recursive Feature Elimination)
The Recursive Feature Elimination (RFE) method works by recursively removing attributes and building a model on the attributes that remain. It uses the model's accuracy metric to rank the features according to their importance. The RFE method takes the model to be used and the number of required features as input. It then gives the ranking of all the variables, with 1 being the most important. It also gives its support: True for a relevant feature and False for an irrelevant one.
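A minimal RFE sketch with scikit-learn, using synthetic data in place of the notebook's dataset:

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Synthetic data with 6 features, only 3 of which are informative
X, y = make_regression(n_samples=200, n_features=6, n_informative=3, random_state=0)

# Keep the 3 most important features, using linear regression as the base model
rfe = RFE(estimator=LinearRegression(), n_features_to_select=3)
rfe.fit(X, y)

print("Support:", rfe.support_)   # True = selected feature
print("Ranking:", rfe.ranking_)   # 1 = most important
```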
Let's build the regression model. First, let's try a model with only one variable.
Evaluation metrics for linear regression are mean squared error and the R² score.
Evaluating the Algorithm
The final step is to evaluate the performance of the algorithm. This step is particularly important for comparing how well different algorithms perform on a particular dataset. For regression algorithms, three evaluation metrics are commonly used:
* Mean Absolute Error (MAE) is the mean of the absolute values of the errors.
* Mean Squared Error (MSE) is the mean of the squared errors.
* Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors.
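A minimal sketch of computing these metrics (plus R², discussed below) with scikit-learn, using made-up actual and predicted values:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Made-up actual and predicted values for illustration
y_test = np.array([21.0, 18.5, 30.0, 26.0, 15.5])
y_pred = np.array([20.0, 19.5, 28.0, 27.5, 16.0])

mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)       # square root of the MSE
r2 = r2_score(y_test, y_pred)

print("MAE :", mae)
print("MSE :", mse)
print("RMSE:", rmse)
print("R²  :", r2)
```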
Linear regression calculates an equation that minimizes the distance between the fitted line and all of the data points.
R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression.
R-squared = Explained variation / Total variation
R-squared is always between 0 and 100%: 0% indicates that the model explains none of the variability of the response data around its mean. 100% indicates that the model explains all the variability of the response data around its mean. In general, the higher the R-squared, the better the model fits your data.
Insights
The best possible R² score is 1.0. We get a model with a mean squared error of 27 and an R² of 0.56, which is not so good.
Let's add more variables to the model: weight and cylinders (multiple regression).
Insights
Now our model is better, with R² = 0.67.
X_new = np.array([[6, 180, 4603, 21.5]])  # one new observation; predict expects a 2-D array
y_new = reg.predict(X_new)
Performance Improvement by Cross-Validation
In this approach, we reserve 50% of the dataset for validation and use the remaining 50% for model training. However, a major disadvantage of this approach is that, since we train the model on only 50% of the dataset, there is a strong possibility of missing out on some interesting information about the data, which leads to higher bias.
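As a sketch, k-fold cross-validation with scikit-learn addresses this by rotating which part of the data is held out, so every observation is used for both training and validation; the data below is synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(scale=0.2, size=150)

# 5-fold cross-validation: each fold is held out once for validation,
# and the model is trained on the remaining folds
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print("R² per fold:", scores)
print("Mean R²    :", scores.mean())
```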