
Polynomial Regression

Regression is a method in which we use the relationship between variables to find the best-fit line, or regression equation, that can be used to make predictions.

We use polynomial regression to fit a polynomial curve so that we can achieve a minimum error, or minimum cost function.

Equation: $y = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2$

Advantages of using Polynomial Regression:

  • Polynomial regression provides a close approximation of the relationship between the dependent and independent variables.

  • A broad range of functions can be fit under it.

  • It can fit a wide range of curvature.

Disadvantages of using Polynomial Regression

  • The presence of one or two outliers in the data can seriously affect the results of a nonlinear analysis.

  • Polynomial models are very sensitive to outliers (see the short illustration after this list).

  • In addition, there are unfortunately fewer model-validation tools for detecting outliers in nonlinear regression than there are for linear regression.
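To see this sensitivity concretely, here is a minimal sketch on synthetic data (not the fuel dataset used below): corrupting a single observation noticeably shifts the fitted quadratic coefficients. The variable names and numbers here are purely illustrative.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
x = np.linspace(0, 5, 30).reshape(-1, 1)
y = 2 + 3 * x.ravel() - 0.5 * x.ravel() ** 2 + rng.normal(0, 0.3, 30)

# Quadratic design matrix [1, x, x^2]
X = PolynomialFeatures(degree=2).fit_transform(x)

clean_fit = LinearRegression().fit(X, y)

y_outlier = y.copy()
y_outlier[-1] += 30  # corrupt one observation with a large error
outlier_fit = LinearRegression().fit(X, y_outlier)

print("Coefficients without outlier:", clean_fit.coef_[1:], clean_fit.intercept_)
print("Coefficients with outlier:   ", outlier_fit.coef_[1:], outlier_fit.intercept_)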


Importing Needed packages

import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline

Understanding the Data

FuelConsumption.csv:

FuelConsumption.csv contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada.

  • MODELYEAR e.g. 2014

  • MAKE e.g. Acura

  • MODEL e.g. ILX

  • VEHICLE CLASS e.g. SUV

  • ENGINE SIZE e.g. 4.7

  • CYLINDERS e.g. 6

  • TRANSMISSION e.g. A6

  • FUEL CONSUMPTION in CITY(L/100 km) e.g. 9.9

  • FUEL CONSUMPTION in HWY (L/100 km) e.g. 8.9

  • FUEL CONSUMPTION COMB (L/100 km) e.g. 9.2

  • CO2 EMISSIONS (g/km) e.g. 182

Reading the data in

df = pd.read_csv("FuelConsumption.csv")

# take a look at the dataset
df.head()

Let's select some features that we want to use for regression.

cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)

Let's plot CO2 emission values with respect to combined fuel consumption:

plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='green')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()
[Figure: scatter plot of CO2 emissions vs. combined fuel consumption]

Creating train and test dataset

Train/test split involves partitioning the dataset into mutually exclusive training and testing sets. You then train with the training set and test with the testing set.

msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
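As an aside, scikit-learn's train_test_split gives an equivalent split with a fixed seed, which makes the results reproducible across runs; a minimal sketch (the names train_alt and test_alt are just illustrative):

from sklearn.model_selection import train_test_split

# Roughly an 80/20 split, reproducible because of random_state
train_alt, test_alt = train_test_split(cdf, test_size=0.2, random_state=42)
print(len(train_alt), len(test_alt))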

Polynomial regression

Sometimes the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, up to any degree.

In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Let's say you want a polynomial regression of degree 2:

$$y = b + \theta_1 x + \theta_2 x^2$$

Now, the question is: how can we fit our data to this equation when we have only x values, such as Engine Size? Well, we can create a few additional features: $1$, $x$, and $x^2$.
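Before handing this over to scikit-learn, here is a small sketch of what building these features by hand would look like in NumPy, using the same engine-size values that appear in the example below:

# Design matrix [1, x, x^2] built by hand for three sample engine sizes
x = np.array([2.0, 2.4, 1.5])
X_manual = np.column_stack([np.ones_like(x), x, x ** 2])
print(X_manual)
# [[1.   2.   4.  ]
#  [1.   2.4  5.76]
#  [1.   1.5  2.25]]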

The PolynomialFeatures() function in the Scikit-learn library derives a new feature set from the original feature set. That is, it generates a matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, let's say the original feature set has only one feature, ENGINESIZE. If we select the degree of the polynomial to be 2, then it generates 3 features: degree=0, degree=1, and degree=2:

from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model

train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])

test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])

poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
array([[ 1.  ,  2.  ,  4.  ],
       [ 1.  ,  2.4 ,  5.76],
       [ 1.  ,  1.5 ,  2.25],
       ...,
       [ 1.  ,  3.  ,  9.  ],
       [ 1.  ,  3.2 , 10.24],
       [ 1.  ,  3.2 , 10.24]])

fit_transform takes our x values and outputs an array of our data raised from power 0 to power 2 (since we set the degree of our polynomial to 2).

$$
\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n \end{bmatrix}
\longrightarrow
\begin{bmatrix} 1 & v_1 & v_1^2\\ 1 & v_2 & v_2^2\\ \vdots & \vdots & \vdots\\ 1 & v_n & v_n^2 \end{bmatrix}
$$

In our example:

$$
\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots \end{bmatrix}
\longrightarrow
\begin{bmatrix} 1 & 2. & 4.\\ 1 & 2.4 & 5.76\\ 1 & 1.5 & 2.25\\ \vdots & \vdots & \vdots \end{bmatrix}
$$

It looks like the feature set for a multiple linear regression analysis, right? Yes, it does. Indeed, polynomial regression is a special case of linear regression, with the main idea being how you select your features. Just consider replacing $x$ with $x_1$, and $x^2$ with $x_2$. Then the degree-2 equation turns into:

$$y = b + \theta_1 x_1 + \theta_2 x_2$$

Now we can treat it as a linear regression problem. Therefore, this polynomial regression is considered a special case of traditional multiple linear regression, and you can use the same mechanism as linear regression to solve such problems.

So we can use the LinearRegression() function to solve it:

clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)

# The coefficients
print('Coefficients: ', clf.coef_)
print('Intercept: ', clf.intercept_)
Coefficients:  [[ 0.         48.32645273 -1.15349522]]
Intercept:  [110.17756348]
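As an optional sanity check, since this is just least squares on the features $[1, x, x^2]$, NumPy's polyfit should recover essentially the same parameters (it returns coefficients with the highest degree first); a quick hedged check:

# np.polyfit solves the same least-squares problem directly on the raw x values
coeffs = np.polyfit(train_x.ravel(), train_y.ravel(), deg=2)
print(coeffs)  # approximately [-1.15, 48.33, 110.18] for the run shown above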

As mentioned before, the coefficients and intercept are the parameters of the fitted curve. Given that this is a typical multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and coefficients of the hyperplane, sklearn has estimated them from our new feature set. Let's plot it:

plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='green')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0] + clf.coef_[0][1]*XX + clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
Text(0, 0.5, 'Emission')
[Figure: training data with the fitted degree-2 curve]

Evaluation

from sklearn.metrics import r2_score

test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
# r2_score expects the true values first, then the predictions
print("R2-score: %.2f" % r2_score(test_y, test_y_))
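The mean absolute error and MSE computed above can also be obtained with scikit-learn's built-in metric helpers; a minimal equivalent:

from sklearn.metrics import mean_absolute_error, mean_squared_error

print("MAE:", mean_absolute_error(test_y, test_y_))
print("MSE:", mean_squared_error(test_y, test_y_))
print("R2 :", r2_score(test_y, test_y_))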

Practice

Try to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
# write your code here
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)

clf3 = linear_model.LinearRegression()
train_y3_ = clf3.fit(train_x_poly3, train_y)

# The coefficients
print('Coefficients: ', clf3.coef_)
print('Intercept: ', clf3.intercept_)

plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0] + clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")

test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y3_, test_y))
Coefficients:  [[ 0.         27.91096183  4.56095843 -0.4822575 ]]
Intercept:  [131.57092413]
Mean absolute error: 22.62
Residual sum of squares (MSE): 869.91
R2-score: 0.71
[Figure: training data with the fitted degree-3 curve]
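To answer the practice question directly, the two models can be compared on the identical test split; a short sketch assuming clf, poly, clf3, poly3, test_x, and test_y from the cells above are still in scope:

# Compare the quadratic and cubic fits on the same held-out data
r2_deg2 = r2_score(test_y, clf.predict(poly.transform(test_x)))
r2_deg3 = r2_score(test_y, clf3.predict(poly3.transform(test_x)))
print("R2 (degree 2): %.2f" % r2_deg2)
print("R2 (degree 3): %.2f" % r2_deg3)

Whether the cubic term helps depends on the particular random split; a slightly higher R² on one split is not strong evidence of a better model.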