Simple Linear Regression
Objectives
After completing this lab you will be able to:
Use scikit-learn to implement simple Linear Regression
Create a model, train it, test it, and use the model
Importing Needed packages
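The code cell for this step is not preserved here; a typical set of imports for this lab would be:

```python
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
```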
Downloading Data
To download the data and read it into a pandas dataframe, we can use pd.read_csv():
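For example (the file name below is a placeholder; substitute the path or URL provided with the course):

```python
# Read the fuel consumption data into a dataframe.
# "FuelConsumptionCo2.csv" is a placeholder name; use the dataset link from the lab.
df = pd.read_csv("FuelConsumptionCo2.csv")

# Take a look at the dataset
df.head()
```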
Understanding the Data
We have downloaded a fuel consumption dataset, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. Dataset source
MODELYEAR e.g. 2014
MAKE e.g. Acura
MODEL e.g. ILX
VEHICLE CLASS e.g. SUV
ENGINE SIZE e.g. 4.7
CYLINDERS e.g. 6
TRANSMISSION e.g. A6
FUEL CONSUMPTION in CITY(L/100 km) e.g. 9.9
FUEL CONSUMPTION in HWY (L/100 km) e.g. 8.9
FUEL CONSUMPTION COMB (L/100 km) e.g. 9.2
CO2 EMISSIONS (g/km) e.g. 182
Data Exploration
Let's first have a descriptive exploration of our data.
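For example, using the dataframe's describe() method:

```python
# Summarize the numeric columns: count, mean, std, min/max, and quartiles
df.describe()
```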
Let's select some features to explore more.
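For example, assuming the CSV's column names match the fields listed above (ENGINESIZE, CYLINDERS, FUELCONSUMPTION_COMB, CO2EMISSIONS):

```python
# Keep only the features we want to explore further
cdf = df[['ENGINESIZE', 'CYLINDERS', 'FUELCONSUMPTION_COMB', 'CO2EMISSIONS']]
cdf.head(9)
```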
We can plot each of these features:
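For example, as histograms:

```python
# Histogram of each selected feature
viz = cdf[['CYLINDERS', 'ENGINESIZE', 'FUELCONSUMPTION_COMB', 'CO2EMISSIONS']]
viz.hist()
plt.show()
```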
Now, let's plot each of these features against the Emission, to see how linear their relationship is:
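A sketch of these scatter plots:

```python
# Fuel consumption vs. emissions
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()

# Engine size vs. emissions
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```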
Practice
Plot CYLINDERS vs the Emission, to see how linear their relationship is:
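One possible solution:

```python
# Cylinders vs. emissions
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
```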
Creating train and test dataset
Train/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive. You then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data that has been used to train the model. It therefore gives us a better understanding of how well our model generalizes to new data.
This means that we know the outcome of each data point in the testing set, making it great to test with! Since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly out-of-sample testing.
Let's split our dataset into train and test sets. 80% of the entire dataset will be used for training and 20% for testing. We create a mask to select random rows using the np.random.rand() function:
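For example:

```python
# Each row gets a uniform random number in [0, 1); ~80% fall below 0.8
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]   # rows where the mask is True
test = cdf[~msk]   # the remaining rows
```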
Simple Regression Model
Linear Regression fits a linear model with coefficients θ = (θ₁, ..., θₙ) to minimize the residual sum of squares between the actual value y in the dataset and the predicted value ŷ obtained from the linear approximation.
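For simple linear regression with a single feature, this means fitting a line and minimizing:

```latex
% Fitted line for a single feature x_1
\hat{y} = \theta_0 + \theta_1 x_1
% Quantity minimized: the residual sum of squares over all n data points
\mathrm{RSS} = \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2
```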
Train data distribution
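A sketch of the plot for the training data:

```python
# Scatter plot of the training set: engine size vs. emissions
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```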
Modeling
Using the sklearn package to model the data.
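A sketch of the modeling cell, using ENGINESIZE as the single feature (as plotted above):

```python
from sklearn import linear_model

regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit(train_x, train_y)

# The parameters of the fitted line
print('Coefficients: ', regr.coef_)
print('Intercept: ', regr.intercept_)
```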
As mentioned before, the Coefficient and Intercept of a simple linear regression are the parameters of the fit line. Given that it is a simple linear regression with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data. Notice that all of the data must be available to traverse and calculate the parameters.
Plot outputs
We can plot the fit line over the data:
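For example, reusing train_x and the fitted regr from above:

```python
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
# regr.coef_ has shape (1, 1) and regr.intercept_ has shape (1,) for a 2D target
plt.plot(train_x, regr.coef_[0][0] * train_x + regr.intercept_[0], '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
```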
Evaluation
We compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.
There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set:
Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest of the metrics to understand, since it is simply the average error.
Mean Squared Error (MSE): the mean of the squared errors. It is more popular than Mean Absolute Error because the focus is geared more towards large errors: the squared term increases larger errors disproportionately in comparison to smaller ones.
Root Mean Squared Error (RMSE): the square root of the MSE, which has the advantage of being in the same units as the response variable.
R-squared is not an error, but rather a popular metric to measure the performance of your regression model. It represents how close the data points are to the fitted regression line. The higher the R-squared value, the better the model fits your data. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).
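A sketch of the evaluation cell, computing MAE, MSE, and R² on the test set:

```python
from sklearn.metrics import r2_score

test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_ = regr.predict(test_x)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
```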
Exercise
Let's see what the evaluation metrics are if we train a regression model using the FUELCONSUMPTION_COMB feature.
Start by selecting FUELCONSUMPTION_COMB as the train_x data from the train dataframe, then select FUELCONSUMPTION_COMB as the test_x data from the test dataframe.
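One possible solution, reusing the train and test dataframes created earlier:

```python
# Use combined fuel consumption as the single input feature
train_x = train[['FUELCONSUMPTION_COMB']]
test_x = test[['FUELCONSUMPTION_COMB']]
```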
Now train a Linear Regression model using the train_x you created and the train_y created previously.
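For example, reusing the linear_model import and the train_y array from the earlier cells:

```python
# Fit a fresh linear regression model on the fuel consumption feature
regr = linear_model.LinearRegression()
regr.fit(train_x, train_y)
```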
Find the predictions using the model's predict function and the test_x data.
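For example:

```python
# Predict emissions for the test set
predictions = regr.predict(test_x)
```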
Finally, use the predictions and the test_y data to find the Mean Absolute Error value, using the np.absolute and np.mean functions as done previously.
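One possible solution, reusing the test_y array from the evaluation section:

```python
# MAE: the mean of the absolute differences between predictions and actual values
print("Mean Absolute Error: %.2f" % np.mean(np.absolute(predictions - test_y)))
```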
We can see that the MAE is much worse than it is when we train using ENGINESIZE.