Path: blob/master/Machine Learning Supervised Methods/SVM for regression.ipynb
Support vector regression (SVR) is a supervised learning method that extends the support vector machine to regression problems, where the goal is to model the relationship between input variables and a continuous target variable.
In regression problems, we generally try to find a line that best fits the data provided. In its simplest form, the equation of that line is y = mx + c.
In regression with a support vector machine, we do something similar but with a slight change. Here we define a small error tolerance e: the maximum amount by which a prediction may deviate from the actual value before it counts as an error (error = prediction - actual).
The value of e determines the width of the error tube (also called the e-insensitive tube) around the regression line. It also determines the number of support vectors: a smaller e means a narrower tube and a lower tolerance for error, and therefore more support vectors.
Thus, we try to find the best-fit line such that (mx + c) - y ≤ e and y - (mx + c) ≤ e, i.e. every residual satisfies |y - (mx + c)| ≤ e.
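As a rough sketch of these constraints in practice, the snippet below (not taken from the notebook; the synthetic data and the C and e values are assumptions for illustration) fits a linear SVR with scikit-learn and checks how many residuals fall within ±e. Note that scikit-learn's soft-margin formulation allows some points to fall outside the tube, with violations penalized through the C parameter.

```python
# Minimal sketch: fit a linear SVR and check the epsilon-tube constraints.
# Data and parameter values here are assumed, not from the notebook.
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 10, size=(50, 1)), axis=0)
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=50)  # roughly y = mx + c plus noise

epsilon = 0.5
svr = SVR(kernel="linear", C=10.0, epsilon=epsilon)
svr.fit(X, y)

# Residuals inside the tube satisfy |y - (mx + c)| <= e.
residuals = y - svr.predict(X)
inside_tube = np.abs(residuals) <= epsilon
print(f"points inside the epsilon tube: {inside_tube.sum()} of {len(y)}")
print(f"learned slope m ≈ {svr.coef_[0][0]:.2f}, intercept c ≈ {svr.intercept_[0]:.2f}")
```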
The support vector regression model therefore depends only on a subset of the training points: the cost function ignores any training point whose prediction error is less than e (it lies inside the tube), so only points on or outside the tube become support vectors.
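To see this dependence, the following sketch (again with assumed synthetic data and parameter values) fits the same kind of linear SVR for a few e values and prints the number of support vectors scikit-learn reports; the narrower the tube, the more points end up on or outside it.

```python
# Minimal sketch: a smaller epsilon leaves more points outside the tube,
# so more training points become support vectors. Data and values are assumed.
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(1)
X = np.sort(rng.uniform(0, 10, size=(100, 1)), axis=0)
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=100)

for eps in (1.0, 0.5, 0.1):
    svr = SVR(kernel="linear", C=10.0, epsilon=eps).fit(X, y)
    print(f"epsilon = {eps:>4}: {len(svr.support_)} support vectors")
```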