Support vector regression (SVR) applies the principles of support vector machines to regression: instead of separating classes, it fits a function to continuous data while tolerating small prediction errors.
In regression problems, we generally try to find a line that best fits the data provided. In its simplest form, the equation of that line is y = mx + c.
In the case of regression using a support vector machine, we do something similar but with a slight change. Here we define a small tolerance value ε for the error (error = prediction − actual).

The value of ε determines the width of the error tube (also called the ε-insensitive tube), and in turn the number of support vectors; a smaller ε indicates a lower tolerance for error.

Thus, we try to find the line of best fit in such a way that: (mx + c) − y ≤ ε and y − (mx + c) ≤ ε

The support vector regression model depends only on a subset of the training data points, because the cost function ignores any training point whose prediction error is less than ε, i.e. any point lying inside the tube.
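To see the effect of ε in code, here is a minimal sketch using scikit-learn's SVR class on made-up data (the values and the choice of a linear kernel are illustrative, not from the text). A wider tube leaves more points inside it, so fewer support vectors remain:

```python
import numpy as np
from sklearn.svm import SVR

# Toy 1-D linear data with a little noise (illustrative values)
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 10, 40)).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=40)

# epsilon is the half-width of the insensitive tube
tight = SVR(kernel="linear", epsilon=0.1).fit(X, y)
loose = SVR(kernel="linear", epsilon=1.0).fit(X, y)

# Points inside the tube incur no loss, so the wider tube
# should keep fewer support vectors
print(len(tight.support_), len(loose.support_))
```

Only the points on or outside the tube end up in `support_`, which is exactly the "subset of the training data" the model depends on.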
The problem here stems from our dataset being unscaled. Many commonly used estimator classes perform feature scaling automatically, but the SVR class does not, so we have to apply feature scaling in our own code.
File "<ipython-input-19-055f48d66ff7>", line 3
plt.scatter(X, y,color =‘green’)
^
SyntaxError: invalid character in identifier
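The "invalid character" here is the typographic (curly) quote ‘…’ around green, usually introduced by copying code from a web page or word processor; Python only accepts straight ASCII quotes. A corrected version of the call, sketched with made-up data:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data standing in for the tutorial's X and y
X = np.arange(10)
y = 2 * X + 1

# Straight ASCII quotes around 'green' avoid the SyntaxError
plt.scatter(X, y, color='green')
plt.savefig("scatter.png")
```

Retyping the quotes by hand in the editor, rather than pasting them, is the simplest fix.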