Support Vector Regression In Machine Learning

Unlocking a New World with Support Vector Regression Algorithm
Support vector machines (SVM) are widely used for classification problems in machine learning. I have often relied on them, not only in machine learning projects but also when I want a quick result in a hackathon.

But SVM for regression analysis? I hadn’t even considered the possibility for a while! And even now, when I mention “support vector regression” to machine learning beginners, I am often met with a puzzled expression. I understand: most courses and experts don’t even mention support vector regression (SVR) as a machine learning algorithm.

But SVR has its uses, as you will see in this tutorial. First, we will quickly cover what SVM is before diving into the world of support vector regression and how to implement it in Python.

Note: You can learn about support vector machines and regression problems in course format here (it’s free!).

This is what we will cover in this support vector regression tutorial:
* What is a support vector machine (SVM)?
* Support vector machine algorithm hyperparameters
* Introduction to Support Vector Regression (SVR)
* Implementing Support Vector Regression in Python

What is a support vector machine (SVM)?
So, what exactly is a Support Vector Machine (SVM)? We will start by understanding SVM in simple terms. Let’s say we have a plot of two labeled classes, as shown in the following figure:

Can you decide what the separating line should be? Something like this may have occurred to you:

The line separates the classes fairly well. This is what SVM essentially does: simple class separation. Now, what if the data looked like this?

Here, there is no simple line that separates these two classes. So we will expand the feature space by introducing a new dimension along the z-axis. Now we can separate the two classes:

When we transform this line back to the original plane, it maps to a circular boundary, as shown here:

This is exactly what SVM does! It tries to find a line/hyperplane (in a multidimensional space) that separates the two classes. Then it classifies a new point according to whether it lies on the positive or negative side of the hyperplane.

Support vector machine (SVM) algorithm hyperparameters
There are some important SVM parameters that you should know before continuing:

* Kernel: A kernel helps us find a hyperplane in a higher-dimensional space without increasing the computational cost. Usually, the computational cost increases with the dimension of the data. This increase in dimension is necessary when we cannot find a separating hyperplane in the current dimension and have to move to a higher dimension (see the short sketch after this list).

* Hyperplane: This is basically the dividing line between two data classes in SVM. In support vector regression, however, this is the line that will be used to predict the continuous output.
* Decision boundary: A decision boundary can be thought of as a line of demarcation (to simplify) with the positive examples on one side and the negative examples on the other. Points on this line itself can be classified as either positive or negative. This same SVM concept will also apply in support vector regression.
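To make the kernel idea concrete, here is a minimal sketch (not from the original tutorial, and assuming scikit-learn) that compares a linear kernel with an RBF kernel on toy data that is not linearly separable in two dimensions:

```python
from sklearn.svm import SVC
from sklearn.datasets import make_circles

# Toy data: two concentric classes that no straight line can separate in 2-D
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# A linear kernel struggles here; the RBF kernel implicitly maps the
# points to a higher-dimensional space where they become separable.
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, "training accuracy:", clf.score(X, y))
```

The RBF kernel should score near-perfect training accuracy on this data, while the linear kernel hovers near chance.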

To understand SVM from scratch, I recommend this tutorial: Understand the Support Vector Machine algorithm (SVM) from examples.

Introduction to Support Vector Regression (SVR)

Support vector regression (SVR) uses the same principle as SVM, but for regression problems. Let’s take a few minutes to understand the idea behind SVR.

The idea behind support vector regression
A regression problem is to find a function that approximates the mapping from an input domain to real numbers on the basis of a training sample. So now let’s dig deeper and understand how SVR really works.

Consider the two red lines as the decision boundary and the green line as the hyperplane. Our objective, when moving forward with SVR, is basically to consider only the points that lie within the decision boundary lines. Our best fit line is the hyperplane that contains the maximum number of points.

The first thing to understand is the decision boundary (the red lines above!). Consider these lines to be at some distance, say ‘a’, from the hyperplane. That is, they are the lines we draw at distances ‘+a’ and ‘-a’ from the hyperplane. This ‘a’ is basically what is known as epsilon.

Assuming that the hyperplane equation is the following:

Y = wx+b (equation of hyperplane)

Then the decision boundary equations become:

wx+b= +a
wx+b= -a

Therefore, any hyperplane that satisfies our SVR should satisfy:

-a < Y - (wx+b) < +a

> Our main objective here is to choose a decision boundary at a distance ‘a’ from the original hyperplane, such that the data points closest to the hyperplane, i.e. the support vectors, are within that boundary line.

Therefore, we take only those points that are within the decision boundary and have the lowest error rate, or that lie within the tolerance margin. This gives us a better fitting model.
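As a quick illustration (a hypothetical sketch, not part of the original tutorial), the following Python snippet counts how many points of a toy 1-D dataset fall inside the epsilon tube around a candidate hyperplane:

```python
import numpy as np

# Toy 1-D data scattered around the line y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 50)

w, b, a = 2.0, 1.0, 1.0           # candidate line y = w*x + b, epsilon = a
residuals = y - (w * x + b)
inside = np.abs(residuals) <= a   # -a <= Y - (wx+b) <= +a
print("points inside the epsilon tube:", inside.sum(), "of", len(x))
```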

Implementing Support Vector Regression (SVR) in Python
Time to put on our coding hats! In this section, we will understand the use of support vector regression with the help of a dataset. Here, we have to predict the salary of an employee given some independent variables. A classic HR analytics project!

Step 1: Import the libraries
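A typical set of imports for this tutorial might look like this (a sketch; the scikit-learn pieces are imported in the steps where they are used):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```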

Step 2: Read the dataset
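A sketch of this step. The file name and column layout below are illustrative assumptions, not confirmed by this tutorial:

```python
# Assumed layout: position level in the second column, salary in the third
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values  # independent variable, kept 2-D
y = dataset.iloc[:, 2:3].values  # dependent variable, kept 2-D
```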

Step 3: Feature scaling

A real-world dataset contains features that vary in magnitude, units, and range. I would suggest performing normalization when the scale of a feature is irrelevant or misleading.

Feature scaling basically helps to normalize the data within a particular range. Many commonly used classes apply feature scaling automatically, but the SVR class does not, so we have to perform feature scaling ourselves in Python.
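A sketch of the scaling step, assuming scikit-learn’s StandardScaler and the X and y from Step 2:

```python
from sklearn.preprocessing import StandardScaler

# Scale X and y separately; SVR will not do this for us
sc_X = StandardScaler()
sc_y = StandardScaler()
X_scaled = sc_X.fit_transform(X)
y_scaled = sc_y.fit_transform(y)
```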

Step 4: Fit SVR to the dataset

The kernel is the most important parameter here. There are many types of kernels: linear, Gaussian (RBF), etc. Each is chosen according to the dataset. For more on this, read: Support Vector Machine (SVM) in Python and R.
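A sketch of the fitting step, assuming scikit-learn’s SVR with the RBF (Gaussian) kernel and the scaled variables from Step 3:

```python
from sklearn.svm import SVR

# The RBF (Gaussian) kernel is a common choice for non-linear data
regressor = SVR(kernel='rbf')
regressor.fit(X_scaled, y_scaled.ravel())  # SVR expects a 1-D target
```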

Step 5: Predict a new result
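A sketch of the prediction step, continuing the variable names assumed above (the input level 6.5 matches the result quoted below):

```python
# Scale the new input, predict in the scaled space, then map the
# prediction back to the original salary scale
level = sc_X.transform([[6.5]])
y_pred_scaled = regressor.predict(level)
y_pred = sc_y.inverse_transform(y_pred_scaled.reshape(-1, 1))
print(y_pred)
```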

So, the prediction y_pred for an input of 6.5 will be 170,370.

Step 6: Visualize the SVR results (for higher resolution and a smoother curve)
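A sketch of the visualization, again under the assumptions above; predicting over a dense grid of inputs is what gives the smoother curve:

```python
# Predict over a dense grid of levels, then plot the original points
# against the fitted curve
X_grid = np.arange(X.min(), X.max(), 0.01).reshape(-1, 1)
y_grid = sc_y.inverse_transform(
    regressor.predict(sc_X.transform(X_grid)).reshape(-1, 1))

plt.scatter(X, y, color='red')
plt.plot(X_grid, y_grid, color='blue')
plt.title('Support Vector Regression')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
```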

This is what we get as output: the best fit line, the one that contains the maximum number of points. Pretty accurate!

Final notes
We can think of support vector regression as the counterpart of SVM for regression problems. SVR recognizes the presence of non-linearity in the data and provides a competent prediction model.

I’d love to hear your thoughts and ideas on using SVR for regression analysis. Connect with me in the comments section below and let’s discuss!
