Machine Learning Predictive Models

Machine learning (ML) approaches can be used as regressors for predictive modeling. This section introduces the support vector machine and artificial neural networks and explains their predictive capabilities.

Support Vector Machine (SVM) and Support Vector Regression (SVR)

In 1995, the support vector machine (SVM) was developed to address problems of pattern recognition and classification, including face identification and text categorization [39]. Nevertheless, it soon found wide application in other fields, such as feature extraction, regression estimation, and time series (TS) prediction.

SVM seeks a decision rule with good generalization ability by selecting a specific subset of the training data, known as support vectors. In this approach, the input space is first nonlinearly mapped into a higher-dimensional feature space, in which an optimal separating hyperplane is constructed. Hence, the dimensionality of the input space has little effect on the quality and complexity of the SVM solution.

Training an SVM is equivalent to solving a linearly constrained quadratic programming (QP) problem, which is a major advantage of the method: in contrast to other training approaches, the SVM solution is always unique and globally optimal. One major drawback of SVM, however, is that the size of the training problem can be enormous, which raises the computational cost.
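As a minimal illustration of this training step (the toy dataset and all hyperparameters below are assumptions, not from the source), the following Python sketch fits a kernel SVM with scikit-learn and reports how small the retained set of support vectors is relative to the training set:

```python
# Minimal sketch (toy data and hyperparameters are assumptions, not from
# the source): fitting a kernel SVM amounts to solving one convex QP, and
# only a subset of the training points survives as support vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))                        # 300 illustrative 2-D points
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # nonlinearly separable labels

clf = SVC(kernel="rbf", C=1.0).fit(X, y)             # training = one convex QP solve
print(f"{len(clf.support_vectors_)} of {len(X)} training points are support vectors")
```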

Vapnik derived a general algorithm for applying SVM to regression, known as support vector regression (SVR). The optimal decision hyperplane of SVR can be expressed as

$$f(\mathbf{x}) = \sum_{i=1}^{N_s} \alpha_i\, \kappa(\mathbf{x}_i, \mathbf{x}) + b_{\mathrm{opt}}$$

where $N_s$ is the total number of support vectors, $\kappa$ is a kernel function, $b_{\mathrm{opt}}$ is the optimal bias of the hyperplane, $\mathbf{x}$ is the observed data vector, $\mathbf{x}_i$ is the $i$-th observed data point, and $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_N)^T$ are the optimized Lagrange multipliers, with $\alpha_i > 0$.
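To make this formula concrete, the sketch below (synthetic data; the RBF kernel and all hyperparameters are my assumptions) evaluates the decision function by hand from a fitted scikit-learn SVR, using its stored support vectors, dual coefficients, and bias, and checks the result against the model's own prediction:

```python
# Sketch (synthetic data; kernel choice and hyperparameters are my
# assumptions): evaluate f(x) = sum_i alpha_i * k(x_i, x) + b_opt by hand
# from a fitted scikit-learn SVR and compare with model.predict.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))            # illustrative 1-D inputs
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)   # noisy sine target

gamma = 0.5                                          # fixed so the kernel is reproducible
svr = SVR(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)

x_new = np.array([[0.7]])

# k(x_i, x) for every support vector x_i (RBF kernel, as in sklearn):
sq_dist = np.sum((svr.support_vectors_ - x_new) ** 2, axis=1)
k = np.exp(-gamma * sq_dist)

# dual_coef_ stores the optimized Lagrange multipliers (alpha_i - alpha_i*),
# which play the role of alpha_i above; intercept_ is b_opt.
f_manual = k @ svr.dual_coef_.ravel() + svr.intercept_[0]

print(np.isclose(f_manual, svr.predict(x_new)[0]))   # True
```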

The kernel is a nonlinear mapping that projects the input-space points into a high-dimensional feature space, in which the optimal separating hyperplane is constructed. This also resolves the case in which the training points cannot be separated by a linear decision boundary.

Some commonly used kernel functions are listed below; a short sketch implementing them follows the list.

  • Linear kernel: $\kappa(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T\mathbf{y}$
  • Polynomial kernel: $\kappa(\mathbf{x}, \mathbf{y}) = (a + \mathbf{x}^T\mathbf{y})^d$, where $d$ is the polynomial degree
  • Radial basis function (RBF) kernel: $\kappa(\mathbf{x}, \mathbf{y}) = e^{-\frac{\|\mathbf{x}-\mathbf{y}\|^2}{2\sigma^2}}$
  • Neural network kernel: $\kappa(\mathbf{x}, \mathbf{y}) = \tanh(a\,\mathbf{x}^T\mathbf{y} + b)$, where $a$ and $b$ are constants
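These kernels are straightforward to implement directly; the sketch below (function names and parameter defaults are mine, chosen for illustration) computes each one for a pair of NumPy vectors:

```python
# Sketch (function names and parameter defaults are mine): direct NumPy
# implementations of the four kernels listed above, for vector inputs.
import numpy as np

def linear_kernel(x, y):
    return x @ y                                     # x^T y

def polynomial_kernel(x, y, a=1.0, d=3):
    return (a + x @ y) ** d                          # (a + x^T y)^d

def rbf_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def neural_network_kernel(x, y, a=1.0, b=0.0):
    return np.tanh(a * (x @ y) + b)                  # tanh(a x^T y + b)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(x, y), rbf_kernel(x, y, sigma=2.0))
```

In practice, the RBF kernel is a common default choice because it leaves only a single width parameter $\sigma$ to tune.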

 