Random Forest

Random forest was proposed by Leo Breiman [38] for pattern classification and regression. A random forest is an ensemble learning algorithm whose weak classifiers are decision trees, hence the name. The learning procedure is:

  1. Select m subsets from the learning samples (bootstrap sampling).
  2. Generate m decision trees, one from each subset.
  3. For each tree, generate nodes until the number of nodes reaches a specified number, as follows:

     a. Randomly select k explanatory variables from the learning samples.

     b. Decide the split function of the node using the explanatory variable and threshold that best classify the learning samples.

This procedure generates a set of decision trees that have low correlations with one another.
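The steps above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses scikit-learn decision trees as the weak classifiers, with `max_features=k` standing in for the random selection of k explanatory variables at each node; the toy dataset and the values of m and k are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

m, k = 25, 3          # m trees in the forest, k features tried per split
trees = []
for i in range(m):
    # Step 1: draw a bootstrap sample of the learning set
    idx = rng.integers(0, len(X), size=len(X))
    # Steps 2-3: grow a tree; at each node only k randomly chosen
    # explanatory variables are candidates for the split
    tree = DecisionTreeClassifier(max_features=k, random_state=i)
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Classify by majority vote over the ensemble
votes = np.stack([t.predict(X) for t in trees])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

Because each tree sees a different bootstrap sample and a different random feature subset at each node, the trees are only weakly correlated, which is what makes the majority vote effective.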

Fig. 2.9 Organization of perceptron appeared in [41]

Artificial Neural Network (ANN)

ANN is a generic name for models that solve problems by varying the coupling strengths between artificial synapses in a network. ANNs are often capable of relatively high performance on multidimensional and nonlinearly separable problems. One origin of ANN research was the single-layered perceptron algorithm (see Fig. 2.9) proposed by Rosenblatt in 1958 [41], which can learn classifiers only for linearly separable data [42]. Increasing the number of layers and synapses gives ANNs the capability of learning nonlinear classifiers, as in the multilayered neural network known as the neocognitron [43] and in convolutional neural networks (CNNs) [44]. By employing a multilayered ANN, one can design all steps of image processing, from feature extraction to classification, simultaneously. A serious difficulty was found, however, in training the coupling strengths of the synapses. The backpropagation framework was invented for training in the 1980s [45, 46], but it was not capable of appropriately varying the coupling strengths of so many synapses. Later, several important training techniques, for example, sparse coding [47] and layerwise pretraining [48], were invented, making it much easier to construct strong nonlinear classifiers with multilayered ANNs (Fig. 2.10).
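Rosenblatt's learning rule for the single-layered perceptron can be sketched briefly. This toy example, an assumption for illustration rather than anything from the text, trains a perceptron on the logical AND function, which is linearly separable; the learning rate and epoch count are arbitrary choices.

```python
import numpy as np

# Logical AND: a linearly separable problem the perceptron can learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # synaptic weights
b = 0.0           # bias
lr = 0.1          # learning rate (illustrative value)

for _ in range(20):                  # a few passes suffice here
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)   # step activation
        # Rosenblatt's rule: adjust weights only on misclassification
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # → [0, 0, 0, 1]
```

On nonlinearly separable data such as XOR, this rule never converges, which is precisely the limitation that motivated the multilayered networks discussed above.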
