Classification Methods

A number of classification methods have been applied to model relevance feedback as a classification problem. Here we introduce the methods most widely adopted in CBIR: artificial neural networks (ANNs), support vector machines (SVMs), and ensemble learning.

Artificial Neural Networks

Inspired by the parallel structure of biological neural systems, artificial neural networks (ANNs) are composed of layers of simple processing units, i.e., artificial neurons, as shown in Fig. 3. Generally, a multi-layer neural network comprises an input layer, a hidden layer, and an output layer, where the neurons capture the relationship between the inputs {Xi} and the output y and can produce nonlinear outputs so as to learn the nonlinearity in the training data. Fig. 4 shows an example neural network consisting of an input layer with five neurons, a hidden layer with three neurons, and an output layer with two neurons. Each input Xi is weighted by wi, and the weighted inputs are summed and mapped into the output y = F(z), where z = Σi wiXi. Here F can be, for example, a sigmoid function or a radial basis function (RBF).
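The computation of a single artificial neuron described above can be sketched in a few lines of Python. This is a minimal illustration, not from the source; the input values and weights are hypothetical, and a sigmoid is chosen as the activation F.

```python
import math

def neuron(x, w, b=0.0):
    """Compute y = F(z) with z = sum(w_i * x_i) + b, using a sigmoid F."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation F(z)

# Hypothetical example: three inputs with illustrative weights
y = neuron([1.0, 0.5, -1.0], [0.2, 0.4, 0.1])
print(round(y, 4))  # z = 0.3, so y = sigmoid(0.3) ≈ 0.5744
```

Replacing the sigmoid with a radial basis function would change only the definition of F; the weighted-sum structure stays the same.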


Fig. 3 An artificial neuron

For a classification problem, there are usually m outputs corresponding to m categories. In the learning process of an ANN, a series of training examples is presented to the network, and the weights, defined as the strengths of the connections between inputs and neurons, are automatically adjusted to minimize the error, i.e., the difference between the output and the target value.

The complexity of an ANN's architecture depends on several factors, e.g., the number of training examples, the number of hidden layers, and the number of neurons in each hidden layer. Typically, a single hidden layer is enough to model complex data; that is, a three-layer network with a sufficient number of neurons in the hidden layer can deal with arbitrary high-dimensional data. More hidden layers rarely bring an improvement in performance, while the model becomes slower and more complicated. Using more hidden layers may also cause convergence to a local minimum, requiring random initialization methods to approach a better global optimum. Nevertheless, deeper architectures have shown performance advantages in certain applications, such as cascade correlation (Fahlman and Lebiere 1990) and ZIP code recognition (LeCun et al. 1989). Although ANNs require long training times, the evaluation stage is very fast, which is important for on-line applications. Moreover, ANNs are quite robust to noisy training examples (training data with errors) or complex sensor data from cameras because of their good generalization ability on unseen data after the learning stage. ANNs have been utilized in various applications, such as robot control and face recognition. If a long training time is acceptable, they are a good choice for a classification task.


Fig. 4 An example of ANNs
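The training process described above, i.e., repeatedly adjusting the weights to reduce the difference between the network output and the target value, can be sketched for a small 5-3-2 network (the layer sizes of Fig. 4). This is a minimal gradient-descent sketch, assuming sigmoid activations and a squared-error loss; the random data, learning rate, and iteration count are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 20 examples with 5 inputs and 2 target outputs
X = rng.normal(size=(20, 5))
T = rng.random(size=(20, 2))

# Weights for a 5-3-2 network (input -> hidden -> output), randomly initialized
W1 = rng.normal(scale=0.5, size=(5, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 2)); b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)  # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)  # output-layer activations
    return H, Y

def error(Y, T):
    return float(np.mean((Y - T) ** 2))  # squared error vs. target values

_, Y = forward(X)
initial_error = error(Y, T)

# Gradient descent: adjust the connection weights to minimize the error
lr = 0.5
for _ in range(200):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)        # backpropagate through output sigmoid
    dH = (dY @ W2.T) * H * (1 - H)    # backpropagate through hidden sigmoid
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

_, Y = forward(X)
final_error = error(Y, T)
print(final_error < initial_error)  # the error decreases during training
```

Once trained, evaluating a new example is just the two matrix products in `forward`, which is why the evaluation stage is fast even when training is slow.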
