MATLAB® Neural Network Toolbox

The MATLAB Neural Network Toolbox consists of many functions and utilities for creating, training, simulating, and visualizing neural networks, including support for verification and validation. Once the data for network training is available for analysis, utilities such as interpolation, statistical analysis, equation solvers, and optimization routines can be used to plot the training error functions, monitor changes in the weight matrix, and obtain real-time network outputs to verify their accuracy. Transfer functions in the toolbox include the hard limit, symmetric hard limit, log sigmoid, linear, saturated linear, and tan sigmoid functions (a short sketch evaluating them follows the list below). Two training algorithms are particularly important in this work:

  • Levenberg-Marquardt.
  • Bayesian Regularization.
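
As an illustrative sketch, each of these transfer functions can be evaluated directly on a vector of net inputs; the function names below are the standard toolbox ones:

    n = -2:0.5:2;        % sample net-input values

    a1 = hardlim(n);     % hard limit: 0 for n < 0, 1 otherwise
    a2 = hardlims(n);    % symmetric hard limit: -1 or +1
    a3 = logsig(n);      % log sigmoid: 1 ./ (1 + exp(-n))
    a4 = purelin(n);     % linear: a = n
    a5 = satlin(n);      % saturated linear: clipped to [0, 1]
    a6 = tansig(n);      % tan sigmoid: 2 ./ (1 + exp(-2*n)) - 1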

Experimental Design

The aim of this work is to determine whether an ANN can be used as a tool for measuring the integration efforts of component-based systems. In this context, integration effort is characterized by the factors mentioned above and can be measured using them. These factors are taken as the network inputs, and min-max normalization is applied to them.

FIGURE 5.1 General architecture of Artificial Neural Network.

Min-max normalization is a linear transformation of the original data: the original input range is mapped onto a new one (generally, 0-1).

Let minA be the minimum value of attribute A and maxA its maximum value. A value p of attribute A is mapped to a new value p' by

p' = (p - minA) / (maxA - minA)

which places p' in the range 0-1.
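
As a sketch, the same normalization can be performed in MATLAB either with the formula above or with the toolbox function mapminmax, which defaults to the range [-1, 1], so the 0-1 target range is passed explicitly (the data values are illustrative only):

    p = [12 45 7 33 28];                       % raw values of attribute A

    % Direct formula: p' = (p - minA) / (maxA - minA)
    pNorm = (p - min(p)) ./ (max(p) - min(p));

    % Equivalent toolbox call with an explicit 0..1 output range.
    [pNorm2, ps] = mapminmax(p, 0, 1);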

The input factors are classified into low, medium, and high categories. Rules have been designed based on various combinations of the inputs to predict the output; in total there are 33 rules.
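
As an illustrative sketch, all input combinations can be enumerated with the toolbox function combvec; the five-factor, three-level coding here is an assumption inferred from the 243 (= 3^5) possible inputs mentioned below:

    % Assumed coding: five factors, each low/medium/high (0, 0.5, 1).
    levels = [0 0.5 1];
    allInputs = combvec(levels, levels, levels, levels, levels);
    size(allInputs)      % 5 x 243: one column per input combination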

The network is trained using the trainlm function with the feed-forward backpropagation architecture. The Widrow-Hoff learning rule can be generalized through backpropagation; for this purpose, the transfer function must be non-linear. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.
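
A minimal sketch of this setup, with placeholder data standing in for the study's normalized factors and effort targets:

    rng(1);                                  % reproducible initialization
    X = rand(5, 120);                        % 5 normalized factors, 120 examples (placeholder)
    T = rand(1, 120);                        % effort targets (placeholder)

    net = feedforwardnet(10, 'trainlm');     % 10 hidden neurons, Levenberg-Marquardt
    net.layers{1}.transferFcn = 'tansig';    % sigmoid hidden layer
    net.layers{2}.transferFcn = 'purelin';   % linear output layer

    net = train(net, X, T);                  % backpropagation training
    Y = net(X);                              % simulate the trained network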

Accurately trained backpropagation networks tend to give reasonable answers for inputs they have never processed. Typically, a new input produces an output similar to the correct output for the training inputs that resemble it. This generalization property makes it possible to train the network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs.

Trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization. It is often the fastest backpropagation algorithm in the toolbox and is highly recommended as a first-choice supervised algorithm, although it does require more memory than other algorithms.
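
As a sketch, the usual trainlm stopping criteria can be adjusted through net.trainParam before calling train; the values shown are illustrative, not the study's settings:

    net.trainParam.epochs   = 1000;    % maximum number of epochs
    net.trainParam.goal     = 1e-5;    % performance (MSE) goal
    net.trainParam.max_fail = 6;       % early stop after 6 validation failures
    net.trainParam.mu       = 0.001;   % initial Levenberg-Marquardt mu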

Here, tansig (a tan-sigmoid, i.e., hyperbolic tangent, transfer function) is used, as shown in Figure 5.2. It calculates a layer's output from its net input.

FIGURE 5.2 Tansig Function.

At first, a training set of 120 examples was constructed, containing 35, 35, and 45 cases of the low, medium, and high categories, respectively; the training exemplars were chosen randomly. The initial set was then extended to 135 examples with 40, 45, and 40 cases, respectively, so that the effect of the increment on network performance could be observed. This small increment in training examples, from 120 to 135 as compared with the 243 possible inputs, did not contribute significantly to performance.
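
As a small sketch, the tansig curve of Figure 5.2 can be reproduced directly; tansig(n) is mathematically equivalent to tanh(n):

    n = -5:0.1:5;               % net-input range
    a = tansig(n);              % 2 ./ (1 + exp(-2*n)) - 1, i.e. tanh(n)
    plot(n, a), grid on
    xlabel('net input n'), ylabel('output a'), title('tansig')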

The ANN treats this task as a pattern-recognition problem in which patterns are assigned to a given number of classes. Through a training session, the neural network learns the pattern of the given data: the network is supplied with a set of inputs along with their categories. Afterwards, a new pattern that has not been supplied earlier, but that belongs to one of the pattern classes on which the network was trained, is passed through the network. The network is able to identify the class of that pattern because of the information it has extracted from the training data.
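
A minimal sketch of that workflow, reusing the feed-forward/trainlm setup above with one-hot targets; all data here is synthetic and illustrative:

    rng(1);
    Xlow  = rand(5, 40);              % 'low' class patterns (synthetic)
    Xmed  = rand(5, 40) + 1;          % 'medium' class patterns
    Xhigh = rand(5, 40) + 2;          % 'high' class patterns
    X = [Xlow, Xmed, Xhigh];
    T = [repmat([1;0;0], 1, 40), repmat([0;1;0], 1, 40), repmat([0;0;1], 1, 40)];

    net = feedforwardnet(10, 'trainlm');
    net = train(net, X, T);

    xNew = rand(5, 1) + 2;            % unseen pattern from the 'high' class
    [~, classIdx] = max(net(xNew));   % expected classIdx: 3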

 