Neural Networks Architecture for Modeling of Complex Dynamic Systems

To account for the presence of memory effects, dynamic neural network structures have been proposed in the literature. These dynamic NN structures can be subdivided into two categories according to the nature of their architecture:

  1. NN architectures without feedback, such as CTDNNs and RVFTDNNs.
  2. NN architectures with feedback, such as CTDRNNs and RVTDRNNs.

These major dynamic models are discussed in the following subsections.

Complex Time-Delay Recurrent Neural Network (CTDRNN)

One of the most popular NN models is the CTDRNN model, which utilizes both feedforward and feedback signal propagation schemes [5, 14]. An illustration of a typical feedback CTDRNN is shown in Figure 7.7.

In this architecture, the input signal is fed to the input layer through a set of tapped delay lines (TDLs) containing p branches. A tapped and delayed version of the output is also fed back to the input layer; the feedback-path TDLs are made of q branches. Both input sets, namely the (p + 1) delayed samples associated with the input signal x(k) and the q delayed samples of the output signal y(k), are fed to the first layer's N neurons through complex synaptic weights and biases. An activation function is then applied, and the output of each neuron is scaled with a further complex weight. Finally, the weighted outputs are summed to obtain the final output.
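As a minimal sketch of the signal flow just described, the following Python snippet evaluates one sample of a two-layer CTDRNN with complex-valued weights. The split-complex tanh activation and all function and variable names here are assumptions made for illustration, not choices prescribed by the source.

```python
import numpy as np

def ctdrnn_forward(x_taps, y_taps, W1, b1, w2, b2):
    """Single forward pass of a two-layer CTDRNN (illustrative sketch).

    x_taps : complex vector [x(k), x(k-1), ..., x(k-p)], length p + 1
    y_taps : complex vector [y(k-1), ..., y(k-q)] fed back from the output
    W1, b1 : complex hidden-layer weights (N, p+1+q) and biases (N,)
    w2, b2 : complex output-layer weights (N,) and bias (scalar)
    """
    u = np.concatenate([x_taps, y_taps])       # (p + 1 + q) complex network inputs
    z = W1 @ u + b1                            # synaptic weighting plus biases
    # Split-complex activation (an assumed convention): apply the real-valued
    # nonlinearity to the real and imaginary parts separately.
    h = np.tanh(z.real) + 1j * np.tanh(z.imag)
    return np.dot(w2, h) + b2                  # weighted sum of neuron outputs -> y(k)
```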


Figure 7.7 Block diagram of a two-layer complex time-delay recurrent neural network (CTDRNN)

CTDRNNs use a single-input single-output (SISO) complex architecture and hence suffer from cumbersome calculations and possible divergence when training the network. The dynamics of the system and its memory effects are accounted for by including the previous input and output samples, x(k - 1) through x(k - p) and y(k - 1) through y(k - q), respectively, where p and q represent the memory depth of the system. The input-output relationship of the CTDRNN model is given as:

$$ y(k) = f_{NN}\left(\mathbf{u}(k)\right) $$

where the input vector of the NN at instant k is:

$$ \mathbf{u}(k) = \left[\, x(k), x(k-1), \ldots, x(k-p), y(k-1), \ldots, y(k-q) \,\right]^{T} $$
Thus, the network can be seen as having (p + 1 + q) complex inputs. Furthermore, the feedback delays between the outputs and the inputs of the network increase the computational complexity and often negatively impact the training and convergence of the network.
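To make the formation of the (p + 1 + q)-dimensional input vector concrete, the sketch below runs such a network sample by sample over a complex input sequence, feeding its own delayed outputs back to the input layer. It assumes a per-sample evaluation function such as ctdrnn_forward above with already-trained weights, and it zero-pads samples before k = 0; both are implementation choices for the example rather than part of the model definition.

```python
import numpy as np

def ctdrnn_run(x, p, q, forward):
    """Evaluate a CTDRNN over a complex sequence x, one sample at a time.

    forward(x_taps, y_taps) -> y(k) is the per-sample network evaluation,
    e.g. ctdrnn_forward with its weights bound via functools.partial.
    """
    y = np.zeros(len(x), dtype=complex)
    for k in range(len(x)):
        # (p + 1) delayed input samples: x(k), x(k-1), ..., x(k-p)
        x_taps = np.array([x[k - i] if k - i >= 0 else 0j for i in range(p + 1)])
        # q delayed output samples fed back: y(k-1), ..., y(k-q)
        y_taps = np.array([y[k - j] if k - j >= 0 else 0j for j in range(1, q + 1)])
        y[k] = forward(x_taps, y_taps)
    return y
```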

 