# Second-Order Recurrent Networks

This model was proposed by Giles et al. [16]. It incorporates a single layer of PEs and was developed to learn grammars. The PEs in this model are referred to as second-order PEs, since the activation of the next state is computed as the product of the previous state and the input signal. The output of each PE is fed back via a time-delay unit and multiplied by each input signal. If the network has $N$ feedback states and $M$ input signals, $N \times M$ multipliers are used to multiply every feedback state by every input signal [16]. Thus, the activation value $y_j(t)$ of PE $j$ can be computed as follows:

$$
y_j(t) = g\left( \sum_{i=1}^{N} \sum_{l=1}^{M} W_{jil} \, y_i(t-1) \, x_l(t-1) \right)
$$

where $g(\cdot)$ is the activation function and the weight $W_{jil}$ is applied to the product of the activation value $y_i(t-1)$ and the input $x_l(t-1)$. Figure 15 shows the diagram of a second-order recurrent network.

**Fig. 15** The block diagram of the second-order recurrent network model
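As a minimal sketch of the update described above, the following NumPy snippet computes one second-order state transition: each PE $j$ sums the weighted products of every feedback state with every input signal. The function name, the sigmoid choice of $g(\cdot)$, and the dimensions are illustrative assumptions, not part of the original model specification.

```python
import numpy as np

def second_order_step(W, y_prev, x_prev):
    """One state update of a second-order recurrent network (sketch).

    W has shape (N, N, M): one weight W[j, i, l] per (PE j, feedback
    state i, input signal l). y_prev is the previous state vector (N,)
    and x_prev the previous input vector (M,).
    """
    # Each PE j sums W[j, i, l] * y_prev[i] * x_prev[l] over all i, l,
    # i.e. over the N x M products of feedback states and inputs.
    s = np.einsum("jil,i,l->j", W, y_prev, x_prev)
    return 1.0 / (1.0 + np.exp(-s))  # sigmoid activation g (assumed)

rng = np.random.default_rng(0)
N, M = 4, 3
W = rng.normal(size=(N, N, M))
y = second_order_step(W, rng.normal(size=N), rng.normal(size=M))
```

Note that the state update is multiplicative rather than additive, which is what distinguishes these PEs from the first-order units used in the earlier architectures.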

# Nonlinear Autoregressive Model with Exogenous Inputs (NARX) Recurrent Networks

In this class of neural networks, the memory elements are incorporated in the input and output layers. The topology of NARX networks is similar to that of finite memory machines, which makes them good representatives of finite state machines. The model can include input, hidden, and output layers. The input to the network is fed via a series of delay units. The output is also fed back to the hidden layer via delay units [6]. The model has been successful in time-series and control applications. The architecture of a NARX network with three hidden units is shown in Figure 16. The mathematical description of the model can be given as follows:

$$
y(t) = f\bigl( x(t), x(t-1), \ldots, x(t-N), \; y(t-1), y(t-2), \ldots, y(t-M) \bigr)
$$

where $x(t)$ is the source input; $y(t)$ is the output of the network; $N$ and $M$ are constants specifying the input and output delay orders; and $f(\cdot)$ is the nonlinear mapping realized by the hidden layer.

In this section, we reviewed the possible topologies of RNN architectures and the commonly proposed architectures. The RNN architectures mentioned above have been proposed to tackle different applications: some models were proposed for grammatical inference, and others for identification and control of dynamic systems. In addition, the network architecture must be coordinated with the learning algorithm used for training.

**Fig. 16 ****The architecture of a NARX network**
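The NARX description above can be sketched in a few lines of NumPy: the current input plus $N$ delayed inputs and $M$ delayed outputs are concatenated and passed through one hidden layer, and the output is fed back through a tapped delay line. The class name, layer sizes, tanh hidden units, and linear output are illustrative assumptions; the original model only prescribes the delay structure.

```python
import numpy as np
from collections import deque

class NARX:
    """Minimal NARX sketch: x(t), ..., x(t-N) and y(t-1), ..., y(t-M)
    feed a single hidden layer; the output is fed back via delay units."""

    def __init__(self, N=2, M=2, hidden=3, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = (N + 1) + M                               # delayed inputs + delayed outputs
        self.W_h = rng.normal(scale=0.5, size=(hidden, in_dim))
        self.W_o = rng.normal(scale=0.5, size=hidden)
        self.x_buf = deque([0.0] * (N + 1), maxlen=N + 1)  # tapped input delay line
        self.y_buf = deque([0.0] * M, maxlen=M)            # tapped output delay line

    def step(self, x_t):
        self.x_buf.appendleft(x_t)                 # newest input enters the delay line
        z = np.concatenate([self.x_buf, self.y_buf])
        h = np.tanh(self.W_h @ z)                  # hidden layer (tanh assumed)
        y_t = float(self.W_o @ h)                  # linear output (assumed)
        self.y_buf.appendleft(y_t)                 # feed the output back through a delay
        return y_t

net = NARX()
ys = [net.step(x) for x in np.sin(np.linspace(0.0, 2.0, 10))]
```

Because all memory lives in the two finite delay lines rather than in a free-running state, the network's reachable configurations are bounded, which is the finite-memory-machine property mentioned above.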