Connectionist Network Topologies
While the general FCRN described in the previous subsection is often used, many other RNNs are structured in layers. An RNN includes an input layer, an output layer, and typically one or more hidden layers, each consisting of a set of PEs. The feedback connections, which are specific to RNNs, can exist within or between any of the network layers. Typically, the inputs to a PE in an RNN come from PEs in the preceding layer, together with delayed feedback from the PE itself, from other PEs in the same layer, or from PEs in a successive layer. The weighted sum of these inputs is passed through a nonlinear function to produce the activation value of the PE.
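As a concrete illustration of this computation, the following is a minimal NumPy sketch of a single PE update; the sigmoid nonlinearity and all names here are illustrative assumptions rather than a formulation taken from any particular model.

```python
import numpy as np

def sigmoid(a):
    """Logistic nonlinearity, a common choice for the PE activation function."""
    return 1.0 / (1.0 + np.exp(-a))

def pe_step(x, y_prev, w_in, w_fb):
    """One time step of a recurrent PE.

    x      : inputs from PEs in the preceding layer
    y_prev : delayed activation values fed back (from this PE and/or
             other PEs in the same or a successive layer)
    w_in   : feedforward weights; w_fb : feedback weights
    """
    activation = np.dot(w_in, x) + np.dot(w_fb, y_prev)  # weighted sum of all inputs
    return sigmoid(activation)                           # activation value of the PE
```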
In RNNs, the topology of the feedforward connections is similar to that of MLPs. The topology of the feedback connections, however, which are specific to RNNs, can be classified into locally recurrent, non-local recurrent, and globally recurrent connections. In a locally recurrent connection, a feedback connection originates from the output of a PE and feeds back to the same PE. In a non-local recurrent connection, a feedback connection links the output of a PE to the input of another PE in the same layer.

Fig. 5 The block diagram of the RNN architecture in Figure 4

Fig. 6 The unrolled architecture
In a globally recurrent connection, the feedback connection links two PEs in different layers. If this terminology is extended to feedforward connections, all MLPs can be considered global feedforward networks. The non-local recurrent connection class is a special case of the globally recurrent connection class. Based on these feedback topologies, the architecture of RNNs can take the following forms:
Locally Recurrent Globally Feedforward (LRGF) Networks
In this class of recurrent networks, recurrent connections can occur in a hidden layer or in the output layer. All feedback connections are local to the individual PE; there are no feedback connections among different PEs [51]. When the feedback connection is in the first PE layer of the network, the activation value of PE $j$ is computed as follows:
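A plausible reconstruction of this update, assuming a single-delay self-feedback consistent with the symbol definitions below, is:

$$
y_j(t) = f\Big(\sum_i w_{ji}\, x_i(t) + w_{jj}\, y_j(t-1)\Big) \qquad (9)
$$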
where $w_{jj}$ is the intensity factor of the local feedback connection of PE $j$, the index $i$ sums over the inputs, and $f(\cdot)$ is a nonlinear function, usually the sigmoid function denoted in Equation 5. For subsequent layers, Equation 9 is changed to:
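Under the same single-delay assumption, this would read:

$$
y_j(t) = f\Big(\sum_i w_{ji}\, y_i(t) + w_{jj}\, y_j(t-1)\Big)
$$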
where $y_i(t)$ are the activation values of the PEs in the preceding layer.
There are three different models of LRGF networks, depending on the location of the feedback.
Local Activation Feedback — In this model, the feedback is a delayed version of the activation of the PE, taken before the nonlinearity. The local activation feedback model was studied in [12] and can be described by the following equations:
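A plausible reconstruction, assuming feedback taps over $T$ delays (with $T$ introduced here only for notation), is:

$$
p_j(t) = \sum_i w_{ji}\, x_i(t) + \sum_{t'=1}^{T} w_{jj}^{t'}\, p_j(t-t'), \qquad y_j(t) = f\big(p_j(t)\big)
$$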
where $p_j(t)$ is the activation at time step $t$, $t'$ is a summation index over the number of delays in the system, the index $i$ sums over the system inputs, and $w_{jj}^{t'}$ is the weight of the activation feedback of $p_j(t-t')$. Figure 7 illustrates the architecture of this model.

Fig. 7 A PE with local activation feedback
Local Output Feedback — In this model, the feedback is a delayed version of the output (activation value) of the PE. This model was introduced by Gori et al. [19]. It is illustrated in Figure 8a, and its mathematical formulation can be given as follows:
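With a single delay unit, a form consistent with Figure 8a would be:

$$
y_j(t) = f\Big(\sum_i w_{ji}\, x_i(t) + w_{jj}\, y_j(t-1)\Big)
$$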
Their model can be generalized by taking the feedback after a series of delay units and feeding it back to the input of the PE, as illustrated in Figure 8b. The mathematical formulation can be given as follows:
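Assuming $T$ delay units, this presumably corresponds to Equation 14 of the original numbering:

$$
y_j(t) = f\Big(\sum_i w_{ji}\, x_i(t) + \sum_{t'=1}^{T} w_{jj}^{t'}\, y_j(t-t')\Big) \qquad (14)
$$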
where $w_{jj}^{t'}$ is the intensity factor of the output feedback at time delay $z^{-t'}$, and the index $i$ sums over the input units. From Equation 14 it can be seen that the output of the PE is passed through a finite impulse response (FIR) filter before being fed back.

Fig. 8 A PE with local output feedback. a) With one delay unit. b) With a series of delay units.
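To make the FIR nature of the feedback path concrete, here is a minimal NumPy sketch of a PE with local output feedback through a series of delay units; the buffer-based delay line, the sigmoid nonlinearity, and all names are illustrative assumptions.

```python
import numpy as np
from collections import deque

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class OutputFeedbackPE:
    """A PE whose own past outputs are fed back through T delay units."""

    def __init__(self, w_in, w_fb):
        self.w_in = np.asarray(w_in)   # feedforward weights
        self.w_fb = np.asarray(w_fb)   # feedback weights, one per delay unit
        # delay line holding y_j(t-1), ..., y_j(t-T), initially zero
        self.delays = deque([0.0] * len(w_fb), maxlen=len(w_fb))

    def step(self, x):
        # FIR-filtered feedback: weighted sum of the T delayed outputs
        feedback = np.dot(self.w_fb, np.asarray(self.delays))
        y = sigmoid(np.dot(self.w_in, x) + feedback)
        self.delays.appendleft(y)      # shift the delay line by one step
        return y
```

With `w_fb` of length one, this reduces to the single-delay model of Figure 8a.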
Local Synapse Feedback — In this model, each synapse may include a feedback structure, and the outputs of all feedback synapses are summed to produce the activation of the PE. The local activation feedback model is a special case of the local synapse feedback model, since each synapse represents an individual local activation feedback structure. In the local synapse feedback model, each synapse acts as an FIR filter or an infinite impulse response (IIR) filter [3]. A network of this model is called an FIR MLP or an IIR MLP when it incorporates FIR or IIR synapses, respectively, since the globally feedforward nature of this class of networks makes its overall structure identical to that of MLP networks [3, 32]. More complex structures can be designed that incorporate a combination of FIR and IIR synapses [3].
In this model, a linear transfer function with poles and zeros is introduced for each synapse instead of a constant synaptic weight. Figure 9 illustrates the PE architecture of this model. The mathematical description of the PE can be formulated as follows:
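A plausible formulation, assuming each synapse $i$ applies its transfer function to the input $x_i$ before the usual summation and nonlinearity, is:

$$
y_j(t) = f\Big(\sum_i G_i(z^{-1})\, x_i(t)\Big), \qquad
G_i(z^{-1}) = \frac{\sum_{l=0}^{q} b_l\, z^{-l}}{\sum_{l=0}^{r} a_l\, z^{-l}}
$$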
where $G_i(z^{-1})$ is a linear transfer function, and $b_l$ $(l = 0, 1, 2, \ldots, q)$ and $a_l$ $(l = 0, 1, 2, \ldots, r)$ are the coefficients of its zeros and poles, respectively.

Fig. 9 A PE with local synapse feedback
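As an illustration of a synapse of this kind, the following sketch realizes $G(z^{-1})$ as a standard difference equation; the class name, the state buffers, and the normalization by $a_0$ are illustrative assumptions, not a formulation from [3].

```python
import numpy as np

class IIRSynapse:
    """A synapse realized as a rational transfer function G(z^-1) = B(z^-1)/A(z^-1).

    b: zero (numerator) coefficients b_0..b_q
    a: pole (denominator) coefficients a_0..a_r
    Setting a = [1.0] reduces the synapse to an FIR filter, as in an FIR MLP.
    """

    def __init__(self, b, a):
        self.b = np.asarray(b, float)
        self.a = np.asarray(a, float)
        self.x_hist = np.zeros(len(self.b))   # x(t), x(t-1), ..., x(t-q)
        self.s_hist = np.zeros(len(self.a))   # s(t), s(t-1), ..., s(t-r)

    def step(self, x):
        """Filter one input sample through the synapse."""
        self.x_hist = np.roll(self.x_hist, 1); self.x_hist[0] = x
        self.s_hist = np.roll(self.s_hist, 1)
        # difference equation: a_0 s(t) = sum_l b_l x(t-l) - sum_{l>=1} a_l s(t-l)
        s = (np.dot(self.b, self.x_hist)
             - np.dot(self.a[1:], self.s_hist[1:])) / self.a[0]
        self.s_hist[0] = s
        return s
```

The PE then sums the outputs of its synapses and applies the nonlinearity $f(\cdot)$ to obtain its activation value.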