The Mathematical Algorithms That Are Used for Establishing the Statistical Weights for the Inputs and the Links in the LAMSTAR Neural Networks

Whenever a new input is added to the ANN system, especially one from the training datasets, the LAMSTAR NNs will carefully examine all of the storage weight vectors for each module (denoted as “i”), and compare those with the statistical weights that could potentially be assigned to the inputs of the datasets. From this close examination, the “Winning Neuron” (as discussed previously throughout this chapter) is then computed with the following mathematical formula:
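As a sketch of the standard LAMSTAR formulation (the symbols used here are assumptions: x_i for the normalized input subword to module “i,” and w_{i,k} for the storage weight vector of neuron “k” in that module), the Winning Neuron k* is the one whose storage weight vector lies closest to the input subword, i.e., the one that maximizes the inner product:

\[
x_i^{T}\, w_{i,k^{*}} \;\ge\; x_i^{T}\, w_{i,k} \qquad \text{for all neurons } k \text{ in module } i
\]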

Also, these statistical weights can be further adjusted, if need be, in order to gain as much optimization and reliability as possible. This is done with another specialized mathematical technique, technically known as the “Hamming Distance Function” (denoted as “Dmax”), and it can be represented as follows:
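One common reading of this tolerance test, given here only as a sketch (the tolerance threshold D_max, the preset adjustment rate alpha, and the SOM-style update rule itself are assumptions), is that the winning storage weight vector is only adjusted toward the input subword when it already lies within the preset distance of it; otherwise, the input is stored as a brand-new neuron:

\[
d_{H}\bigl(x_i,\, w_{i,k^{*}}\bigr) \;\le\; D_{\max}
\quad\Longrightarrow\quad
w_{i,k^{*}} \;\leftarrow\; w_{i,k^{*}} + \alpha\,\bigl(x_i - w_{i,k^{*}}\bigr)
\]

Here, d_H denotes the Hamming distance between the input subword and the winning storage weight vector.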

Also, as mentioned in the last subsection, the LAMSTAR NNs contain many interconnections, or links, between the input layers and the output layers of the ANN system. Although these links can be considered “dynamic” in nature, they too need to be updated for optimization as well as reliability. Once again, this is done by assigning different statistical weight values to these various interconnections, and they can thus be computed and assigned according to the following formulas:
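A sketch of the customary reward/punishment form of these updates (ΔL and ΔM are assumed names for the preset reward and punishment increments):

\[
L_{i,j}(t+1) \;=\; L_{i,j}(t) + \Delta L \qquad \text{(reward: the output was correct)}
\]
\[
L_{i,j}(t+1) \;=\; L_{i,j}(t) - \Delta M \qquad \text{(punishment: the output was incorrect)}
\]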

Where:

Li,j/km = represents the links of the Winning Neuron (denoted as “i”) to the output module (denoted as “j”).

The statistical weights described in the above equations also help to regulate the flow of input from the dataset into the ANN system, so that only the needed processing and computational power is used, and no more. In many applications that actually make use of the LAMSTAR NNs, the only interconnections, or links, that are considered for updating are those that reside between the SOM layers and the outputs of the ANN system. The interconnections, or links, between the various SOM modules themselves do not get updated whatsoever.
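To make this concrete, here is a minimal sketch in Python (all names and numerical values are assumptions, not the author’s code) of the reward/punishment update, applied only to the links between the SOM layers and the outputs:

import numpy as np

DELTA_L = 0.05  # assumed preset reward increment
DELTA_M = 0.05  # assumed preset punishment decrement

def update_links(link_weights, winning_neuron, output_neuron, correct):
    """Reward or punish a single SOM-to-output link weight.

    link_weights: 2-D array, rows = SOM neurons, columns = output neurons.
    Only the SOM-to-output links are touched; links between the SOM
    modules themselves are deliberately left unmodified, as noted above.
    """
    if correct:
        link_weights[winning_neuron, output_neuron] += DELTA_L  # reward
    else:
        link_weights[winning_neuron, output_neuron] -= DELTA_M  # punishment
    return link_weights

# Usage: reward the link from winning SOM neuron 3 to output neuron 1.
L = np.zeros((10, 4))
L = update_links(L, winning_neuron=3, output_neuron=1, correct=True)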

Also, as mentioned previously, two of the key components of the LAMSTAR NNs are “Forgetting” and “Inhibition.” The former is incorporated through what is known as the “Forgetting Factor,” denoted as “F,” which can be reset at various, predetermined intervals. These intervals can be denoted as k = sK, s = 0, 1, 2, 3, etc., where K represents a predetermined numerical, constant value. This is mathematically represented as:
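One common way to write this periodic decay, as a sketch (with L(k) denoting a link weight value at iteration k):

\[
L(k+1) \;=\; F \cdot L(k), \qquad \text{applied at each reset point } k = sK
\]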

Where:

0 < F < 1 = the preset Forgetting Factor.

It is important to note at this point that another mathematical algorithm can also be substituted for the above equation. This is known as the “Forgetting Algorithm,” in which the value of L(k) is reset at every k = sK, s = 0, 1, 2, 3, etc. This algorithm can be represented as follows:
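One plausible reconstruction of this algorithm, consistent with the definition of Z given below (this is only a sketch; ΔL(·), the increment accumulated over each interval, is an assumed symbol):

\[
L(k) \;=\; \sum_{i=0}^{Z} F^{\,i}\, \Delta L(k - iK)
\]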

Where:

Z = the largest value of “s” that satisfies sK < k, so that the index “i” is restarted from the value of 0, and subsequently increases in value at every iteration in the ANN system.
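To make the periodic reset concrete, here is a minimal sketch in Python (the names, the values of F and K, and the simple multiplicative decay are all assumptions) that scales the link weights by the Forgetting Factor at every k = sK:

import numpy as np

F = 0.9    # preset Forgetting Factor, 0 < F < 1 (assumed value)
K = 100    # preset reset interval (assumed value)

def apply_forgetting(link_weights, k):
    """Scale all link weights by F whenever k reaches a multiple of K."""
    if k > 0 and k % K == 0:
        link_weights *= F
    return link_weights

# Usage: the decay fires at iterations 100, 200, and 300.
L = np.ones((10, 4))
for k in range(1, 301):
    L = apply_forgetting(L, k)
print(L[0, 0])  # 0.9 ** 3 = 0.729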

With regard to “Inhibition,” this must be ingrained and programmed into the ANN system before it can be executed in the production environment. With respect to the LAMSTAR NNs, it is typically included by pre-assigning selected Neurons in the input layers an inhibitory role.

An Overview of the Processor in LAMSTAR Neural Networks

As reviewed in the previous subsections, LAMSTAR NNs make use of what is known as “Deep Learning.” With this extra functionality, the outputs can be computed by making use of a specialized processor, so that the ANN system can be used in much larger and more complex types of applications. Also, in order to increase the processing power and computational speeds, the ANN system can avail itself of parallel processing.

The processor of the LAMSTAR NNs is often found at the inputs of the SOM layer of the ANN system.

The Training Iterations versus the Operational Iterations

With the typical ANN system, one of its greatest advantages is that it can keep training nonstop on a 24/7/365 basis, as long as it is constantly being fed clean and robust datasets. But as has been pointed out in previous subsections, this is not the case with LAMSTAR NNs. These can only operate in an iterative cycle mode; in other words, the LAMSTAR NNs can only run in testing and operational runs, in a start-stop fashion.

But in order to further optimize the network performance of the ANN system, a number of test runs need to be implemented first, so that the LAMSTAR NN will be able to fire off the Neurons and the actual datasets can start being fed into it.

 