Robustness of System Identification Algorithms
Each of the previously described identification algorithms relies on certain conditions to converge, and a careful choice of its parameters is necessary to ensure convergence. In the following, the convergence conditions for each of these algorithms are described, and the algorithms are compared through an example in terms of robustness, expected residual error, convergence time, and computational complexity.
The LS Algorithm
Since the least-squares (LS) algorithm is non-iterative, it requires no parameter initialization, and its convergence is therefore independent of such a step. However, the algorithm involves the inversion of the autocorrelation matrix X^T X, which has size L x L, where L is the number of model coefficients. The number of multiplications required for this inversion is on the order of O(L^3).
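As a minimal sketch of this non-iterative solve (the dimensions, signal, and noise level below are illustrative assumptions, not taken from the text), the LS coefficients can be obtained from the normal equations, where inverting the L x L matrix X^T X dominates the cost at O(L^3):

```python
import numpy as np

# Illustrative dimensions (assumed): N observations, L model coefficients.
rng = np.random.default_rng(0)
N, L = 200, 8
X = rng.standard_normal((N, L))          # regression matrix
theta_true = rng.standard_normal(L)
y = X @ theta_true + 0.01 * rng.standard_normal(N)

# Normal equations: solving with the L x L matrix X^T X costs O(L^3).
theta_ls = np.linalg.solve(X.T @ X, X.T @ y)

# In practice, np.linalg.lstsq (QR/SVD based) avoids forming X^T X explicitly
# and is better behaved numerically; both agree on well-conditioned data.
theta_qr, *_ = np.linalg.lstsq(X, y, rcond=None)
```

On well-conditioned data the two solutions coincide; the difference only matters once X^T X becomes ill-conditioned, as discussed next.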
The computational complexity is not the only challenge in using the LS algorithm; the stability of the implementation is also a major concern. Indeed, it has been shown that the autocorrelation matrix X^T X is ill-conditioned, exhibiting a significantly large ratio between its highest and lowest eigenvalues. As a result, this matrix is nearly singular and its inverse cannot be computed with sufficient accuracy [36, 37]. It is important to note that this ill-conditioning problem worsens as the nonlinear order of the model increases. Therefore, the modeling accuracy cannot keep improving indefinitely as the nonlinearity order of the model increases. A tradeoff between the matrix conditioning and the number of model coefficients must be considered in order to achieve the lowest residual errors.
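The growth of ill-conditioning with nonlinearity order can be sketched numerically. The example below (an illustrative assumption: a memoryless polynomial model with columns x, x^2, ..., x^K built from a normalized random input) computes the condition number of X^T X, i.e., the ratio of its largest to smallest eigenvalue, for increasing orders K:

```python
import numpy as np

# Assumed setup: normalized input samples and a memoryless polynomial model.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 1000)

conds = []
for K in (3, 7, 11):
    # Regression matrix with monomial basis columns x, x^2, ..., x^K.
    X = np.column_stack([x ** k for k in range(1, K + 1)])
    # Condition number of X^T X = ratio of highest to lowest eigenvalue.
    conds.append(np.linalg.cond(X.T @ X))
    print(f"order {K}: cond(X^T X) = {conds[-1]:.3e}")
```

The condition number grows by many orders of magnitude as K increases, illustrating why the matrix becomes nearly singular and why its inverse loses accuracy at high nonlinearity orders.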
Finally, when not faced with an ill-conditioning problem, the LS algorithm is expected to provide the optimal solution in terms of the least-squares criterion, without any additional residual errors, and is therefore expected to yield the lowest possible cost J for a given model.
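This optimality can be checked empirically. The sketch below (with assumed random data, not from the text) computes the LS residual J = ||y - X theta||^2 and verifies that randomly perturbed coefficient vectors never achieve a lower cost:

```python
import numpy as np

# Assumed illustrative data: 100 observations, 5 model coefficients.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)

# LS solution and its cost J (sum of squared residuals).
theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
J_ls = np.sum((y - X @ theta_ls) ** 2)

# Any perturbation of the LS coefficients gives a cost at least as large.
for _ in range(100):
    theta = theta_ls + 0.1 * rng.standard_normal(5)
    assert np.sum((y - X @ theta) ** 2) >= J_ls
```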