In the LMS algorithm described in the previous section, the parameter \mu is a constant that sets the adaptation step size. For small values of \mu, convergence is slow and many iterations are needed for the algorithm to converge. As \mu increases, convergence becomes faster, but the risk of divergence also increases. It has been shown that a practical value of \mu that guarantees convergence of the LMS algorithm must satisfy:
\[ 0 < \mu < \frac{2}{L\,\sigma_x^{2}} \]
where L is the number of adaptive filter coefficients and \sigma_x^{2} is the variance of the input signal to the adaptive filter.
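As a rough illustration, the step-size bound above can be estimated from the data itself. The sketch below is illustrative only; the variable names (`mu_max`, `sigma2`) and the conservative safety factor are assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)  # input signal to the adaptive filter
L = 32                            # number of adaptive filter coefficients

sigma2 = np.var(x)                # estimated input-signal variance
mu_max = 2.0 / (L * sigma2)       # upper bound on the step size

# A conservative choice well inside the stability region
# (the 0.1 safety factor is an assumption for illustration).
mu = 0.1 * mu_max
assert 0.0 < mu < mu_max
```

In practice one picks a step size well below the bound, trading convergence speed for a margin of stability.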

The LMS algorithm is very simple: it requires only 2L + 1 multiplications and 2L additions per iteration. However, it needs a larger number of iterations than the RLS algorithm to converge and often yields a higher residual error. In fact, the residual error of the LMS algorithm can be shown to be equal to [33]:
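To make the per-iteration cost concrete, a single LMS update can be sketched as below. This is an illustrative sketch, not the book's code; the function name and buffer convention are assumptions. The operation count matches the text: L multiplications (and L - 1 additions) for the filter output, one multiplication for \mu e, and L multiplications (and L additions) for the coefficient update, giving 2L + 1 multiplications and 2L additions in total.

```python
import numpy as np

def lms_step(w, x_buf, d, mu):
    """One LMS iteration on a length-L coefficient vector w.

    w     : current filter coefficients (length L)
    x_buf : most recent L input samples
    d     : desired (reference) sample
    mu    : adaptation step size
    """
    y = np.dot(w, x_buf)       # filter output: L mults, L - 1 adds
    e = d - y                  # error signal: 1 add
    w = w + (mu * e) * x_buf   # update: 1 + L mults, L adds
    return w, e

# Usage: identify an unknown 4-tap filter from its input/output.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
L = len(w_true)
w = np.zeros(L)
for _ in range(5000):
    x_buf = rng.normal(0.0, 1.0, L)
    d = np.dot(w_true, x_buf)       # noiseless desired signal
    w, e = lms_step(w, x_buf, d, mu=0.05)
```

With a step size inside the stability bound, the coefficients drift toward the unknown system's response as the error is driven down.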
\[ \xi_{LMS} = \xi_{min}\left(1 + \frac{\mu}{2}\,L\,\sigma_x^{2}\right) \tag{8.50} \]
From Equation 8.50, one can deduce that the LMS algorithm performance depends on the signal statistics.
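This dependence can be made explicit with the standard steady-state expression for the LMS excess error (a sketch of the usual derivation, assuming the conventional excess-MSE result rather than quoting the book's exact form):

\[
\xi_{excess} \;=\; \xi_{LMS} - \xi_{min} \;=\; \frac{\mu}{2}\,L\,\sigma_x^{2}\,\xi_{min}
\]

Since the excess error grows linearly with the input-signal variance \sigma_x^{2}, a stronger input signal forces a smaller step size \mu if the same residual error is to be maintained.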