 # Why the inverse of a sparse irreducible matrix is dense

The structural inverse of a matrix is defined to be the pattern that is the union of all patterns for $A^{-1}$ generated by choosing different values for the entries of A. Structural factors L and U and a structural solution vector x are defined similarly. In this section, all references to L, U, $A^{-1}$, and x are to their structures in this sense.
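The definition can be made concrete with a small experiment (a sketch of ours, not taken from the text): fix a sparsity pattern, sample random values for its entries, and take the union of the nonzero patterns of the computed inverses. The 3×3 pattern below is an assumption chosen so that positive entries always give a nonsingular matrix.

```python
import numpy as np

# Estimate the structural inverse of a fixed irreducible pattern by sampling
# random values for its entries and taking the union of the nonzero patterns
# of the computed inverses (ignoring roundoff below a tolerance).
PATTERN = np.array([[1, 1, 0],
                    [0, 1, 1],
                    [1, 0, 1]], dtype=bool)  # irreducible 3x3 pattern

rng = np.random.default_rng(0)
union = np.zeros_like(PATTERN)
for _ in range(20):
    A = np.zeros((3, 3))
    # Positive entries keep det = a11*a22*a33 + a12*a23*a31 > 0, so A is nonsingular.
    A[PATTERN] = rng.uniform(1.0, 2.0, PATTERN.sum())
    union |= np.abs(np.linalg.inv(A)) > 1e-12

print(union.astype(int))  # all ones: the structural inverse of this pattern is dense
```

Because the pattern is irreducible, every sampled inverse here is already fully dense, which is exactly what the rest of this section proves in general.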

We show that, if A is irreducible, L has at least one entry beneath the diagonal in each column (except the last), U has at least one entry to the right of the diagonal in each row (except the last), x is dense, and $A^{-1}$ is dense.

We first show that if nodes i and j in the digraph (Section 1.2) of the matrix are joined by a path $(i, k_1), (k_1, k_2), \dots, (k_s, j)$, where $k_1, k_2, \dots, k_s$ are all less than i and j (called a legal path by Richard Karp), then $(i, j)$ is an entry in LU. To show this, write $k_0 = i$ and $k_{s+1} = j$, and suppose that $k_m = \min_t k_t$. Just before elimination step $k_m$, there are entries in positions $(k_{m-1}, k_m)$ and $(k_m, k_{m+1})$. Therefore, following it, there is an entry in position $(k_{m-1}, k_{m+1})$, so the digraph of the reduced submatrix contains the (legal) path $(i, k_1), (k_1, k_2), \dots, (k_{m-1}, k_{m+1}), \dots, (k_s, j)$. Continuing the argument similarly, we eventually find that $(i, j)$ is an entry in LU.
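The fill-in mechanism above can be watched directly with a boolean symbolic elimination (a sketch; the function name and example pattern are ours). At step k, every position $(i, j)$ with $i, j > k$ such that $(i, k)$ and $(k, j)$ are both entries becomes an entry, ignoring cancellation.

```python
import numpy as np

def symbolic_lu(S):
    """Symbolic Gaussian elimination on a boolean pattern S; returns the
    pattern of LU (fill-in included, cancellation ignored)."""
    S = S.copy()
    n = S.shape[0]
    for k in range(n):
        rows = np.where(S[k + 1:, k])[0] + k + 1   # entries below the diagonal in column k
        cols = np.where(S[k, k + 1:])[0] + k + 1   # entries to the right of the diagonal in row k
        S[np.ix_(rows, cols)] = True               # fill-in
    return S

# Digraph with the legal path 4 -> 1 -> 2 -> 3 (1-based): the intermediate
# nodes 1 and 2 are smaller than both endpoints 4 and 3.
n = 4
S = np.eye(n, dtype=bool)
for i, j in [(3, 0), (0, 1), (1, 2)]:   # the same edges, 0-based
    S[i, j] = True

F = symbolic_lu(S)
print(F[3, 2])  # True: (4, 3) has become an entry in the pattern of LU
```

Step 1 creates fill at (4, 2) and step 2 then creates fill at (4, 3), mirroring how the argument shortens the legal path one node at a time.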

Next we show that there is at least one entry to the right of the diagonal in every row of U (except the last) and at least one entry beneath the diagonal in every column of L (except the last). We prove this by showing that if there is a row k of U, k < n, with no entries to the right of the diagonal, then A is reducible; the proof for L is similar. Suppose there is a path in the digraph of A from node k to a node numbered greater than k, and let l be the first such node on the path. All intermediate nodes of the path from k to l are less than k, so it is a legal path and $(k, l)$ must be an entry in U, contrary to our assumption. Therefore there can be no path in the digraph of A from k to a later node. Let S be the set consisting of node k and all nodes to which there is a path from k. The complement of S is not empty because it contains all the nodes after k. If we reorder the matrix so that the nodes of S precede those of its complement, we find a 2 × 2 block lower triangular form; that is, A is reducible. This proves the statement in the first sentence of this paragraph.
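The reordering step at the end of this argument can be sketched as follows (helper name and example pattern are ours): collect the nodes reachable from k, list them first, and observe the zero off-diagonal block that makes the permuted matrix block lower triangular.

```python
import numpy as np

def reachable(S, k):
    """Nodes reachable from k (including k) in the digraph of pattern S."""
    seen, stack = {k}, [k]
    while stack:
        i = stack.pop()
        for j in np.where(S[i])[0]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return sorted(seen)

# A reducible 4x4 pattern: there is no path from node 1 (0-based) to nodes 2, 3.
S = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=bool)

front = reachable(S, 1)                                  # the set S of the text
order = front + [j for j in range(4) if j not in front]  # S first, complement after
P = S[np.ix_(order, order)]
print(P.astype(int))  # upper-right 2x2 block is zero: block lower triangular
```

No entry can connect the reachable set to its complement, so the permuted pattern necessarily has the zero block in its upper-right corner.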

Notice that some permutations of a reducible matrix may give factors L and U possessing the property of the last paragraph. For example, interchanging the columns of a reducible matrix may lead to dense matrices L and U. On the other hand, any permutation of an irreducible matrix is irreducible, so the property of the last paragraph holds for all permutations of irreducible matrices.

We next show that if L is a lower triangular factor of an irreducible matrix and $b \ne 0$, the solution of the equation $Ly = b$ has an entry in its last position. To prove this, suppose $b_k$ is the last nonzero of b. Since the corresponding component of the solution is found from the equation
$$y_k = \Bigl(b_k - \sum_{j<k} l_{kj}\, y_j\Bigr)\Big/ l_{kk}, \tag{15.6.3}$$
it is an entry in y. If k = n, this is our desired result. Otherwise, since L is a factor of an irreducible matrix, it has an entry $l_{jk}$, j > k; by replacing k by j in equation (15.6.3), we conclude that the solution has an entry in position j.

If j = n, this is our desired result. If not, we continue the argument, finding successive entries in the solution until position n is reached.
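This propagation can be traced with a structural forward substitution (a sketch of ours; cancellation is ignored, as throughout this section): $y_k$ is an entry if $b_k$ is, or if $l_{kj}$ and $y_j$ are both entries for some j < k.

```python
import numpy as np

def structural_forward_solve(L, b):
    """Boolean forward substitution: pattern of y in Ly = b, cancellation ignored."""
    n = len(b)
    y = b.copy()
    for k in range(n):
        if any(L[k, j] and y[j] for j in range(k)):
            y[k] = True
    return y

# L has the property of a factor of an irreducible matrix: an entry below the
# diagonal in every column but the last (here, on the first subdiagonal).
n = 5
L = np.eye(n, dtype=bool)
L[np.arange(1, n), np.arange(n - 1)] = True

b = np.zeros(n, dtype=bool)
b[1] = True                      # the last nonzero of b is in position 2
y = structural_forward_solve(L, b)
print(y.astype(int))             # [0 1 1 1 1]: entries propagate down to position n
```

The subdiagonal entries play the role of $l_{jk}$ in the argument: each one hands the entry in y on to a later position until position n is reached.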

If U is an upper triangular factor of an irreducible matrix and $y_n$ is an entry, the solution of the equation $Ux = y$ is dense. To prove this, we remark that the equation for $x_k$ is
$$x_k = \Bigl(y_k - \sum_{j>k} u_{kj}\, x_j\Bigr)\Big/ u_{kk}. \tag{15.6.5}$$
Clearly, $x_n$ is an entry in x. For k < n, suppose $x_{k+1}, \dots, x_n$ are all entries; since U is a factor of an irreducible matrix, it has an entry $u_{kj}$, j > k, so we deduce from equation (15.6.5) that the solution has an entry in position k, too. Hence, by induction, x is dense.
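The induction runs backwards, and the matching structural back-substitution (again a sketch of ours) makes it concrete: $x_k$ is an entry if $y_k$ is, or if $u_{kj}$ and $x_j$ are both entries for some j > k.

```python
import numpy as np

def structural_back_solve(U, y):
    """Boolean back-substitution: pattern of x in Ux = y, cancellation ignored."""
    n = len(y)
    x = y.copy()
    for k in range(n - 1, -1, -1):
        if any(U[k, j] and x[j] for j in range(k + 1, n)):
            x[k] = True
    return x

# U has the property of a factor of an irreducible matrix: an entry to the
# right of the diagonal in every row but the last (here, on the superdiagonal).
n = 5
U = np.eye(n, dtype=bool)
U[np.arange(n - 1), np.arange(1, n)] = True

y = np.zeros(n, dtype=bool)
y[-1] = True                     # y_n is an entry
x = structural_back_solve(U, y)
print(x.astype(int))             # [1 1 1 1 1]: x is dense
```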

Together, the last two paragraphs tell us that for any set of equations with an irreducible matrix A, the solution vector x is dense if it is computed with Gaussian elimination and any cancellations are ignored. This is also true for $A^{-1}$ if it is computed in the usual way, by solving the equation $AX = I$ by Gaussian elimination.
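A direct numerical check (our example, not from the text): a tridiagonal matrix is about as sparse as an irreducible matrix can be, yet its computed inverse has no zero entries at all.

```python
import numpy as np

# An irreducible tridiagonal matrix: 2 on the diagonal, 1 on the two
# adjacent diagonals.  Its digraph is a chain, so it is irreducible.
n = 6
A = 2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

Ainv = np.linalg.inv(A)
print(np.count_nonzero(np.abs(Ainv) > 1e-12))  # 36: every one of the n*n positions is nonzero
```

Here A has only 3n − 2 = 16 entries, while its inverse is completely full, illustrating why sparse codes solve $Ax = b$ by factorization rather than by forming $A^{-1}$.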

It might be thought that there could be some sparsity in the inverse that is not exposed by this treatment, but this is not the case. Duff, Erisman, Gear, and Reid (1988) show that for every position (i,j) there is a matrix with the given irreducible structure that has a nonzero in position (i,j) of its inverse.