Now that we have a general notation system for the basis vectors in parenthesized ⊗-products, we turn to the specific cases involved in associativity and the pentagon condition.
The unique “interesting” component of associativity, a_{τ,τ,τ}, which we sometimes abbreviate as simply a, is an isomorphism from (τ ⊗ τ) ⊗ τ to τ ⊗ (τ ⊗ τ), both of which are, as pairs of vector spaces, a 1-dimensional V_1 and a 2-dimensional V_τ. The first parenthesization gives a basis vector
and two basis vectors
The second parenthesization similarly gives a basis vector and two basis vectors
Our task is to compute the transformation a between these bases. This a has two components, the first relating two bases of the one-dimensional space V_1 and the second relating two bases of the two-dimensional space V_τ. These are given, respectively, by a non-zero number p such that
and a non-singular matrix $\begin{pmatrix} q & r \\ s & t \end{pmatrix}$ such that
Here “non-zero” for p and “non-singular” for the matrix embody the requirement that a is an isomorphism.
We shall now investigate the constraints imposed on p, q, r, s, t by the pentagon condition.
That condition involves the ⊗-product of four τ’s, parenthesized in five ways, and we shall need to consider the natural bases for all five parenthesizations. Since τ^{⊗4} ≅ (ℂ², ℂ³), each parenthesization will give two vectors as a basis for the 1 component and three as a basis for the τ component. We begin by considering the τ components, whose bases are displayed below. (There is no significance to the chosen ordering of the five bases, nor to the ordering of the three vectors within each basis.)
Each row in this picture is a basis for the 3-dimensional V_τ; specifically, it is the basis arising from the same parenthesization of τ ⊗ τ ⊗ τ ⊗ τ as the parenthesization in our notation.
When writing transformation matrices between these bases, we must regard each basis as given in a specific order, because rows of a matrix come in an order. We (arbitrarily) choose the orders in which the bases are displayed above.
The five isomorphisms that appear in the pentagon condition amount to five transformations between these bases. Let us consider these one at a time, beginning with the one connecting the first two bases in the table. Here we are dealing with the isomorphism
The first subscript of this a, namely τ ⊗ τ, can be decomposed as the sum 1 ⊕ τ, and the naturality of a then implies that a_{τ⊗τ,τ,τ} is the direct sum of a_{1,τ,τ} and a_{τ,τ,τ}. The first of these two summands is the identity, like all associativity isomorphisms where one of the three factors is 1. The second summand is given by our matrix $\begin{pmatrix} q & r \\ s & t \end{pmatrix}$. As a result, we find that the transformation a_{τ⊗τ,τ,τ} connecting the first two bases in our list is (taking into account the order in which the basis vectors are listed)
In this matrix, the 1 in the (2,1) position and the two zeros in its row arise from the fact that the identity map a_{1,τ,τ} sends the second vector in our first basis to the first vector in the second basis. Had we listed (((τ ⊗ τ)_1 ⊗ τ)_τ ⊗ τ)_τ first rather than second in our first basis, the matrix for a_{τ⊗τ,τ,τ} would have been a block diagonal matrix with 1 in the upper left corner.
An exactly analogous computation gives the isomorphism between the second and the last bases in our list:
Multiplying these two matrices, we get the transformation from the first basis (parenthesized to the left) to the last (parenthesized to the right) that corresponds to the “short” side of the pentagon (two morphisms, across the top of the diagram). This product is
Turning to the long side of the pentagon (three morphisms), we find that the middle one, corresponding to rows 3 and 4 in our list of bases and to the bottom of the diagram, is quite analogous to the two that we have already computed. It is
The remaining two isomorphisms for the long side of the pentagon (the vertical arrows in the diagram) are a bit different, as they involve a’s on three of the four factors and an identity map on the remaining factor. Let us consider a_{τ,τ,τ} ⊗ 1_τ, which connects the first basis in our list to the third. In effect, this ignores the rightmost factor and acts like a on the first three factors. In other words, it is given by the same matrix as the transformation from the basis
to the basis
Notice that, in each of these bases, the first element is in the V_1 component, so that component of a, namely p, enters the picture. Indeed, the matrix connecting these bases is
Similarly, the remaining isomorphism on the long side of the pentagon is also
Multiplying the three matrices for the long side of the pentagon, and equating, as the pentagon condition requires, the resulting product to the product that we obtained for the short side of the pentagon, we have
This is the V_τ part of the pentagon condition. Before turning to the V_1 part, let us extract as much information as possible from the matrix equation that we have just derived.
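As a concrete check on this matrix equation, the two sides of the pentagon can be multiplied out numerically. The 3×3 matrices in the sketch below are written under one assumed choice of basis orderings (the text’s own orderings may differ by permutations of rows and columns), with rows indexing the source basis, and the solution derived later in this section is substituted in:

```python
import numpy as np

# Sketch: verify the V_tau pentagon equation numerically. The explicit
# matrices assume a particular basis ordering (an assumption, not fixed by
# the text), and we substitute the solution derived below: p = 1, t = -q,
# r*s = q, with q the positive root of q^2 + q - 1 = 0.
p = 1.0
q = (np.sqrt(5) - 1) / 2
t = -q
r = s = np.sqrt(q)                  # any r, s with r*s = q would do here

# Short side: each morphism is "identity on the 1-channel summand,
# the matrix (q r; s t) on the tau-channel summand".
A = np.array([[0, q, r],
              [1, 0, 0],
              [0, s, t]])
short = A @ A

# Long side: a tensor 1, then the middle associativity, then 1 tensor a.
C = np.array([[p, 0, 0],
              [0, q, r],
              [0, s, t]])
D = A                               # middle morphism has the same form as A
E = np.array([[q, 0, r],
              [0, p, 0],
              [s, 0, t]])
long = C @ D @ E

print(np.allclose(short, long))    # True: the pentagon condition holds
```

Running this confirms that the two products agree entry by entry for the golden-ratio solution.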
Suppose, toward a contradiction, that p ≠ 1. Then the (1,3) and (3,1) components of our matrix equation give rt = st = 0, so either r = s = 0 or t = 0. If r = s = 0, then the (1,2) component of the matrix equation gives that q = 0 also, but this contradicts the fact that $\begin{pmatrix} q & r \\ s & t \end{pmatrix}$ is non-singular. There remains the case that t = 0. Then the (2,2) component says q = 0, the (2,3) component says r = 0, and we again contradict the non-singularity of $\begin{pmatrix} q & r \\ s & t \end{pmatrix}$. So we have contradictions in all cases if p ≠ 1.
So p = 1. Now the (1,1) entry of the matrix equation gives q = rs. Substituting that into the (2,2) component, we get q(q + t) = 0, so either q = 0 or q = −t. The first of these options leads, via the (1,2) entry, to rs = 0 and thus to a contradiction with non-singularity, as before. Therefore q = −t.
From the (2,3) and (3,2) entries, we get that (q + t²)r = r and (q + t²)s = s. We cannot have both r = 0 and s = 0, as that would give q = 0 in the (1,2) entry and contradict non-singularity. So we must have q + t² = 1. In view of q = −t, this means q² + q − 1 = 0 and therefore
This evaluation of q and t, together with the earlier results
satisfy, as one easily checks, the entire matrix equation above. The least trivial item to check is the (3,3) entry, rs + t³ = t², which, in view of the equations above, becomes q − q³ = q², i.e., 0 = q(q² + q − 1), and this is true because q was obtained as a solution of q² + q − 1 = 0.
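These scalar identities are easy to confirm numerically. The snippet below is a sketch: it takes the positive root of q² + q − 1 = 0 for concreteness, and the particular factorization of q into r·s is arbitrary, since only the product rs = q enters:

```python
import math

# Positive root of q^2 + q - 1 = 0 (the other root is negative).
q = (math.sqrt(5) - 1) / 2
t = -q                                # from q = -t
r, s = 2.0, q / 2.0                   # any factorization with r*s = q works

assert abs(q*q + q - 1) < 1e-12       # q solves q^2 + q - 1 = 0
assert abs(q + t*t - 1) < 1e-12       # the constraint q + t^2 = 1
assert abs(r*s + t**3 - t*t) < 1e-12  # the (3,3) entry: rs + t^3 = t^2
print("all entries check out")
```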
All of the preceding calculation was based on the V_τ component of τ^{⊗4}; we still have the V_1 component of the pentagon equation to work out. Again, we have a list of five bases, now for a 2-dimensional space, as follows.
Computations analogous to (but shorter than) the earlier ones give, for the short side of the pentagon,
So the product for the short side is simply $\begin{pmatrix} 1 & 0 \\ 0 & p^2 \end{pmatrix}$. For the long side, we get and
Equating the product of the long side and the product of the short side, we get
This matrix equation is automatically satisfied because of the equations that we had already derived from the V_τ component of the pentagon condition. So there is no new information in the V_1 component.
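Concretely, in the basis orderings assumed here (an assumption; the text’s orderings may differ by a permutation), the V_1 equation with p = 1 reduces to the statement that the matrix $\begin{pmatrix} q & r \\ s & t \end{pmatrix}$ squares to the identity, and that already follows from q = rs, t = −q, and q + t² = 1. A quick numerical sketch:

```python
import numpy as np

# Sketch: with p = 1, the V_1 part of the pentagon amounts to F @ F = I,
# where F = (q r; s t). This follows from q = rs, t = -q, q + t^2 = 1.
q = (np.sqrt(5) - 1) / 2            # positive root of q^2 + q - 1 = 0
t = -q
r = s = np.sqrt(q)                  # any r, s with r*s = q
F = np.array([[q, r],
              [s, t]])
print(np.allclose(F @ F, np.eye(2)))   # True: no new information
```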
We can, however, get some additional information if we impose the requirement that the associativity isomorphisms be unitary transformations. This amounts to requiring the vector spaces of morphisms Hom(X, Y) to be Hilbert spaces and requiring our natural bases for them to be orthonormal.
Unitarity tells us nothing new about p, since we already know p = 1, but unitarity of $\begin{pmatrix} q & r \\ s & t \end{pmatrix}$ gives the equations
where bars denote complex conjugation and where we used the fact that q is real. So $s = \bar{r}$ and, since rs = q, we get first that q has to be positive,
and second that
for some real β. Thus, we finally have, under the assumption of unitarity,
with q = (√5 − 1)/2 and β arbitrary. The presence of β here is an artifact of our choice of bases. If we modified the final vector in each of our bases, ((τ ⊗ τ) ⊗ τ) in the domain of a_{τ,τ,τ} and (τ ⊗ (τ ⊗ τ)) in the codomain, by a phase factor e^{−iβ}, then, with respect to the new bases, we would have
-  We have chosen to regard V_1 and V_τ as each being a single space, independent of the parenthesization. The different parenthesizations give (possibly) different bases for these spaces. An alternative view is that each parenthesization gives its own V_1 and V_τ, isomorphic to ℂ and ℂ² respectively, with their standard bases, while a gives an isomorphism between the two V_1’s and an isomorphism between the two V_τ’s. The two viewpoints are easily intertranslatable, and the computations that follow would be the same in either picture.
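The unitarity discussion above can also be checked numerically. In this sketch we take q to be the positive root of q² + q − 1 = 0, t = −q, and r = √q·e^{iβ}, s = √q·e^{−iβ}; as the text observes, the phase β is arbitrary:

```python
import numpy as np

# Sketch: with q > 0 solving q^2 + q - 1 = 0, t = -q, and
# r = sqrt(q) e^{i beta}, s = sqrt(q) e^{-i beta} (beta real),
# the matrix (q r; s t) is unitary for every beta.
q = (np.sqrt(5) - 1) / 2
for beta in (0.0, 0.7, 2.5):
    r = np.sqrt(q) * np.exp(1j * beta)
    s = np.sqrt(q) * np.exp(-1j * beta)
    F = np.array([[q, r],
                  [s, -q]])
    assert np.allclose(F.conj().T @ F, np.eye(2))   # unitary
print("unitary for every beta tested")
```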