 # Tensor Products

The multiplicative structure of A, on the other hand, is quite far from the componentwise tensor product of vector spaces, as the latter would make τ ⊗ τ = τ (because C ⊗ C = C). Our goal in the rest of this paper is to determine the multiplicative structure in terms of pairs of vector spaces.

The equation τ^{⊗n} = f_{n−1} · 1 ⊕ f_n · τ mentioned above already determines that structure as far as the objects are concerned, but there remains much to be said about the morphisms.
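The object-level count can be checked by iterating the fusion rule τ ⊗ τ = 1 ⊕ τ on dimension pairs (dim V_1, dim V_τ). A small sketch, using the Fibonacci convention f_0 = 0, f_1 = 1 (which may differ from the paper's indexing); the helper names are ours:

```python
# Each object of A is, as a pair of vector spaces, determined by a
# dimension pair (dim V_1, dim V_tau).
# Fusion rules on simple objects: 1 is the unit, tau (x) tau = 1 (+) tau.

def tensor(a, b):
    """Tensor product of dimension pairs (a1, at) and (b1, bt)."""
    a1, at = a
    b1, bt = b
    # 1(x)1 = 1, 1(x)tau = tau(x)1 = tau, tau(x)tau = 1 (+) tau
    return (a1 * b1 + at * bt, a1 * bt + at * b1 + at * bt)

def tau_power(n):
    """Dimension pair of tau^{(x) n}."""
    p = (1, 0)          # the unit object 1
    tau = (0, 1)
    for _ in range(n):
        p = tensor(p, tau)
    return p

def fib(n):
    """Fibonacci numbers with f_0 = 0, f_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# tau^{(x) n} = f_{n-1} . 1 (+) f_n . tau
for n in range(1, 10):
    assert tau_power(n) == (fib(n - 1), fib(n))
```

The assertion loop confirms the Fibonacci growth of both components for small n.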

A morphism from one pair of vector spaces (V_1, V_τ) to another such pair (W_1, W_τ) is a pair of linear transformations (m_1 : V_1 → W_1, m_τ : V_τ → W_τ). We can think of it as a pair of matrices, provided we fix bases for all the vector spaces involved here.
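Concretely, once bases are fixed, such morphisms are pairs of matrices and compose componentwise. A minimal sketch (names illustrative, not from the paper):

```python
import numpy as np

# A morphism (V_1, V_tau) -> (W_1, W_tau) is a pair of matrices
# (m1, mt), one per component; composition is componentwise.

def compose(g, f):
    """Compose pairs of matrices: (g1, gt) after (f1, ft)."""
    g1, gt = g
    f1, ft = f
    return (g1 @ f1, gt @ ft)

# Example: (C, C^2) -> (C, C^2) followed by (C, C^2) -> (C, C)
f = (np.eye(1), np.array([[1.0, 0.0], [0.0, 2.0]]))
g = (np.array([[3.0]]), np.array([[1.0, 1.0]]))
h = compose(g, f)
assert h[0].shape == (1, 1) and h[1].shape == (1, 2)
```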

The choice of bases involves considerable arbitrariness, but there is a (somewhat) helpful guiding principle, namely that, if we have already chosen bases for two vector spaces, then the union of those bases serves naturally as a basis for the direct sum of those vector spaces. Some caution is needed, though, because the same vector space can arise as a direct sum in several ways and can thus have several equally natural bases. Indeed, much of our work below will be finding the transformations that relate such bases.
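The guiding principle can be phrased concretely: if the columns of two matrices list bases of two spaces, their block-diagonal juxtaposition lists the union basis of the direct sum. A small illustrative sketch (the helper name is ours):

```python
import numpy as np

def direct_sum(a, b):
    """Block-diagonal sum of two matrices whose columns are bases."""
    m, n = a.shape
    p, q = b.shape
    out = np.zeros((m + p, n + q))
    out[:m, :n] = a
    out[m:, n:] = b
    return out

# If the columns of A and B are bases of V and W, the columns of
# direct_sum(A, B) form the corresponding basis of V (+) W.
A = np.eye(2)
B = np.eye(1)
assert direct_sum(A, B).shape == (3, 3)
```

Different direct-sum decompositions of the same space yield different block bases, related by an invertible change-of-basis matrix; finding such matrices is exactly the work described below.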

The guiding principle tells us nothing about choosing bases for the one-dimensional spaces V_1 and V_τ in the pairs 1 = (V_1, 0) and τ = (0, V_τ). There isn’t even any non-zero morphism between these simple objects to suggest a correlation between the choices of bases. Nor do we get canonical bases here by evaluating compound expressions that fuse to τ or to 1 or to a sum of these. So we might as well identify these one-dimensional spaces with C and use the number 1 as the basis vector in both of them.

Then τ ⊗ τ = 1 ⊕ τ = (C, C) already has a basis for each of the two vector spaces. Let us turn to the triple product τ ⊗ (τ ⊗ τ). As a pair of vector spaces, it is isomorphic to (C, C^2), but we have some additional information about it, namely that it was obtained as the sum of τ ⊗ 1 = τ and τ ⊗ τ = 1 ⊕ τ. Our guiding principle thus suggests choosing a basis in C^2 that respects this sum decomposition. That is, one of the basis vectors in C^2 should come from the first summand, τ, and the other should come from the second summand, 1 ⊕ τ.
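The decomposition behind this basis suggestion can be traced on dimension pairs. A sketch (the `tensor` helper encodes the fusion rule and is our notation, not the paper's):

```python
# Dimension pairs: 1 = (1, 0), tau = (0, 1); fusion tau (x) tau = 1 (+) tau.
def tensor(a, b):
    a1, at = a
    b1, bt = b
    return (a1 * b1 + at * bt, a1 * bt + at * b1 + at * bt)

one, tau = (1, 0), (0, 1)

# tau (x) (tau (x) tau): inner product first, then tensor with tau.
inner = tensor(tau, tau)       # 1 (+) tau        -> (1, 1)
triple = tensor(tau, inner)    # tau (+) (1 (+) tau) -> (1, 2), i.e. (C, C^2)
assert inner == (1, 1)
assert triple == (1, 2)

# The C^2 component splits as the second component of tau (x) 1 = tau
# (one copy of C) plus that of tau (x) tau = 1 (+) tau (another C):
assert tensor(tau, one)[1] + tensor(tau, tau)[1] == triple[1]
```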

Consider, however, the analogous computation with the other way of parenthesizing the triple product, (τ ⊗ τ) ⊗ τ. It also leads to the pair of vector spaces (C, C^2), and it also provides a suggestion for a basis of C^2. There is, however, no guarantee that this suggestion agrees with the one in the preceding paragraph. We shall see below that the two suggestions are actually guaranteed to disagree. We have two bases for C^2, and there will be a non-trivial matrix transforming the one into the other. We shall find that this matrix is almost uniquely determined.

There could, a priori, have also been two different natural bases for the first component C in τ^{⊗3}, although we shall see that, in this particular situation, they coincide.

These basis transformation matrices, relating the bases that arise from τ ⊗ (τ ⊗ τ) and from (τ ⊗ τ) ⊗ τ, amount to the associativity isomorphism α_{τ,τ,τ} in the definition of the monoidal category A.

Recall from Sect. 8.4 that all the associativity isomorphisms of A are determined by those with simple objects as subscripts. One of these is the α_{τ,τ,τ} mentioned just above; the others involve one or more 1’s in the subscript. Fortunately, all those others are identity maps, thanks to the identification of 1 ⊗ X and X ⊗ 1 with X. So the entire associativity structure of A comes down to two matrices, a 2 × 2 matrix relating the two bases for C^2 and a number (a 1 × 1 matrix) relating the two bases for C. These matrices are subject to the constraint given by the pentagon condition (Fig. 8.1). Below, we shall calculate that constraint explicitly. It will almost uniquely determine α.
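For orientation only: pentagon calculations for these Fibonacci fusion rules are known to produce, up to sign and basis conventions, a 2 × 2 matrix built from the golden ratio φ. The sketch below checks the basic algebraic properties of that standard matrix numerically; it anticipates, and may differ in conventions from, the calculation announced here.

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2          # golden ratio, phi^2 = phi + 1

# A standard form of the Fibonacci change-of-basis matrix (one common
# sign convention); pentagon calculations determine such a matrix up to
# choices of this kind.
F = np.array([[1 / phi,            1 / np.sqrt(phi)],
              [1 / np.sqrt(phi),  -1 / phi        ]])

# F is symmetric and an involution: F @ F = I, so F is its own inverse.
assert np.allclose(F @ F, np.eye(2))
assert np.allclose(F, F.T)

# The diagonal entries of F @ F rest on the identity phi^2 = phi + 1:
assert abs(phi ** 2 - (phi + 1)) < 1e-12
```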

We shall also calculate the constraint imposed by the hexagon condition on the braiding isomorphisms σ (Fig. 8.2). Again, the only component that needs to be computed is σ_{τ,τ}. The components where at least one subscript is 1 are trivial, and the components with non-simple objects as subscripts reduce, by distributivity, to ones with simple subscripts.