# Categorical Harmony in Comparison with Other Principles

Here we examine the relationships between categorical harmony, Belnap's notion of harmony, and the so-called reflection principle and definitional equations of Sambin and his collaborators.

The categorical approach to harmony poses several questions to Belnap's notion of harmony. As we saw above, implication → in intuitionistic logic is right adjoint to conjunction ∧. Suppose that we have a logical system L with logical constants ∧ and ∨ only, specified as the right and left adjoints of the diagonal Δ as above, and suppose we want to add implication → to L. Of course, this can naturally be done by requiring the right adjoint of ∧. Now, a basic theorem of category theory tells us that any right adjoint functor preserves limits (e.g., products), and any left adjoint functor preserves colimits (e.g., coproducts). This is a striking characteristic of adjoint functors. In the present case, the theorem tells us that ∧ preserves ∨; in other words, ∧ distributes over ∨. Thus, defining implication according to categorical harmony is not conservative over the original system L, since the bi-directional rules for ∧ and ∨ alone never imply the distributive law. Note that sequent calculus for ∧ and ∨ allows us to derive the distributive law without any use of implication; yet the bi-directional rules alone do not imply it.
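To see the non-conservativity concretely, here is a sketch of the non-trivial half of the distributive law, derived from the bi-directional (adjunction) rules alone once → is available; the injections into ∨ follow from the (∨) rule together with identity sequents:

```latex
% Bi-directional rules read off from the adjunctions:
%  (\wedge): \xi \vdash \varphi \wedge \psi \iff \xi \vdash \varphi \text{ and } \xi \vdash \psi
%  (\vee):   \varphi \vee \psi \vdash \xi \iff \varphi \vdash \xi \text{ and } \psi \vdash \xi
%  (\to):    \varphi \wedge \psi \vdash \xi \iff \psi \vdash \varphi \to \xi
\begin{align*}
& \varphi \wedge \psi \vdash (\varphi \wedge \psi) \vee (\varphi \wedge \chi)
  \quad\text{and}\quad
  \varphi \wedge \chi \vdash (\varphi \wedge \psi) \vee (\varphi \wedge \chi)
  && \text{injections via } (\vee) \\
& \Longrightarrow\;
  \psi \vdash \varphi \to \big((\varphi \wedge \psi) \vee (\varphi \wedge \chi)\big)
  \quad\text{and}\quad
  \chi \vdash \varphi \to \big((\varphi \wedge \psi) \vee (\varphi \wedge \chi)\big)
  && \text{by } (\to) \\
& \Longrightarrow\;
  \psi \vee \chi \vdash \varphi \to \big((\varphi \wedge \psi) \vee (\varphi \wedge \chi)\big)
  && \text{by } (\vee) \\
& \Longrightarrow\;
  \varphi \wedge (\psi \vee \chi) \vdash (\varphi \wedge \psi) \vee (\varphi \wedge \chi)
  && \text{by } (\to)
\end{align*}
```

Without →, the two currying steps in the middle are unavailable, which is exactly why the bi-directional rules for ∧ and ∨ on their own do not yield distributivity.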

Although proponents of Belnap's harmony would regard this as a strange (and perhaps unacceptable) feature, this sort of non-conservativity is nevertheless necessary and natural from a category-theoretical point of view. Furthermore, conservativity itself may be contested in one way or another. One way would be to advocate categorical harmony against Belnap's on the ground of the Quinean holistic theory of meaning, which implies that the meaning of a single logical constant in a system can, in principle, only be determined by reference to its global relationships with all the other logical constants in the whole system. If the meaning of a logical constant depends on the whole system, then adding a new logical constant may well change the meaning of old ones. Non-conservativity of logical constants is arguably a consequence of a form of meaning holism, even though it violates Belnap's harmony condition. In any case, we may at least say that the principle of categorical harmony, or Lawvere's idea of logical constants as adjoints, is in sharp conflict with Belnap's notion of harmony on the conservativity issue.

Another distinctive characteristic of adjoint functors is that right and left adjoints of a functor are each uniquely determined (up to isomorphism). By this very fact, we are justified in defining a concept via an adjunction. This actually implies that Belnap's uniqueness condition automatically holds if we define a logical constant according to the principle of categorical harmony. Thus, uniqueness is not something postulated in the first place; rather, it is simply a compelling consequence of categorical harmony. However, it is not obvious whether this is really a good feature. As a matter of fact, the exponentials (!, ?) of linear logic, for example, do not enjoy the uniqueness property (as noted in Pitts ; essentially because there can be different (co)monad structures on a single category). At the same time, however, we may doubt that exponentials are genuine logical constants. Indeed, it is sometimes said that Girard himself introduced them not as proper logical constants but as a kind of device to analyse structural rules. The rôle of exponentials is to control resources in inference, not to perform inference per se on their own. It would thus be a possible view that exponentials are a sort of “computational constants” distinct from ordinary logical constants. This is an issue common to both categorical harmony and Belnap's harmony.
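The uniqueness in question is the standard fact that adjoints are unique up to natural isomorphism; a one-line sketch for right adjoints:

```latex
% If F \dashv G and F \dashv G', then for all X, Y there are natural isomorphisms
\mathrm{Hom}(X, G Y) \;\cong\; \mathrm{Hom}(F X, Y) \;\cong\; \mathrm{Hom}(X, G' Y),
% naturally in X, so G Y \cong G' Y by the Yoneda lemma, naturally in Y;
% hence G \cong G'. The argument for left adjoints is dual.
```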

There are further subtleties concerning uniqueness in categorical harmony, which involve a tension between cartesian and monoidal structures in category theory. When formulating the categorical procedure for introducing logical constants in the last section, it was remarked that we may replace the language of (plain) category theory with that of monoidal category theory if we want to treat substructural logics as well.

In such a case, we first have a monoidal product ⊗ in our primitive language, and then require, for example, a right adjoint of ⊗, which functions as multiplicative implication. Since any adjoint is unique, there appears to be no room for non-uniqueness. However, the starting point ⊗ may not itself be unique if it cannot be characterised as an adjoint functor, and many such cases can indeed be found in practice. The point is that, in general, monoidal structures can only be given from “outside” a category, i.e., one and the same category can carry different monoidal structures. If we have both ⊗ and the corresponding implication →, then ⊗ is a left adjoint of →. However, if we do not have implication, then ⊗ may not be characterisable as an adjoint, and thus may not be unique. This is the only room for non-uniqueness in categorical harmony, since every other logical constant must be introduced as an adjoint in the first place.
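In rule form, the situation is as follows (writing ⊸ for multiplicative implication): once ⊸ is present, ⊗ is pinned down up to isomorphism as its left adjoint, but without ⊸ nothing forces uniqueness, since a monoidal structure is extra structure on a category rather than a property of it:

```latex
% With multiplicative implication, \otimes is determined as a left adjoint:
\varphi \otimes \psi \vdash \xi \quad\text{iff}\quad \psi \vdash \varphi \multimap \xi
% Without \multimap, \otimes is unconstrained extra structure: e.g.,
% (\mathbf{Set}, \times) and (\mathbf{Set}, \sqcup), or
% (\mathbf{Vect}, \otimes) and (\mathbf{Vect}, \oplus),
% are different monoidal structures on the same underlying category.
```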

From a proof-theoretic point of view, having a monoidal structure on a category amounts to having the comma “,” as a punctuation mark in the meta-language of sequent calculus. In sequent calculus, we are allowed to put sequents in parallel (otherwise we could not express many rules of inference), and at the same time, we are allowed to put formulae in parallel inside a sequent by means of commas. The former capacity corresponds to the categorical capacity to have cartesian products, and the latter to the capacity to have monoidal products. This seems relevant to the following question: why can monoidal structures ⊗ be allowed in category theory, in spite of the fact that in general they cannot be defined via universal mapping properties? To put it in terms of categorical harmony, why can monoidal structures ⊗ be allowed as primitive vocabulary to generate logical constants? (And why is nothing else allowed as primitive vocabulary?) This is a difficult question, and there would be different possible accounts of it. One answer is that there is no such reason, and ⊗ ought not to be accepted as primitive vocabulary in the principle of categorical harmony. Yet I would like to seek some conceptual reasons for permitting ⊗ as primitive vocabulary in the following.
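The two capacities can be read off from standard sequent rules: parallel premises exploit the meta-level (cartesian) product, while the comma inside a sequent embodies the object-level monoidal product. A typical contrasting pair, assuming a standard sequent calculus with both additive and multiplicative conjunction:

```latex
% Additive conjunction: two parallel premises sharing one context
\frac{\Gamma \vdash \varphi \qquad \Gamma \vdash \psi}{\Gamma \vdash \varphi \wedge \psi}
\qquad\qquad
% Multiplicative conjunction: two contexts merged by the comma
\frac{\Gamma \vdash \varphi \qquad \Delta \vdash \psi}{\Gamma, \Delta \vdash \varphi \otimes \psi}
```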

For one thing, ⊗ is presumably grounded upon a sort of epistemic capacity of ours to put symbols in parallel (inside and outside sequents), as discussed above. This epistemic capacity may be so basic that it plays fundamental rôles in symbolic reasoning as well as in many other cognitive practices; this leads to a sort of epistemic account of the admissibility of ⊗ in the principle of categorical harmony. Another, “informational” account seems possible as well. There are three fundamental questions: What propositions hold? Why do they hold? How do they hold? The first is about truth and falsity, the second about proofs, and the last about the mechanisms of proofs. An answer to the last question must presumably include an account of how resources or assumptions for inference are used in proofs, or how relevant inferential information is used in proofs. And ⊗ may be seen as a means to address that particular part of the third question. This is the informational account, which has some affinities with the view of linear logic as the logic of resources.

Yet another, “physical” account may be offered. In recent developments of categorical quantum mechanics by Abramsky and Coecke (see Abramsky  and references therein), the capacities to put things in parallel as well as in sequence play vital rôles in their so-called graphical calculus for quantum mechanics and computation, where parallel composition represents the composition of quantum systems (resp. processes), i.e., the tensor product of Hilbert spaces (resp. morphisms), which is crucial in quantum phenomena involving entanglement, such as the Einstein–Podolsky–Rosen paradox and the violation of the Bell inequality. In general, ⊗ lacks diagonals and projections, unlike the cartesian ×, and this corresponds to the No-Cloning and No-Deleting theorems of quantum computation, which state that quantum information can neither be copied nor deleted (note that diagonals Δ : X → X ⊗ X copy the information X, and projections p : X ⊗ Y → X delete the information Y). On the other hand, classical information can be copied and deleted as you like. So, the monoidal nature of ⊗ marks a crucial border between classical and quantum information. To account for such quantum features of the microscopic world, we do need ⊗ in the logic of quantum mechanics, and this would justify adding ⊗ to the primitive vocabulary.
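The contrast between cartesian × and monoidal ⊗ can be made concrete by the standard no-cloning computation: a linear “diagonal” would have to clone superpositions, which linearity forbids. A sketch:

```latex
% Suppose a linear map U with U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
% for every state |\psi\rangle. For |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
% linearity gives
U\big((\alpha|0\rangle + \beta|1\rangle) \otimes |0\rangle\big)
 = \alpha\,|0\rangle \otimes |0\rangle + \beta\,|1\rangle \otimes |1\rangle,
% whereas cloning would require
(\alpha|0\rangle + \beta|1\rangle) \otimes (\alpha|0\rangle + \beta|1\rangle)
 = \alpha^{2}|00\rangle + \alpha\beta|01\rangle + \alpha\beta|10\rangle + \beta^{2}|11\rangle.
% The two agree only when \alpha\beta = 0, i.e., for non-superposed states.
```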

The physical account seems relevant to the well-known question “Is logic empirical?”, which was originally posed in the context of quantum logic and has been discussed by Quine, Putnam, Dummett, and indeed Kripke (see Stairs ). The need for multiplicative ⊗ in the “true” logic of quantum mechanics is quite a recent issue which has not yet been addressed in the philosophy community, and it may have some consequences for both the traditional question “Is logic empirical?” and the present question “Why are substructural logical constants so special?”, as partly argued above. A more detailed analysis of these issues will be given elsewhere.

Sambin et al.  present a novel method to introduce logical constants by what they call the reflection principle and definitional equalities, some of which are as follows:

ϕ ∨ ψ ⊢ ξ iff ϕ ⊢ ξ and ψ ⊢ ξ.

ϕ, ψ ⊢ ξ iff ϕ ⊗ ψ ⊢ ξ.

Γ, ϕ ⊢ ψ iff Γ ⊢ ϕ → ψ.

As these cases show, definitional equalities are quite similar to the adjointness conditions of categorical harmony (when the latter are formulated as bi-directional rules), even though Sambin et al. do not mention category theory at all. In particular, in the case of additive connectives, their definitional equalities are exactly the same as the bi-directional rules induced by the corresponding adjunctions. There are crucial differences, however. Among them, the following fact should be emphasised:

• Definitional equalities do not always imply adjointness, partly due to what they call the “visibility” condition, which requires us to restrict context formulae in sequent-style rules of inference (categorically, this amounts to restricting so-called Frobenius reciprocity conditions).

– For example, implication is not necessarily a right adjoint of conjunction in the system of “basic logic” derived via their guiding principles.

This deviation from adjointness actually seems inevitable for Sambin et al., because they want to include Birkhoff-von Neumann's quantum logic, with some concept of implication, as a structural extension of their basic logic; however, quantum implication (if any) cannot be a right adjoint of conjunction, due to the non-distributivity of the quantum-logical connectives, which is essential in Birkhoff-von Neumann's quantum logic for accounting for superposition states in quantum systems.
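The failure of adjointness can be checked in the smallest non-trivial case, the lattice of closed subspaces of a two-dimensional Hilbert space: if ∧ had a right adjoint it would distribute over ∨ (as a left adjoint preserving joins), but it does not. A standard counterexample:

```latex
% In the subspace lattice of \mathbb{C}^2, take
% X = \mathrm{span}(e_1), \quad Y = \mathrm{span}(e_2), \quad Z = \mathrm{span}(e_1 + e_2).
X \wedge (Y \vee Z) \;=\; X \wedge \mathbb{C}^2 \;=\; X,
\qquad
(X \wedge Y) \vee (X \wedge Z) \;=\; 0 \vee 0 \;=\; 0.
% Since X \neq 0, distributivity fails, so \wedge cannot have a right adjoint here.
```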

In contrast, categorical harmony cannot allow for any sort of non-adjoint implication. Is this a good feature or not? It depends on whether such implication counts as genuine implication, and so on our very conception of logical constants. The categorical logician's answer would be no: for example, Abramsky  asserts that Birkhoff-von Neumann's quantum logic is considered to be “non-logic” because it does not have any adequate concept of implication (on the other hand, categorical quantum logic is said to be “hyper-logic”).

Finally, it should be noted that Schroeder-Heister  compares the framework of Sambin et al.  with his framework of definitional reflection, and that Bonnay and Simmenauer  propose to exploit the idea of Sambin et al.  in order to remedy the aforementioned defect (the “blonk” problem) of Došen's double-line approach in [8, 9].