# WHERE DID THE INTERACTIONS GO?

An explosion was averted. Every hare and every lynx has manifold interactions with its environment; changes in the hare and lynx populations are nothing but the aggregate consequences of these interactions; and aggregating the interactions looks to be the sort of intractable task that threatens to sink the sciences of complex systems—yet EPA proceeds without a hitch. What happened to the interactions? How were they, in the end, so easily aggregated? By answering this question in part, I will show you that the ontology of EPA does not conform to the model of the spatiotemporally organized layer cake.

Explaining the complexity explosion in wedding cake models above, I attributed it to two properties of the relations between the parts of those models: sensitivity and combinatorial complexity.

An inter-part relation is sensitive if small changes in the state of a part make for a difference (perhaps slight) in the relation. It gives rise to combinatorial complexity if the number of relations the model must keep track of (when aggregating) increases with the number of parts.

In a Newtonian model, the relevant relations are the connections that determine the force exerted by a spatiotemporal part—the forces that must be aggregated, that is, to determine the behavior of the system as a whole. In an EPA model, they are the relations that determine enion probabilities such as the chance of hare death—the probabilities that must be aggregated to determine the behavior of the system as a whole.

Unlike the Newtonian relations, the EPA relations are not sensitive: enion probabilities are not affected by small changes in the state of the relevant enion, such as a shift in position. Indeed, by the independence requirements stated above, they depend on almost nothing about any enion—the probability of a hare’s death is not affected by its position or by what happens to any other hare or lynx.

Nor are the EPA relations combinatorially complex: the enion probability analyst must keep track of one set of probabilities per enion type, and that is all. In the lynx/hare system that amounts to two sets, one for lynxes and one for hares, regardless of the population of each. A system with many lynxes is consequently no more difficult to model than a system with a few. Indeed, large populations make things simpler, by making it more likely that actual behavior will correspond to statistically expected behavior.
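The point can be made concrete with a minimal simulation sketch (all numbers here are invented for illustration, not drawn from any actual ecological model): the modeler tracks a single death probability for the hare type, and the larger the population, the more closely the realized death rate hugs that probability.

```python
import random

def simulate_deaths(n_hares, p_death=0.05, seed=0):
    """Simulate one month: each hare independently dies with probability p_death.

    Returns the realized fraction of deaths. Only one probability is
    tracked for the whole hare type, however many hares there are.
    """
    rng = random.Random(seed)
    deaths = sum(rng.random() < p_death for _ in range(n_hares))
    return deaths / n_hares

# Larger populations hug the expected rate more tightly (law of large numbers),
# so big systems are, if anything, easier for the EPA modeler, not harder.
for n in (100, 10_000, 1_000_000):
    print(n, simulate_deaths(n))
```

The realized fraction for a million hares lands very close to 0.05, while the hundred-hare run can stray noticeably; nothing about any individual hare's state enters the computation.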

These negatives—the lack of sensitivity and of combinatorial complexity—go some way toward explaining why EPA models do not suffer from a complexity explosion, but there is much more to be said. The source of enion probabilities’ insensitivity is particularly important: the key to understanding the power of EPA is, I think, to understand why there is so little dependence between the statistical behavior of enions and their exact or even approximate states. To put it another way, what should be explained are the independence assumptions upon which the applicability of EPA depends:

1. Enion probabilities depend only on population-level variables of the sort tracked by statistical models.
2. The outcomes to which enion probabilities are attached are stochastically independent.
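A small sketch may suggest why the second assumption matters so much for aggregation (the population size, probability, and trial counts are all invented for illustration): when the outcomes are stochastically independent, fluctuations in the aggregate death toll stay small; when every hare's fate hangs on one shared outcome, they balloon.

```python
import random

def total_deaths_independent(n, p, rng):
    # Assumption 2 satisfied: each hare's fate is its own Bernoulli trial.
    return sum(rng.random() < p for _ in range(n))

def total_deaths_correlated(n, p, rng):
    # Assumption 2 violated: one shared outcome decides every hare's fate.
    return n if rng.random() < p else 0

rng = random.Random(1)
n, p, trials = 1000, 0.05, 500
ind = [total_deaths_independent(n, p, rng) for _ in range(trials)]
cor = [total_deaths_correlated(n, p, rng) for _ in range(trials)]

def spread(xs):
    """Standard deviation of a list of aggregate death tolls."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Independence keeps aggregate fluctuations small (~sqrt(n p (1-p)) ≈ 7);
# full correlation inflates them by orders of magnitude (~n sqrt(p (1-p))).
print(spread(ind), spread(cor))
```

Both scenarios have the same expected death toll, so only the independence assumption licenses treating the statistically expected behavior as the behavior the system will actually exhibit.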

This is a project I tackle in Strevens (2003); an overview is given in Strevens (2005). The complete story is not something that I will undertake to give here. For the purposes of understanding the implications of compositionality for ontology, it will be enough to answer an easier question: where, in EPA, do the interactions go?

We know that there are many interactions between hares and other hares, lynxes, and their environment. Try to track these many interactions and you will generate—so I have supposed—an immediate combinatorial catastrophe. Enion probability analysis, by representing the behavior of the system as a whole, represents the aggregate effect of these interactions. Yet it somehow, in its formalism, avoids having to represent the interactions explicitly—and so avoids having to aggregate them formally, bypassing the aggregation problem that would result. The interactions are packed away in some place in which they cannot get out of hand. Where?

Consider the probability of a hare’s being killed by a lynx over the course of a month—the number that determines (more or less) the rate of hare predation. On what features of the lynx/hare ecosystem does the value of this number depend? What aspects of the system go into determining that the probability of hare death per month is, say, 0.05 rather than 0.1? The relevant factors include the total number of lynxes, the techniques that lynxes use to hunt hares, the techniques that hares use to avoid lynxes, the nature of the vegetation in the habitat, and more. Change any of these things in significant ways, and the magnitude of the probability of hare death will surely change.

The dependence of the hare death probability on the first of the enumerated factors—on the number of lynxes—is represented explicitly in the EPA model. The effect of lynx number on the probability is, in other words, “externalized.” What about the rest? They are entirely internal to the probability, which is to say that their net effect is built into the probability—in formal terms, built into the 0.05; in metaphysical terms, built into the physical probability quantified by that number. As a consequence, the model need not explicitly take these interactions into account.
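On this picture, a toy EPA-style update rule might be sketched as follows (every coefficient here is invented for illustration): the lynx count enters as an explicit, externalized argument, while the hunting tactics, evasion techniques, vegetation, and the rest are internalized—compressed into the constants that fix the death probability.

```python
def step(hares, lynxes, p_death_base=0.02, k=0.0003, birth_rate=0.08):
    """One month of a toy EPA-style hare population update.

    The lynx count is the only externalized determinant of hare death;
    tactics, terrain, vegetation, etc. are internalized into the
    constants p_death_base and k. All coefficients are invented.
    """
    p_death = min(1.0, p_death_base + k * lynxes)  # e.g. 100 lynxes -> 0.05
    expected_deaths = hares * p_death              # aggregate by expectation
    expected_births = hares * birth_rate
    return hares + expected_births - expected_deaths

# With 1000 hares and 100 lynxes: deaths 50, births 80.
print(step(1000, 100))  # -> 1030.0
```

The myriad interactions never appear in the formalism; their net effect survives only as the values of two constants, which is where they are "packed away."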

There I will pretty much leave the explanation of the miracle of EPA, taking away two claims about enion probabilities. First, the probabilities are not physically separate and independent entities. They are attached to physically independent entities—to different hares—but they physically overlap, since numerically identical states of affairs contribute to many distinct probabilities. The lynxes’ tactics, for example—ultimately a matter of lynx brain configuration, I suppose—help to determine each hare’s probability of death, as do many other shared aspects of lynx makeup. Follow the death probabilities for different hares down to the fundamental level, then, and they converge on many of the same fundamental-level facts. In other words, the reduction or supervenience bases for any two hares’ death probabilities overlap. Contrast this with a standard wedding-cake theory such as Newtonian gravitation, on which each object contributes to and experiences the net gravitational field in virtue of a wholly spatiotemporally intrinsic property, its mass. The principles for enion probabilities’ individuation bear little resemblance to the principles for the construction of the wedding cake.

Second, it is this extrinsic and overlapping quality that makes it possible for EPA to avoid an explosion of complexity, opening the door to a compositional theory that shrugs off the aggregation problem. How so? I venture that a compositional theory must, in order to be useful, individuate a system’s parts and properties so that they are in some sense largely independent. Enion probability analysis does not, in its delineation of the determinants of enion behavior, divide the world into factors that are physically independent. It does, however, divide the world into determinants of behavior that are *stochastically* independent, and here lies its power: the rules for aggregating stochastically independent determinants of behavior are far more tractable than the rules, in any interesting system, for aggregating physically independent determinants.

A metaphysical postscript: I have assumed that we find in the fundamental-level world all the materials, factual and nomological, that we need to build enion probability distributions, and I have shown that science uses the distributions so constituted to predict and explain a great many things. Let me now assert, as promised in the prefatory note to this paper, that this predictive and explanatory prowess confers on the probabilities a commensurate ontological status: they are real probabilities (Strevens 2011), and their individuating principle therefore traces a real ontological fault line.