We now move to the formalization of the coefficient of influence, first defining the notion of relevance quotient. But in order to provide this definition we must introduce the probability (function) we are using.

Regular probabilities

Let X_{1}, X_{2}, ..., X_{n}, ... be a sequence of random variables denoting the individuals of a system, each bearing an attribute or, as we prefer to say, belonging to a cell of the class {1, ..., d}. D := X_{1} = j_{1} ∧ ... ∧ X_{i} = j_{i} ∧ ... ∧ X_{n} = j_{n}, where j_{i} ∈ {1, ..., d} for each i, is the conjunction of n (atomic) propositions specifying the cells to which the first n individuals of the sequence belong. Thus D is the description of the (individuals of the) system with respect to the attributes being considered. Carnap called D a state description. We call D an individual distribution or, until section 8.4, the evidence (data). The evidence size is n, that is, the number of individuals considered in D.

P(X_{n+1} = j | D)     (2)

is the final (conditional) probability of X_{n+1} = j given D. Other names used for (2) are "predictive probability" and "prediction rule."
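As a concrete illustration of a predictive probability of the form (2), one may take Laplace's rule of succession, P(X_{n+1} = j | D) = (n_j + 1)/(n + d), where n_j is the number of individuals of D belonging to cell j. This particular rule, and the function name below, are our illustrative assumptions, not part of the text; the sketch merely shows how a prediction rule maps an evidence D and a cell j to a probability.

```python
from collections import Counter

def predictive(evidence, j, d):
    """P(X_{n+1} = j | D) under Laplace's rule of succession
    (an assumed example of (2)): (n_j + 1) / (n + d),
    where n_j counts the individuals of the evidence in cell j
    and n is the evidence size."""
    n_j = Counter(evidence)[j]          # Counter returns 0 for unseen cells
    return (n_j + 1) / (len(evidence) + d)

# Evidence D: five individuals distributed over d = 3 cells.
D = [1, 1, 2, 3, 1]
print(predictive(D, 1, 3))    # (3 + 1) / (5 + 3) = 0.5
print(predictive([], 1, 3))   # void evidence: initial probability 1/d = 1/3
```

Note that with void evidence the same rule yields the initial (absolute) probability 1/d, matching the special case discussed below.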

It is worth noting at this point that the probability function defined by (2) is a relative notion, that is, a function of two variables: X_{n+1} = j, the hypothesis, and D, the evidence. Moreover, we recall that Keynes called "premiss" (but sometimes also "hypothesis," as in the definition of the coefficient of influence) what we call evidence, "conclusion" what we call hypothesis, and "argument" what we call probability function.

It can be useful to have at one's disposal an absolute probability, too. Such probabilities are to be used when the evidence is devoid of factual content, that is, when data are lacking. We shall denote by V evidence devoid of factual content, briefly, a void evidence. If this is the case, the evidence size is 0. An absolute probability is a function of one variable and can be defined as a special case of (2). In fact, if the evidence is void, (2) becomes P(X_{n+1}= j | V ), briefly P(X_{n+1}= j ). This is the initial (absolute) probability of X_{n+1} = j.

Besides the probability axioms we consider some further conditions. The first is:

Regularity: P(X_{n+1} = j | D) > 0, for every j ∈ {1, ..., d} and every evidence D.

For regular probability functions we define Keynes's relevance quotient (at D) of g against j, corresponding to (1), as

K^g(D) = P(X_{n+2} = j | D ∧ X_{n+1} = g) / P(X_{n+2} = j | D),   g ≠ j.     (3)

The condition of regularity ensures that K^g(D) is never meaningless. In Keynes's symbolism, a/bh is P(a | h ∧ b), the probability of a given h and b, while a/h is P(a | h). Hence, if in (1) we put X_{n+2} = j for a, D for h, and X_{n+1} = g for b, we have (3).

Truly, the name of K^g(D) should be heterorelevance quotient. The reason is that one can also define a homorelevance quotient as

K^j(D) = P(X_{n+2} = j | D ∧ X_{n+1} = j) / P(X_{n+2} = j | D).     (4)

We have called K^{g}(D) relevance quotient because in what follows we do not deal with homorelevance quotients.

In order to grasp the meaning of the relevance quotient, consider an ordered sequence of individuals. With regard to this sequence we are interested in the cell to which the (n + 2)th term of the sequence, X_{n+2}, belongs. We know D, that is, the cells to which the individuals occupying the first n places of the sequence belong. On the basis of these data we consider the probability that X_{n+2} belongs to the cell j. This is P(X_{n+2} = j | D). Now suppose it becomes known to us that the (n + 1)th individual of the sequence belongs to a cell g other than j, that is, we know that X_{n+1} = g holds true. With this additional datum, the evidence becomes D ∧ X_{n+1} = g and the probability we are interested in is P(X_{n+2} = j | D ∧ X_{n+1} = g). (3) is the ratio between these two probabilities. Thus, the relevance quotient measures the strength of the further, so to speak, adverse datum X_{n+1} = g on the probability of X_{n+2} = j.
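The effect of the adverse datum can be made tangible with a small sketch. We again assume, purely for illustration, Laplace's rule of succession as the underlying predictive probability; since that rule is exchangeable, P(X_{n+2} = j | D) equals P(X_{n+1} = j | D), which the code exploits. The function names are ours, not the text's.

```python
from collections import Counter

def predictive(evidence, j, d):
    # Laplace's rule of succession (an illustrative assumption):
    # P(next individual in cell j | evidence) = (n_j + 1) / (n + d)
    return (Counter(evidence)[j] + 1) / (len(evidence) + d)

def keynes_quotient(evidence, j, g, d):
    """Keynes's relevance quotient (3) at D of g against j:
    P(X_{n+2} = j | D and X_{n+1} = g) / P(X_{n+2} = j | D).
    Under an exchangeable rule such as Laplace's, the denominator
    P(X_{n+2} = j | D) equals P(X_{n+1} = j | D)."""
    assert g != j, "heterorelevance requires g different from j"
    return predictive(evidence + [g], j, d) / predictive(evidence, j, d)

D = [1, 1, 2, 3, 1]                   # n = 5 individuals, d = 3 cells
print(keynes_quotient(D, 1, 2, 3))    # (n + d) / (n + 1 + d) = 8/9
```

Under this assumed rule the quotient is (n + d)/(n + 1 + d) < 1: the adverse datum g lowers the probability of j, as the scenario above suggests.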

With this scenario clear, we can imagine a more natural relevance quotient. Suppose that the individuals we are considering are experimental trials, such as the repeated observations of the moon's crater Manilius or the computation of the number of persons killed by horse kicks. For the sake of simplicity, we focus on a familiar experiment, namely, drawing from an urn containing balls of d different colors. If this is the case, the evidence describes the colors of the first n drawn balls. On the basis of these data we consider the probability that the color of the next drawn ball, X_{n+1}, is j. This is P(X_{n+1} = j | D). Then we perform a further trial, the (n + 1)th, and ascertain that g, different from j, is the color of the (n + 1)th drawn ball. The evidence now is D ∧ X_{n+1} = g, and we are interested in the color of the ball we shall draw in the next trial, that is, the (n + 2)th. Hence the probability we are looking for is P(X_{n+2} = j | D ∧ X_{n+1} = g). If we take the quotient of these two, so to speak, subsequent probabilities, we have

P(X_{n+2} = j | D ∧ X_{n+1} = g) / P(X_{n+1} = j | D).     (5)

This is what we call Carnap's relevance quotient, which is slightly different from Keynes's. We refer (5) to Carnap even though it plays no role in Carnap's derivation, because it is essential in the Carnapian tradition (see Costantini, 1979; 1987). In fact, (3) and (5) refer to two different situations. (5) measures the strength the observation of a color different from j has upon the probability that j is the color of the ball that we shall draw in the next trial. This scenario is much more realistic than that considered in (3). (5) refers to the evolution of a sequence of experimental observations, comparing probabilities that it is very natural to compare when referring to such observations. Luckily, when exchangeability holds true, the two quotients have the same numerical value.
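The agreement of (3) and (5) under exchangeability can be checked numerically. The sketch below again assumes Laplace's rule of succession (which is exchangeable) and, to avoid assuming the very fact being checked, computes Keynes's denominator P(X_{n+2} = j | D) by total probability over the (n + 1)th outcome rather than by invoking exchangeability directly.

```python
from collections import Counter

def predictive(evidence, j, d):
    # Laplace's rule of succession (illustrative assumption)
    return (Counter(evidence)[j] + 1) / (len(evidence) + d)

def two_step(evidence, j, d):
    # P(X_{n+2} = j | D), obtained by total probability over the
    # possible outcomes k of the (n+1)th trial
    return sum(predictive(evidence, k, d) * predictive(evidence + [k], j, d)
               for k in range(1, d + 1))

D, d, j, g = [1, 1, 2, 3, 1], 3, 1, 2

keynes = predictive(D + [g], j, d) / two_step(D, j, d)     # quotient (3)
carnap = predictive(D + [g], j, d) / predictive(D, j, d)   # quotient (5)
print(keynes, carnap)   # the two agree (up to floating-point rounding)
```

The denominators coincide because, for an exchangeable probability function, the predictive probability of cell j does not depend on the position of the individual in the sequence, which is exactly why the two quotients share one numerical value.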