Dishonest Decisions

It is standard to classify the difficulty of a problem according to its membership in a set of functions or relations: for example, whether it is Turing-computable, or computable in polynomial time or in polynomial space. Computational models, such as Turing machines with arbitrary outputs, compute sets of (partial) functions, whereas decision models, such as finite automata or Turing machines with only “yes” or “no” outputs, compute sets of relations.

We argue that computational families, which implement functions, capture the essence of computational power more accurately than do decision families, which implement relations. For that reason, we based the notion of honest implementation and complexity, even for decision problems, on functions rather than on decision procedures. The underlying reason that functions are more adequate than relations is that the former can also comprise the means to generate the representations of objects.

As defined in Sect. 6.5, a computational family over a domain A is a set of functions F ⊆ {f : A* → A}; likewise, a decision family over A is a set of relations R ⊆ {r ⊆ Aⁿ | n ∈ ℕ}. Specific computational or decision families are defined via some internal mechanism, a model of computation, a point that will play a role in our arguments later.

Decision families are inherently incomplete, in that one can readily “increase” their power via a representation that adds some information on top of the represented element [3]. For example, let h be an incomputable decision problem over Σ*, and consider the representation ρ : Σ* → Σ⁺, where ρ(w) = h(w)w. (The representation just adds the incomputable bit h(w) before the word w.) Then, Turing machines can “decide”, via the representation ρ, both h and all of the ordinary Turing-decidable problems.
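This smuggling trick can be made concrete. The sketch below (hypothetical Python names; for illustration, a computable stand-in plays the role of the incomputable h, since the point is only that the “decider” never computes h, it merely reads off the bit that ρ hid in the input):

```python
def make_rho(h):
    """Build the representation rho(w) = h(w)w, which prepends the answer bit.
    If h is incomputable, rho is too -- but a representation need not be computable."""
    return lambda w: ("1" if h(w) else "0") + w

def decide_h(rho_w):
    """A trivially computable 'decider' for h over represented inputs:
    it just reads the answer bit that rho smuggled in."""
    return rho_w[0] == "1"

# Computable stand-in for h (the argument is the same for an undecidable h):
h = lambda w: w == w[::-1]          # "is w a palindrome?"
rho = make_rho(h)
assert decide_h(rho("0110")) == h("0110")   # accepted
assert decide_h(rho("01")) == h("01")       # rejected
```

The ordinary Turing-decidable problems remain decidable under ρ as well, since a machine can simply skip the first bit and run its usual procedure on the rest of the word.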

Surprisingly, the weak computational model of finite automata (FSAs) is already powerful enough to decide, via a suitable representation, any countable set of (decidable or undecidable) relations [13]. The representation hides, with each domain element, a finite amount of data relevant to finitely many relations, such that each decision procedure gets all the data it needs from the represented inputs.

Let Σ be the binary alphabet {0, 1} throughout the remainder of this section.

Lemma 6.2 ([13]) For every countable set R of relations over the natural numbers ℕ, there is an injection ρ : ℕ → Σ*, such that the set FSA of finite automata simulates R via ρ, viewing a relation r ⊆ ℕ as a Boolean function r : ℕ → Σ.

Accordingly, an FSA a computes the function a : Σ* → {ρ(0), ρ(1)}, returning ρ(0) when the input word is rejected and ρ(1) when it is accepted.

Proof Let r₁, r₂, … be any enumeration of the relations in R. Define the representation ρ : n ↦ r₁(n) r₂(n) ⋯ rₙ(n). For every n, the length of ρ(n) is n, and it gives explicit answers to the first n relational questions rᵢ. Now, for every relation rᵢ ∈ R, consider the FSA aᵢ depicted in Fig. 6.4. One can easily verify that, for each m ∈ ℕ, the automaton aᵢ accepts ρ(m) iff m ∈ rᵢ. For an input word w of length m ≥ i, aᵢ finds the answer to whether m ∈ rᵢ at the i-th digit of w. For the finitely many inputs of length m < i, representing numbers up to (but not including) i, the first i states of aᵢ are fixed to accept (and “return” ρ(1)) or reject (returning ρ(0)) the input word ρ(m), according to whether m ∈ rᵢ. □

Fig. 6.4 The finite automaton aᵢ, which implements an arbitrary relation rᵢ via the representation ρ of the proof of Lemma 6.2
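The automaton of the proof can be simulated concretely. In the hypothetical sketch below, `rel(j)` is a computable stand-in for the enumeration r₁, r₂, … (the construction is identical for undecidable relations, since ρ need not be computable); the finitely many short inputs are hardcoded, mirroring the first i states of aᵢ, and longer inputs are decided by inspecting the i-th digit:

```python
def rel(j):
    """r_j: the j-th relation in the enumeration (1-indexed).
    Computable stand-ins, for illustration only."""
    if j == 1:
        return lambda m: m % 2 == 0
    if j == 2:
        return lambda m: m % 3 == 0
    return lambda m: m > j              # arbitrary further relations

def rho(m):
    """rho(m) = r_1(m) r_2(m) ... r_m(m); note len(rho(m)) == m."""
    return "".join("1" if rel(j)(m) else "0" for j in range(1, m + 1))

def make_automaton(i):
    """a_i accepts the word rho(m) iff m is in r_i: for m >= i it reads
    the i-th digit of the input; the finitely many answers for m < i are
    hardcoded, corresponding to the first i states of a_i in Fig. 6.4."""
    table = {m: rel(i)(m) for m in range(i)}    # finite hardcoded part
    def a_i(word):
        m = len(word)                           # the represented number
        return table[m] if m < i else word[i - 1] == "1"
    return a_i

a_2 = make_automaton(2)                 # decides r_2: "divisible by 3"
assert all(a_2(rho(m)) == (m % 3 == 0) for m in range(20))
```

Note that all the deciding work is done by ρ; the automaton itself only copies one precomputed bit.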

One might have presumed that this disturbing sensitivity to representations would be resolved by limiting representations to bijections, but this is unfortunately not the case, as shown in [13].

Theorem 6.5 ([13]) For every countable set R of relations over ℕ, there is a bijection π : ℕ → Σ*, such that the set FSA of finite automata closely implements R via π.

The above-described inherent incompleteness of decision families, that they can easily be enlarged by representing input differently, stems from their inability to generate the representation of the input. On the other hand, as shown in Theorem 6.1, the set of recursive functions is complete, in the sense that it cannot honestly implement an incomputable function, regardless of the choice of representation.

A question naturally arises considering the completeness of recursive functions (Theorem 6.1) and the inherent incompleteness of decision families (Lemma 6.2 and Theorem 6.5): Where does the proof of Lemma 6.2 break down if we try to modify it to demonstrate that every set of functions can be computed, via a suitable representation, by finite-state transducers (input-output automata)?

The answer is that, with the aim of computing a countable set of functions {f₁, f₂, …}, the representation used in the proof of Lemma 6.2 may be generalized to something like ρ(n) = f₁(n) $ f₂(n) $ ⋯ $ fₙ(n). Then, for every function fᵢ, there is indeed a transducer aᵢ, such that, for every n, we have aᵢ(ρ(n)) = fᵢ(n). This, however, does not fit the bill: to properly implement fᵢ, we need aᵢ to return ρ(fᵢ(n)), not fᵢ(n). One might be tempted to suggest instead a representation π that already provides the represented values, as in π(n) = π(f₁(n)) $ π(f₂(n)) $ ⋯ $ π(fₙ(n)). This is, however, a circular definition: let f₁ be the successor function s. Then π(1) = π(f₁(1)) = π(s(1)) = π(2), which in turn is π(s(2)) $ π(f₂(2)) = π(3) $ π(f₂(2)), and so on, without end.
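The failed generalization can be made concrete. In the hypothetical sketch below, a transducer can indeed extract fᵢ(n) from ρ(n) by copying out the i-th field, but what it emits is the value itself, not its representation ρ(fᵢ(n)), which is why the lemma does not carry over to functions:

```python
def make_rho(fs):
    """rho(n) = f_1(n) $ f_2(n) $ ... $ f_n(n), where fs(j) is f_j (1-indexed)."""
    return lambda n: "$".join(str(fs(j)(n)) for j in range(1, n + 1))

def transducer(i, rho_n):
    """What a finite-state transducer can do: copy out the i-th field."""
    return int(rho_n.split("$")[i - 1])

fs = lambda j: (lambda n: n + j)        # illustrative family f_j(n) = n + j
rho = make_rho(fs)

n, i = 5, 3
assert transducer(i, rho(n)) == fs(i)(n)    # it yields f_i(n) = 8 ...
assert rho(fs(i)(n)) != str(fs(i)(n))       # ... but rho(f_i(n)) is a longer
                                            # $-separated string, which the
                                            # transducer has no way to produce
```

Attempting to repair this by defining π(n) = π(f₁(n)) $ ⋯ $ π(fₙ(n)) makes the recursion above non-terminating, exactly as in the circularity argument in the text.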

Finally, it may be worthwhile noting that Turing’s halting problem is immune to the particular representation of programs [1], as are similar problems, though, as we have seen, decision procedures are quite sensitive to the representation of input data. Here the problem is to decide whether machine M halts on input string w. Problem instances are pairs ⟨⌜M⌝, w⟩, consisting of an encoding ⌜M⌝ of the machine along with the input w or an encoding ⌜w⌝ thereof. However, the pairing function itself must be honest, as explained in Sect. 6.7. In that situation, the encoding of any given machine (or computer program) can only hide a finite amount of information, not enough to answer the halting problem for all inputs to the machine, though the representation of those inputs themselves could hide the answers.

 