Introduction

Year 1977: Martin Davis appears in print with “Applied Nonstandard Analysis” [14], whose subject lies farther from computability and computational logic than the various areas to which Martin had contributed earlier. Nevertheless we will tie that book—appropriately, we believe—to another publication of the same year, “Metamathematical extensibility for theorem provers and proof-checkers” [23, pp. 120-146], jointly authored by Martin and his friend and colleague “Jack” (namely Jacob T. Schwartz).[1] We aim to unveil an affinity between some of the material which that book treats in preparation for analysis proper and the field of automated reasoning, of which Martin has been a trailblazer since its early days,[2] and to take advantage of that link for a proof-checking undertaking which we see as promising.

Martin’s book is dedicated to the memory of Abraham Robinson, the creator of nonstandard analysis. At the Summer Institute for Symbolic Logic held at Cornell University in 1957, a scientific gathering that both had attended, Robinson gave a talk in which he “made the provocative remark that the auxiliary points, lines, or circles ‘constructed’ as part of the solution to a geometry problem can be thought of as being elements of what is now called the Herbrand universe for the problem” [15, pp. 7-8].[3] At the same meeting Martin reported on his own implementation, carried out three years earlier on a JOHNNIAC machine, of Presburger’s decision procedure for elementary additive number theory [12]. This proximity of the two distinguished scholars’ interests in automating proofs was, presumably, coincidental.

In 1977, on the other hand, disappointment is beginning to set in within the automated deduction community (see [5]), as researchers experience the combinatorial explosion that plagues the automatic search for mathematical proofs even when pruned by the best available techniques. More emphasis is now placed on comfortable interaction between man and computerized proof assistants, and on proof checkers (see, e.g., [38]) as opposed to fully automatic theorem provers. Specific knowledge pertaining to diverse branches of mathematics begins to be perceived as essential for an advancement of proof techniques; Ballantyne and Bledsoe [3] (see also [2]) succeed in automating the proofs of hard theorems in analysis using methods which rely on the nonstandard viewpoint.

The new context brings to the fore issues related to correct-program technology and proof engineering. An emblem of the times is the Clear specification language [7], paving the way to the OBJ family of languages, which will integrate specification, prototyping, and verification into a system with a single underlying logic: theorem-proving is now aimed at providing mechanical assistance for proofs that are needed in the development of software and hardware. This is the scene encountered by the joint work [24] of Martin and Jack, at the dawn of large-scale proof technology.

Here is the issue they raised. “For use of mechanized proof verifier systems to remain comfortable over a wide range of applications, … it should be possible to augment the system by adding new symbols, schemes of notation, and extended rules of inference of various kinds” [24, p. 217]. A stringent requirement is that the envisioned changes to a system do not disrupt its soundness: a proof verifier should, therefore, be furnished with the metamathematical capability of justifying its progressive augmentations.

Use of a metamathematical extension mechanism, [24] points out, leads to the common acceptance of algebraic calculations in lieu of detailed predicate calculus proofs. Although recourse to the methods of nonstandard analysis in lieu of the ε-δ methods is not mentioned in that paper, we see that less familiar but expedient detour as being in accord with the matter under discussion.

As an arena for experimenting with this circle of ideas, we have undertaken a merciless formal remake of [14, Chap. 1] with Jack’s proof checker Ref (see [35, Chap. 4]), which embodies a variant of Zermelo-Fraenkel set theory. This task, which has hardly anything to do with analysis per se, is an essential prerequisite if we are to bring the methods of nonstandard analysis within the scope of Ref. As a result of the “mathematical simplicity, elegance, and beauty of these methods”—and of “enthusiasm … not unrelated to the well-known pleasures of the illicit”—we expect to eventually get the reward of “their far-reaching applications” (see [14, p. viii]).

Our effort will also suggest changes to Ref’s current implementation which can improve its metamathematical extensibility.

We have set up substantial ground for specifying and proving, by means of the Ref verifier (very succinctly described in Sect. 10.6), the two consequences of Łoś’s theorem which we need (namely, Theorems 10.1 and 10.2 in Sect. 10.3): once we have fully achieved those goals, we will move on to work on Robinson’s concurrence theorem and on a few other crucial propositions (Theorems 10.3, 10.4, 10.5, and 10.6 of Sect. 10.4). To complete our job we must then introduce “schemes of notation and extended rules of inference of various kinds” that properly assist Ref’s users in exploiting nonstandard methods.
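For the reader’s orientation we recall Łoś’s theorem in its classical formulation (we reproduce the standard statement from the literature; the exact phrasing of the two consequences just mentioned belongs to Sect. 10.3). Given structures $\mathfrak{A}_i$, for $i \in I$, over a common signature and an ultrafilter $\mathcal{U}$ over $I$,
\[
\prod_{i\in I}\mathfrak{A}_i\Big/\mathcal{U} \;\models\; \varphi\bigl[[f_1],\dots,[f_n]\bigr]
\quad\Longleftrightarrow\quad
\bigl\{\, i\in I \,:\, \mathfrak{A}_i \models \varphi\bigl[f_1(i),\dots,f_n(i)\bigr] \,\bigr\} \in \mathcal{U},
\]
where $[f]$ denotes the $\mathcal{U}$-equivalence class of $f \in \prod_{i\in I} A_i$. Taking all the $\mathfrak{A}_i$ equal to a single structure $\mathfrak{A}$, the ultrapower $\mathfrak{A}^I/\mathcal{U}$ satisfies exactly the first-order sentences true in $\mathfrak{A}$: this transfer principle is what the nonstandard method rests upon.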

In order to reach the goals of our experiment, we must express in set-theoretic terms metalevel notions such as the evaluation of a sentence in a universe; another not entirely trivial task concerns the representation of individuals (thought of as ‘nonsets’) within a formal system which deals with sets whose construction ultimately relies on nothing but the null set ∅. For these two matters, to be discussed in Sect. 10.9 and in Sects. 10.7 and 10.8 respectively, our experiment is innovative, at least as regards the Ref proof checker. In other respects, we can benefit from work previously done: among other things we found, already adequately formalized, a theory of ordinal numbers conceived à la Raphael M. Robinson, and the ultrafilter theorem obtained using Zorn’s lemma.
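To fix ideas on the former task: the evaluation of a sentence in a universe is, in essence, Tarski’s satisfaction relation, which set theory can internalize by recursion on the structure of (suitably encoded) formulae. In their textbook form (not necessarily the exact formalization adopted in Sect. 10.9), the defining clauses read, for a universe $\mathfrak{U}$ with domain $U$ and an assignment $\sigma$:
\begin{align*}
\mathfrak{U} \models (\varphi \wedge \psi)[\sigma] &\;\Longleftrightarrow\; \mathfrak{U} \models \varphi[\sigma] \text{ and } \mathfrak{U} \models \psi[\sigma],\\
\mathfrak{U} \models (\neg\varphi)[\sigma] &\;\Longleftrightarrow\; \mathfrak{U} \not\models \varphi[\sigma],\\
\mathfrak{U} \models (\forall x\,\varphi)[\sigma] &\;\Longleftrightarrow\; \mathfrak{U} \models \varphi[\sigma(x \mapsto a)] \text{ for every } a \in U.
\end{align*}
Since the formulae being evaluated are here themselves sets (say, tuples coding their syntax trees), the whole relation lives inside the Zermelo-Fraenkel universe within which Ref operates.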

Concerning proof checkers, issues of reuse have an even greater relevance than for theorem provers. Such issues pertain more to proof engineering than to computational logic:[4] rather than going through the same proof pattern several times, one should abstract a common method to be recalled over and over again, with all the conveniences offered by technology.

Reuse is supported in Ref by a construct named ‘Theory’ (see [31] and [35, pp. 19-25]), similar in spirit, though of a less algebraic nature, to the parameterized-specification mechanism of the aforementioned Clear language; its general shape is sketched below. This paper will discuss how to organize Theorys that enable one to tackle, without reiteration of techniques, the foundations of nonstandard analysis; hopefully, it will stimulate reflections on good “proof hiding” practices, of the kind which Martin’s passage [14, p. 42], quoted in the epigraph to this paper, seems eager to suggest.
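As we understand it from [31], a Theory packages formal parameters, assumptions constraining them, and output symbols together with the theorems those symbols are claimed to satisfy; an application of the Theory discharges the assumptions against actual arguments and makes the instantiated outputs available. The following schematic conveys the shape only: the identifiers are ours, and the concrete Ref syntax differs in details.

    THEORY myTheory(s, F(X))
      <assumptions constraining the parameters s and F>
    ==> (newSymbolΘ)
      <theorems claimed about the output symbol newSymbolΘ>
    END myTheory

A Theory thus plays, at the level of proofs, the role that a parameterized module plays at the level of specifications: it is proved once and applied as many times as needed.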

  • [1] See [22] and, therein, the enjoyable [16]; see also [21] and [1, pp. 478-480]. The above-cited [23] led to the sole joint publication by Martin and Jack, namely [24].
  • [2] Landmark contributions of Martin to automatic theorem-proving in 1st-order predicate logic have been [10, 13, 19, 20, 25], historically occurring between Paul C. Gilmore’s and Dag Prawitz’s methods, on the one hand, and J. Alan Robinson’s resolution principle on the other. Concerning the linked conjunct method then proposed by Martin and his team at Bell Labs, see [29, 39].
  • [3] The term ‘Herbrand universe’, today widely used, appeared for the first time in the influential paper [13] (reviewed in [34]); but [17, p. 432] contends that it would be more historically correct to credit the construction of that universe to Thoralf Skolem.
 