The DP and DPLL Procedures with Historical Context

It was no accident that the first propositional provers were developed simultaneously. For one thing, computers of substantial speed and memory size were just becoming available outside of special locations. Moreover, IBM was looking to support projects that demonstrated that computers could do tasks other than numerical work. (All of the projects we review were executed at IBM except for the DP and DPLL projects.) Also, logicians became aware of work done by Newell and Simon on automating human problem solving and reasoning. We discuss this below.

Actually, the earliest program that addressed a task in logic was the implementation of the Presburger algorithm by Martin Davis in 1954. Davis programmed the IAS computer to execute the Presburger algorithm, which decides queries in elementary number theory involving only addition [8].[1] The IAS computer was a very early computer built at the Institute for Advanced Study under John von Neumann’s direction; its specifications, including its storage of 1024 40-bit words, were followed by a number of the early computers.

Two years later, in 1956, Newell and Simon completed the Logic Theorist, a program that explored how humans might reason when tackling some theorems from Whitehead and Russell’s Principia Mathematica [39]. A few logicians were also aware of the ongoing Geometry-Theorem Proving Machine project, an automated deduction program that proved theorems in high school geometry, making use of the diagrams presented to students [17, 18].[2] The appearance of the Newell-Simon paper led some logicians to observe that algorithms using the tools of logic could well outperform the Logic Theorist. This is not to say that these logicians were unsympathetic to studying computer simulation of human reasoning, but they felt that any study of the power of computers in deduction warranted understanding what the best procedures could yield.

There were three major projects that were developed simultaneously with the DP procedure. We start with the method by P.C. Gilmore that is explicitly examined in the DP paper [12]. Shortly after joining IBM in 1958, Paul Gilmore undertook programming a theorem prover to teach himself elementary programming. He also wished to attend the IFIP conference in Paris (1959), so he thought he could learn some programming and have a paper for submission to the conference at the same time. He knew of the Logic Theorist, and was sure that he could do better using some results from logic [20]. His proof method was motivated by the semantic tableaux of Beth and Hintikka, but Gilmore acknowledges that the result owes more to Herbrand and Gentzen.

Like most of the other provers we consider, Gilmore’s prover tested for validity of predicate logic formulas rather than unsatisfiability of propositional formulas [21, 22]. However, his program centered on a propositional prover for unsatisfiability, and we focus on this propositional prover.

Gilmore’s propositional prover effectively involved a succession of conjunctive normal form (CNF) formulas (a conjunction of disjunctions or “clauses”) being folded into an existing disjunctive normal form (DNF) formula, a disjunction of conjunctions. This involves “multiplying out” the new CNF formula over the DNF formula using the familiar distributive law ((a ∨ b) ∧ c) ≡ ((a ∧ c) ∨ (b ∧ c)). Using arithmetical notation (+ for ∨, juxtaposition for ∧), (a + b + c + d)(ef + gh + ij) yields 12 partial products, and one sees the explosion that quickly overwhelms any computer. Gilmore, of course, saw this and used clever programming encodings to compress and speed the computation. One device, the truth list, allowed Gilmore to save variable names, important to Gilmore’s coding design. It also introduced a one-literal deletion rule, and the propagation of this rule, that later was a key component of the DP algorithm.

Gilmore did not explicitly note the possible propagation effect, and could never observe it because any computer run was almost immediately terminated with memory overflow.

To understand the truth list, consider the CNF formula (a + b)c multiplied by the existing DNF formula c̄d + c̄f + ef + eg + be.[3] After removal of contradictory clauses and redundant literals, we have acef + aceg + acbe + bcef + bceg + bce. The literal c occurs in each clause, as could be anticipated because we were “multiplying” a one-literal clause c into the existing DNF formula. When a literal appears in each clause it is added to the truth list and eliminated from each clause. Thereafter, any new occurrence of this literal or its complementary literal is ignored.[4] Also, after the contradictory clauses containing c̄ were removed during simplification, the variable e now occurs in each clause, as seen above, and is treated as a one-literal clause; it is removed and entered on the truth list. In this sense the one-literal clause property propagates. To justify the truth list action, note that were the literal retained, later reentering that literal in any clause would cause merging with the existing literal; adding the complementary literal would cause that clause to be contradictory.
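To make the multiply-and-compress step concrete, here is a minimal Python sketch (our illustration, with our own names and representation; Gilmore’s actual encodings were far more compact). Literals are nonzero integers, with -x for x̄; a DNF term is a frozenset of literals.

def multiply_clause(dnf, clause):
    """Multiply one CNF clause into a DNF formula (a list of terms),
    merging redundant literals and dropping contradictory products
    (those containing both x and -x)."""
    products = []
    for term in dnf:
        for lit in clause:
            product = term | {lit}
            if not any(-l in product for l in product):
                products.append(product)
    return products

def extract_truth_list(dnf, truth_list):
    """Move any literal occurring in every term onto the truth list and
    delete it from the terms; repeat, since each removal can expose new
    common literals (the propagation effect noted above)."""
    while dnf:
        common = frozenset.intersection(*dnf)
        if not common:
            break
        truth_list |= common
        dnf = [term - common for term in dnf]
    return dnf, truth_list

With a = 1, b = 2, …, g = 7, multiplying the one-literal clause [3] (that is, c) and then the clause [1, 2] (that is, a + b) into the DNF c̄d + c̄f + ef + eg + be reproduces the six products above, and extract_truth_list then moves both c and e onto the truth list.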

Dunham, Fridshal and Sward (DFS) developed a method strictly for testing validity in the propositional calculus [13, 14]. The DFS method shares with DPLL the branching property, and seems to be the first method to use this property. By splitting the given formula recursively, the method allows simplification of the reduced formula at each level, which for many formulas yields shallow trees and fast processing. Formulas were placed in negation normal form (NNF), which requires that all negation symbols be adjacent to variables; the formula is not otherwise processed. (This is not a true normal form.)

A literal is in a partial state if there are no complementary literal occurrences in the formula and no occurrence of the literal in the scope of a ≡ symbol. A literal not in a partial state is in a full state. At each level, literals in a partial state are removed by setting their truth value to 0 (false) and using the simplification rules given below. (This is a generalization of the affirmative-negative rule of the DP procedure.) Since no partial-state literal is in the scope of a negation or a ≡ symbol, assigning false to any such literal is consistent with any falsifying assignment, if one exists.

The method initiates the validity test by simplifying the given formula and then branching on a full-state variable, if one exists. Simplifying a formula involves use of the following simplification rules along with the associative and commutative properties of ∧, ∨, ≡, and returns the reduced formula to NNF if needed.

Here L denotes a literal, and Φ denotes a formula. Note that, upon simplification, one always has a nonempty formula or a 1 or 0.
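The rules are of the standard Boolean-constant kind; the following list is a plausible reconstruction under that assumption (not necessarily the authors’ exact table):

Φ ∨ 1 → 1        Φ ∨ 0 → Φ        Φ ∧ 1 → Φ        Φ ∧ 0 → 0
Φ ≡ 1 → Φ        Φ ≡ 0 → ¬Φ (renormalized to NNF)
¬1 → 0           ¬0 → 1           L ∨ L → L        L ∧ L → L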

We give the program as a recursive algorithm although the implementation was an iterative program. This presentation invites comparison with the DPLL presentation of this paper; the similarity is notable, but is primarily a consequence of the use of branching. We note that the DFS paper appeared two years before the DPLL paper. (However, the DPLL method design was not influenced by this paper. See the summary of the DPLL creation process given later.) It should be mentioned that Dunham, Fridshal and Sward explicitly promoted the branching device, with the split formulas conducive to further simplification.

In the DFS-recursive algorithm, F_l denotes the simplified formula after all occurrences of literal l have been replaced by 0.

Hao Wang, looking far ahead, envisioned proof methods that could tackle problems such as proving that √2 is irrational and well-known analysis theorems (e.g., the Heine-Borel theorem). He argued, however, that it would be wrong to attempt to skip over provers for the predicate calculus in the pursuit of mechanical mathematics [49].

Algorithm 1: DFS-recursive(F, p)

Input : An NNF formula F and an initially empty partial assignment p

Output: {the top-level return value} valid or non-valid

begin
    (F, p) ← PartialVariable(F, p)
    if F = 1 then return valid
    if F = 0 then return non-valid
    l ← a variable not assigned by p    // the branching step
    if DFS-recursive(F_l, p ∪ {l}) = non-valid then return non-valid
    return DFS-recursive(F_¬l, p ∪ {¬l})

sub PartialVariable(F, p) begin
    while there exists a literal l in partial state do
        F ← F_l
        p ← p ∪ {l}
    return (F, p)
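To make the control flow concrete, here is a hedged Python sketch of DFS-recursive, restricted for brevity to NNF formulas built from literals, ∧ and ∨ (the DFS method also handles the remaining connectives, notably ≡); under this restriction every pure literal is in a partial state. The representation and names are ours, purely for illustration.

def subst(F, lit):
    """Replace literal lit by 0 (false) and its complement by 1, then
    simplify with the Boolean-constant rules. F is ('lit', i) with i a
    nonzero int (-i for negation), ('or', [...]), ('and', [...]), or 0/1."""
    if isinstance(F, int):                    # already a constant
        return F
    tag, body = F
    if tag == 'lit':
        return 0 if body == lit else (1 if body == -lit else F)
    subs = [subst(G, lit) for G in body]
    if tag == 'or':
        if any(G == 1 for G in subs):
            return 1
        subs = [G for G in subs if G != 0]
        return 0 if not subs else (subs[0] if len(subs) == 1 else ('or', subs))
    if any(G == 0 for G in subs):             # tag == 'and'
        return 0
    subs = [G for G in subs if G != 1]
    return 1 if not subs else (subs[0] if len(subs) == 1 else ('and', subs))

def literals(F):
    if isinstance(F, int):
        return set()
    tag, body = F
    return {body} if tag == 'lit' else set().union(*map(literals, body))

def dfs_valid(F):
    """Validity test: F is valid iff no assignment falsifies it."""
    while True:                               # PartialVariable: set each
        lits = literals(F)                    # partial-state literal false
        pure = next((l for l in lits if -l not in lits), None)
        if pure is None:
            break
        F = subst(F, pure)
    if F == 1:
        return True                           # valid on this branch
    if F == 0:
        return False                          # a falsifying assignment exists
    l = next(iter(literals(F)))               # the branching step
    return dfs_valid(subst(F, l)) and dfs_valid(subst(F, -l))

For example, dfs_valid(('or', [('lit', 1), ('lit', -1)])) returns True, while dfs_valid(('lit', 1)) returns False.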

Wang produced programs for the predicate calculus that tackled several decision problems, subdomains of the predicate calculus, and along the way proved all 350 theorems in Principia Mathematica that fell within the realm of the predicate calculus with equality. Surprisingly, this could be done with a program that could handle propositional formulas, a restricted part of the AE predicate calculus and a scattering of other formulas [49-51].[5]

Wang developed a complete validity-testing propositional prover for use within his first-order logic prover, and did not dedicate himself to optimal speed in this prover. He, in fact, acknowledged that the Dunham, Fridshal, and Sward prover and the DP prover were superior on complex formulas. (Wang learned of the DP prover only in preparing his paper for publication, whereas he likely knew of the other programs through the IBM connection.)

In spite of Wang’s lack of focus on the speed of his propositional prover, we do comment on this prover, as it was developed coincident with the other early propositional provers. Also, this allows us to note the major accomplishments of Wang’s work, executed in this same time window. The prover was based on a Herbrand-Gentzen cut-free logical framework. He employed ten connective introduction rules formulated in terms of sequents. Axioms are of the form λ → π, where λ and π are nonempty strings of variables (atomic formulas) that share a common variable.


There are two connective introduction rules for each of the five connectives ¬, ∨, ∧, ⊃, and ≡. The proof search is done in reverse order, from theorem statement to axioms, forming a tree as induced by the rules that have two premise sequents. We give four of these rules followed by a simple example to illustrate the proof technique. Here ρ, ζ, κ, π, λ are formula strings (perhaps empty or a single formula) and φ and φ′ are formulas. (The rules for negation are correct; the introduction of negation is applied to formulas in extreme positions.) The proof search reduces the leftmost symbol. The proof search system is propositionally complete with the full rule set.
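Rules of the following shape are meant (an illustrative reconstruction in the style of Wang’s calculus [49]; the rule names and letter placements here are our assumption, not necessarily the four the authors chose):

(→∨)  from  λ → ζ, φ, φ′, π   infer   λ → ζ, φ ∨ φ′, π
(∨→)  from  λ, φ, π → ρ  and  λ, φ′, π → ρ   infer   λ, φ ∨ φ′, π → ρ
(→¬)  from  φ, λ → π   infer   λ → π, ¬φ
(¬→)  from  λ → π, φ   infer   ¬φ, λ → π

Read in the reverse (proof search) direction, each rule removes one connective, and the two-premise rule (∨→) splits the search into two branches.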

We give a proof of a valid sequent in the manner of the proof search Wang employed, but with a different line notation than that adopted by Wang. On the left is the line number. To the right we indicate the line for which this line is the antecedent by the rule named. The task is to reduce the purported theorem to axioms by applying the rules in reverse. Rule (2b) causes a split, resulting in a simple search tree. Wang explored search trees depth first.

The given formula is a valid sequent, and a valid formula if the sequent arrow is replaced by the implication connective.

The fourth method undertaken in 1958 is the Davis-Putnam procedure. (Although the labels method and procedure usefully denote a schema and its complete specification, respectively, we have followed tradition and refer to the DP and DPLL procedures. The label “procedure” is correct if one views unspecified decision points as selection of the first qualified item. The refinements to the DPLL “procedure” discussed here certainly dramatize that DP and DPLL are methods.) Like Gilmore’s method, the DP procedure is a complete prover for validity in the predicate calculus using a propositional prover that tests for unsatisfiability. The propositional procedure was conceived in the summer of 1958, and expanded to the quantification theory procedure in 1959. The reason work on the procedure spread over two summers is that Davis and Putnam’s focus was elsewhere, on Hilbert’s 10th problem. Although their 1960 paper reads as if they were motivated by the propositional inefficiencies of the Gilmore and Wang methods, Davis and Putnam had no knowledge of the parallel efforts when they undertook this project [9]. The setting that spawned the DP procedure is described in [10].

Many of the readers of this paper know the DP rules, but it is important to note that not only did Davis and Putnam introduce significant rules, they also introduced the entire structure now used by most automated deduction provers. If one recalls the three methods previously discussed, one notes that none used CNF in pursuit of an unsatisfiability test. One advantage of the DP format is in formula preparation. A conjectured theorem is negated by simply negating the theorem assertion and placing each axiom individually in CNF. This is very significant if the axiom set is large. Also, seeking satisfying assignments is more intuitive than seeking falsifying assignments. Certainly the pursuit of the SAT problem is much more appealing than pursuit of the falsifiability problem. At the first-order level they introduced the use of Skolem functions.

We now consider the DP procedure. The goal of the procedure is to report a model (i.e., satisfying truth assignment) for a formula, if the formula is satisfiable, or otherwise, report “unsatisfiable.”

We present the rules of the DP procedure not as one would implement them, but as a concept. The focus of these rules is on variable elimination. At each stage a variable p is chosen and the formula is written in the form (A ∨ p) ∧ (B ∨ ¬p) ∧ R, where A, B and R are formulas in CNF free of the variable p (or its negation). This is done using the distributive laws, if necessary. CNF formula A is the conjunction of all clauses that contained p, but with the p removed; similarly for B.

I The one-literal rule. If A and B both contain the empty clause, then the algorithm terminates with a declaration of unsatisfiability. The empty clause is always false for any truth assignment, so both conjunctions A and B are always false and can be discarded. This leaves p and ¬p as one-literal clauses, which is contradictory. Otherwise, suppose (only) A contains the empty clause. Then p must be true if the given formula is satisfiable, so one can delete A, p and ¬p. That is, the formula (A ∨ p) ∧ (B ∨ ¬p) ∧ R is unsatisfiable iff B ∧ R is. The working clause set becomes B ∧ R and we can add p to the partial model. We have the symmetric case if B contains the empty clause.

II The affirmative-negative rule. If p is absent in the working clause set then ¬p may be declared true and added to the partial model. Since (A) ∧ (B ∨ ¬p) ∧ R is unsatisfiable iff A ∧ R is, the working clause set becomes A ∧ R. Again, we have the symmetric case for ¬p.

III Rule for eliminating atomic formulas. (A ∨ p) ∧ (B ∨ ¬p) ∧ R is unsatisfiable iff (A ∨ B) ∧ R is unsatisfiable. The working clause set becomes the CNF formula for (A ∨ B) ∧ R. Note that each clause of the CNF of (A ∨ B) is the disjunction of a clause from A and a clause from B. In any interpretation, one of p and ¬p is false, so one of A or B has a true literal in each clause. Thus, Rule III preserves models of the formula upon which it operates. Any tautological clause is removed.[6]

If the working clause set is the empty set of clauses, then the original formula is satisfiable. To find a model for the given formula one can start with the partial model, but must do further work with the other variables to find a satisfying truth assignment. Although each rule eliminates a variable, we stick with the original label for Rule III, which declares that explicitly. It is now referred to as the resolution rule.

The implementation is, of course, somewhat different from the above definition. One selects one-literal clauses in turn, removes clauses containing that literal, and removes the complementary literal from all clauses. One then seeks to remove clauses that have a literal with no complementary literal elsewhere (Rule II). After Rules I and II are exhausted, the algorithm calls for Rule III to be applied. Here the original algorithm called for selecting a literal from the first clause of minimal length and eliminating that literal. One then returns to Rule I. This whole process is repeated until either the empty clause is obtained (unsatisfiability), or there are no more clauses to treat (satisfiability).
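To make this loop concrete, here is a minimal Python sketch following the rule order just described (our illustration; the representation and names are ours, not those of the original implementation). Clauses are frozensets of nonzero integers, with -x for ¬x.

def dp(clauses):
    """Return a partial model (list of literals set true) if the clause set
    is satisfiable, or None if it is unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    model = []
    while clauses:
        if frozenset() in clauses:
            return None                          # empty clause: unsatisfiable
        lits = {l for c in clauses for l in c}
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is not None:                     # Rule I: one-literal rule
            model.append(unit)
            clauses = {c - {-unit} for c in clauses if unit not in c}
            continue
        pure = next((l for l in lits if -l not in lits), None)
        if pure is not None:                     # Rule II: affirmative-negative
            model.append(pure)
            clauses = {c for c in clauses if pure not in c}
            continue
        # Rule III: pick a literal in a clause of minimal length and eliminate
        # its variable by resolving all p-clauses against all (not p)-clauses.
        p = abs(next(iter(min(clauses, key=len))))
        pos = [c - {p} for c in clauses if p in c]
        neg = [c - {-p} for c in clauses if -p in c]
        rest = {c for c in clauses if p not in c and -p not in c}
        resolvents = {a | b for a in pos for b in neg
                      if not any(-l in b for l in a)}    # drop tautologies
        clauses = rest | resolvents
    return model                                 # no clauses left: satisfiable

For instance, dp([{1, 2}, {-1, 2}, {-2}]) returns None: unit propagation on ¬2 and then 1 produces the empty clause. The cross product forming the resolvents in Rule III is the space-consuming step discussed next.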

We might note that the affirmative-negative rule is a global one, whereas the other rules are not. However, if all clauses are in fast memory, and one builds literal lists that point to all occurrences of that literal, then one can quickly apply the affirmative-negative rule to any literal where the complementary literal list is empty.

Davis [10] mentions that the splitting rule was considered as an option. It would have been surprising if Davis and Putnam had not considered any such reasonable alternative. The choice of the resolution-style Rule III was an intelligent one on paper. Rule III is certainly the more elegant one, and better suited their focus of immediately eliminating one variable per operation, for certainly one of the split formulas would be considerably delayed in its treatment. We state the splitting rule as a lead into the discussion of the DPLL variant.

III′ The Splitting Rule. Let the given formula F be put in the form (A ∨ p) ∧ (B ∨ ¬p) ∧ R where A, B and R are free of p. Then F is unsatisfiable if and only if A ∧ R and B ∧ R are both unsatisfiable.

Although the DP procedure was conceived in the summer of 1958, the birth only occurred in the summer of 1960, and it was a stillbirth. Rule III proved to be too space-consuming; it was the distributive law application that killed the Gilmore procedure, in miniature. One took the product of all the clauses containing literal p with those containing ¬p, minus those two literals. The concept of eliminating a variable, so elegant in theory, proved too costly in implementation. So, the Davis-Putnam-Logemann-Loveland variant, discussed below, was born of necessity.

In the spring of 1960, Martin Davis recruited George Logemann and Donald Loveland to undertake the DP implementation that summer. The two students, early in their Ph.D. pursuits, were glad to have a summer job, and an exciting one at that. George Logemann came to the Courant Institute of Mathematical Sciences (CIMS) at NYU to study applied math in what was then perhaps the top applied math program. (In 2014 it is so ranked.[7]) He was a skilled programmer, having acquired some of that experience at Caltech, where he did his undergraduate work. (After receiving his Ph.D., Logemann worked as a computer scientist for a few years; then, an accomplished cellist, he pursued a musical career, which included computer music. He passed away in 2012.) Donald Loveland considered himself a probability/statistics student, as there was no logic opportunity at the CIMS, Davis having moved to Yeshiva University. (See [10] for Davis’s view of this period.) Loveland’s interest was in artificial intelligence (AI), and both logic and probability theory were good mathematics background for that. He already could be said to be in the field of Automated Deduction, had that field and name existed then, having been a programmer on the Geometry-Theorem Proving Machine [18].

The task of implementing the DP procedure was split. Logemann undertook the parsing of the CNF formula, entered in Polish notation, the formula preparation having been done by hand. Logemann also handled the structuring of the set of clauses, while Loveland took on the testing of the clause set for consistency. The program was written in SAP, the Symbolic Assembler Program, the assembly language for the IBM 704. After the first runs, which quickly saturated the 32,768 36-bit words of available storage, George Logemann suggested that Rule III be replaced with a splitting rule. He noted that it was easy to save the current environment on tape, and retrieve it on backtracking. As with the other systems with splitting rules, this led to depth-first search. Thus, with the new program, instead of clause addition there was clause removal. Interspersed with applications of Rules I and II, the program recursed on Rule III′, saving the environment on tape for backtracking. This solved the space issue, at least until the input clause set overwhelmed the memory. Very important now, but not emphasized at the time, was the easy recovery of a satisfying assignment should there be one. It is unclear why “turning off” and then “turning on” various clauses and literals on backtracking was not pursued, as writing to tape is a slow operation. Using the appropriate list structures, top-level routines existed that quickly deleted all occurrences of a specified literal and eliminated the appropriate related clauses. These could simply have blacked out these entities instead. But the chosen implementation worked well; considerable pains had been taken in data structure design and in coding to minimize run times. There was awareness of the importance of choosing the right one-literal clause for Rule I, although no experimentation was done in this regard.


We now briefly consider an actual implementation of the DPLL procedure. Algorithm 2, DPLL-recursive(F, p), sketches the basic DPLL procedure on CNF formulas. The idea is to repeatedly select an unassigned literal l in the input formula F and recursively search for a satisfying assignment for F|l and F|¬l. (The notation F|l denotes the formula F with l set to true, followed by formula simplification, done by the removal of true clauses and false literals.) The step where such an l is chosen is commonly referred to as the branching step. Setting l to true or false when making a recursive call is called a decision, and is associated with a decision level which equals the recursion depth at that stage. The end of each recursive call, which takes F back to fewer assigned variables, is called the backtracking step.

Algorithm 2: DPLL-recursive(F, p)

Input : A CNF formula F and an initially empty partial assignment p

Output: UNSAT, or an assignment satisfying F

begin
    (F, p) ← UnitPropagate(F, p)
    if F contains the empty clause then return UNSAT
    if F has no clauses left then
        Output p
        return SAT
    l ← a literal not assigned by p    // the branching step
    if DPLL-recursive(F|l, p ∪ {l}) = SAT then return SAT
    return DPLL-recursive(F|¬l, p ∪ {¬l})

sub UnitPropagate(F, p) begin
    while F contains no empty clause but has a unit clause x do
        F ← F|x
        p ← p ∪ {x}
    return (F, p)
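A minimal Python rendering of Algorithm 2 may help fix the ideas (a sketch under our own representation; the function names mirror the pseudocode but are otherwise illustrative). Clauses are frozensets of nonzero integers, with -x for ¬x.

def simplify(clauses, lit):
    """F|lit: remove clauses satisfied by lit and delete the falsified
    complementary literal from the remaining clauses."""
    return {c - {-lit} for c in clauses if lit not in c}

def unit_propagate(clauses, p):
    while frozenset() not in clauses:
        x = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if x is None:
            break
        clauses, p = simplify(clauses, x), p | {x}
    return clauses, p

def dpll(clauses, p=frozenset()):
    """Return a satisfying partial assignment (a frozenset of literals),
    or None for UNSAT."""
    clauses, p = unit_propagate(clauses, p)
    if frozenset() in clauses:
        return None                              # UNSAT on this branch
    if not clauses:
        return p                                 # no clauses left: SAT
    l = next(iter(next(iter(clauses))))          # the branching step
    return (dpll(simplify(clauses, l), p | {l})
            or dpll(simplify(clauses, -l), p | {-l}))

For example, dpll({frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})}) returns None, while dropping the last clause yields an assignment containing the literal 2.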

A partial assignment p is maintained during the search and output if the formula turns out to be satisfiable. If F|p contains the empty clause, the corresponding clause of F from which it came is said to be violated by p. To increase efficiency, unit clauses are immediately set to true as outlined in Algorithm 2; this process is termed unit propagation. Pure literals (those whose complement does not appear) are also set to true as a preprocessing step and, in some implementations, during the simplification process after every branch. Modern solvers can set millions of variables per second in the unit propagation process. The number of backtracks per second is on the order of several hundred thousand. Unit propagation is generally the most time-consuming component of the SAT solving process. Therefore, the more efficiently unit propagation is implemented, the better the SAT solver performs.

Variants of this algorithm form the most widely used family of complete algorithms for formula satisfiability. They are frequently implemented in an iterative rather than recursive manner, resulting in significantly reduced memory usage. The key difference in the iterative version is the extra step of unassigning variables when one backtracks. In this step, generally large numbers of variables (hundreds to tens of thousands) assigned via unit propagation need to be unassigned. The naive way of unassigning variables in a CNF formula is computationally expensive, requiring one to examine every clause in which the unassigned variable appears. However, a clever data structure approach involving so-called watched literals gets around these inefficiencies.
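To illustrate the idea (a hedged sketch, not any particular solver’s code): each clause watches two of its literals, a clause is revisited only when one of its watched literals becomes false, and, crucially, unassigning variables on backtracking never invalidates the watch invariant, so the watch lists need no repair.

from collections import defaultdict

def value(assign, lit):
    """Truth value of lit under a partial assignment (dict: var -> bool)."""
    v = assign.get(abs(lit))
    return None if v is None else (v == (lit > 0))

def watch_init(clauses):
    """Each clause (a list of two or more ints; input unit clauses are
    assumed handled separately) watches its first two positions."""
    watches = defaultdict(list)
    for c in clauses:
        watches[c[0]].append(c)
        watches[c[1]].append(c)
    return watches

def propagate(watches, assign, lit):
    """Assign lit true and propagate. Returns the list of implied literals,
    or None on conflict. Only clauses watching a newly falsified literal
    are ever touched."""
    assign[abs(lit)] = lit > 0
    implied, queue = [], [lit]
    while queue:
        l = queue.pop()
        pending, watches[-l] = watches[-l], []
        for i, c in enumerate(pending):
            if c[0] == -l:
                c[0], c[1] = c[1], c[0]          # keep the false watch in slot 1
            for k in range(2, len(c)):           # seek a replacement watch
                if value(assign, c[k]) is not False:
                    c[1], c[k] = c[k], c[1]
                    watches[c[1]].append(c)
                    break
            else:                                # no replacement exists
                watches[-l].append(c)
                w = value(assign, c[0])
                if w is False:                   # clause violated: conflict
                    watches[-l].extend(pending[i + 1:])
                    return None
                if w is None:                    # unit: c[0] is forced true
                    assign[abs(c[0])] = c[0] > 0
                    implied.append(c[0])
                    queue.append(c[0])
    return implied

Backtracking then amounts to deleting entries from assign (in practice, popping a trail); no clause or watch list is examined, which is exactly the saving over the naive unassignment described above.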

  • [1] Almost all of the papers cited in the introduction can be found in [46].
  • [2] This project was also executed at IBM.
  • [3] c̄ denotes not c here.
  • [4] A complementary literal has the same variable name but opposite negation status.
  • [5] An AE formula is a formula with all quantifiers leftmost with no existential quantifier to the left of a universal quantifier.
  • [6] A tautological clause contains p v—p for some variable p.
  • [7] U.S. News and World Report: Best Grad Schools 2014.
 