The Long Wait

The DP and DPLL procedures were developed in the two-year period from 1958 to 1960. Somewhat incredibly, there were no further developments in propositional reasoning or SAT solving for another thirty years. Only in the early 1990s did we start to see new developments and research in this area. However, the progress since then has been dramatic. Around 1990, using the basic DPLL procedure, one could solve CNF formulas with a few hundred variables and a few hundred clauses in a reasonable amount of time (up to several hours of CPU time), but practical formulas with more than around 500 variables and clauses were out of reach. This kept propositional reasoning largely of academic interest, because most real-world reasoning tasks, e.g., in verification or AI planning, require tens of thousands to hundreds of thousands of variables and clauses when encoded as CNF. Fortunately, we have since seen dramatic progress. In the last two decades, new SAT solvers have been developed that can handle formulas with up to 10 million variables and over 100 million clauses. This has opened up a whole range of practical applications of SAT solver technology, ranging from program verification and program synthesis to AI planning and mathematical discovery. It is safe to say that no one could have foreseen such dramatic advances. Much of the progress has been made by building on the ideas behind DPLL. We summarize the extensions to DPLL below.

Before we proceed, let us briefly reflect on why there was a thirty-year “standstill” in propositional reasoning and SAT solving. First, in the early days of theorem proving, it was noted that more expressive formalisms were needed to obtain interesting results, e.g., for proving theorems in mathematics and other applications. This led to an extensive research effort on first-order theorem proving based on the predicate calculus. Robinson’s resolution-based provers provided a major step forward. However, the search space for first-order theorem proving is substantially more difficult to explore than in the propositional domain. The difficulty with such provers led to a shift towards proof assistants and reasoning support systems instead of fully automated solvers. A further difficulty for a propositional approach was that even where translations to SAT were possible, the resulting encodings tended to be much too large for the main memory available at the time.

The 1970s saw rapid developments in computational complexity theory, introducing the problem class NP, with SAT as the first problem shown to be complete for NP [7, 33]. Under the widely believed assumption that P ≠ NP, this means that no polynomial-time algorithm for SAT exists. Given the worst-case exponential scaling of SAT procedures, the general sense was that encoding reasoning tasks and other problems into SAT was not a promising avenue to pursue. Of course, much of this point of view is predicated on the idea that worst-case complexity provides a good measure of the scaling behavior of algorithms and search methods on practical problem instances. Our recent experience with SAT solvers has shown that this perspective on problem complexity needs to be reconsidered. More specifically, although the basic DPLL backtrack search procedure for SAT shows exponential scaling behavior on almost all problem domains, the recently added extensions of the procedure bring the scaling down to a very slowly growing exponential, or even a polynomial, in a number of practical problem domains. The work on backdoor variables discussed in Sect. 12.4 provides a formal analysis of this phenomenon [52]. To understand the phenomenon at an intuitive level, it is useful to think of modern SAT solvers as combining basic backtrack search with clever polynomial-time algorithmic strategies that exploit hidden problem structure in the underlying domains. These strategies are invoked repeatedly throughout the backtrack search process, and are often so effective that hardly any backtracking is required to find a solution or prove that the formula is unsatisfiable. The most basic, but already powerful, strategy is unit propagation, which occurs after each branching point in the search. Unit propagation was already part of DPLL, but the propagation process has been implemented with remarkable efficiency in modern solvers.
Unit propagation is combined with other clever algorithmic ideas such as clause learning, branching variable selection, non-chronological backtracking, and random restarts (see Sect. 12.4). Modern SAT solvers have shown that developing new algorithmic strategies in the context of NP-complete problems is in fact a useful pursuit with real practical impact. It would be incorrect to think of these improvements as just clever “heuristics.” The term “heuristics” is generally reserved for algorithmic ideas that are somewhat ad hoc and often cannot be formally analyzed. The ideas implemented in modern SAT solvers, however, combined with an understanding of general domain properties, do allow us to derive formal guarantees on the overall behavior of the solver, at least in principle.

In the early nineties, there was renewed interest in the field of knowledge representation and reasoning (KR&R), a subfield of AI, in how to deal with worst-case intractable reasoning tasks. During the 1980s, researchers had developed a series of knowledge representation languages that allowed for worst-case tractable reasoning, but these formalisms were generally considered too restrictive for real-world applications. This raised the question of whether one could allow some representation language constructs that lead to intractable inference in the worst case, while such worst-case behavior might not be observed in practice. These questions were first explored in a propositional setting by considering SAT encodings of reasoning problems and the behavior of SAT solvers on such encodings. It was found that on certain random SAT problem distributions, DPLL scaling could go from polynomial to exponential depending on the choice of problem distribution [36]. The work also led to new types of SAT solvers based on local-search-style stochastic algorithms, such as GSAT and WalkSAT [44, 45]. These solvers are incomplete in that they can be quite effective at finding satisfying assignments but cannot show unsatisfiability. The WalkSAT solver could solve problem instances beyond what the basic DPLL procedure could handle. This led to promising practical results on SAT encodings of AI planning problems [32]. Interestingly, propositional AI planning is actually a PSPACE-complete problem, because certain plans can be exponential in the size of the problem instance. To bring the problem down to NP-complete, one simply considers the bounded-plan-length version of AI planning, where one only considers plans of up to a predefined length of k steps, with k polynomial in the original problem size.
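The stochastic local search idea behind solvers in the GSAT/WalkSAT family can be sketched in a few lines: start from a random truth assignment and repeatedly flip a variable in some falsified clause, mixing random moves with greedy ones. The sketch below is a simplified variant (the greedy step minimizes the total number of falsified clauses rather than WalkSAT's exact "break count"); the parameter names and values are illustrative, not those of any particular implementation.

```python
import random

def local_search_sat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """Incomplete stochastic local search for SAT (WalkSAT-style sketch).

    clauses: list of lists of DIMACS-style literals; n_vars: number of variables.
    Returns a satisfying assignment dict, or None if none is found within
    max_flips (note: returning None does NOT prove unsatisfiability).
    """
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                    # all clauses satisfied
        clause = rng.choice(unsat)           # pick a random falsified clause
        if rng.random() < p:
            var = abs(rng.choice(clause))    # random-walk move
        else:
            # Greedy move: flip the variable leaving the fewest falsified clauses.
            def cost(lit):
                v = abs(lit)
                assign[v] = not assign[v]
                c = sum(1 for cl in clauses if not satisfied(cl))
                assign[v] = not assign[v]
                return c
            var = abs(min(clause, key=cost))
        assign[var] = not assign[var]        # flip the chosen variable
    return None
```

The key point this illustrates is the incompleteness noted above: the loop can certify satisfiability by exhibiting an assignment, but when it runs out of flips it has proved nothing.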
The work on translating bounded-length AI planning into SAT provided the impetus to explore a similar approach for hardware and software verification, by considering the so-called bounded model checking problem and translating it into SAT. (In bounded model checking, one asks whether there exists an execution of up to a fixed number of steps that leads to an undesirable state. Such a “bug trace” is very similar to a plan that reaches a certain goal state from a given initial state in an AI planning formulation.) SAT-based planning and verification approaches have become very successful over the last decade or so. The success of stochastic incomplete solvers led researchers to search for extensions of the DPLL strategy to improve the complete solvers, which resulted in the dramatic increase in the performance of such solvers. In most applications, the complete solvers are now more effective than the stochastic ones, although the stochastic solvers are also still being improved. Moreover, by moving to distributed solvers running on cloud-based platforms, scaling up to problems with 10 million variables and 100 million clauses would appear to be within reach. This would again open up a range of new applications, for example in program synthesis and mathematical discovery.
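To see how bounded model checking reduces to SAT, consider a toy example: a 2-bit binary counter starting at 00, where the “undesirable” state is 11. The sketch below (the system, variable numbering, and helper names are all invented for illustration) makes one copy of the state variables per time step, adds clauses for the initial state, the transition relation at each step, and the bad state at step k, and then hands the CNF to a solver; a tiny brute-force check stands in for a real SAT solver at this toy size.

```python
from itertools import product

def counter_bmc_cnf(k):
    """CNF asking: can a 2-bit counter starting at 00 reach state 11 at step k?

    Variables (DIMACS-style ints): a_t = 2*t + 1, b_t = 2*t + 2 for t = 0..k.
    Transition (increment): b' <-> NOT b, a' <-> a XOR b.
    """
    a = lambda t: 2 * t + 1
    b = lambda t: 2 * t + 2
    clauses = [[-a(0)], [-b(0)]]                          # initial state: 00
    for t in range(k):
        clauses += [[b(t + 1), b(t)], [-b(t + 1), -b(t)]]  # b' <-> NOT b
        clauses += [[-a(t + 1), a(t), b(t)],               # a' <-> a XOR b
                    [-a(t + 1), -a(t), -b(t)],
                    [a(t + 1), -a(t), b(t)],
                    [a(t + 1), a(t), -b(t)]]
    clauses += [[a(k)], [b(k)]]                            # "bug": state 11 at step k
    return clauses, 2 * (k + 1)

def brute_force_sat(clauses, n_vars):
    """Exhaustive satisfiability check (a stand-in for a real SAT solver)."""
    for bits in product([False, True], repeat=n_vars):
        assign = dict(enumerate(bits, start=1))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None
```

Since the counter steps through 00, 01, 10, 11, the formula is unsatisfiable for k = 2 but satisfiable for k = 3, and the satisfying assignment is exactly the bug trace; a bounded plan in an AI planning encoding plays the same role.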
