In the project, an approach to logic-based knowledge processing, to be called here the second-order approach, will be investigated. The idea is that expressive representation languages are, on the one hand, available to formulate applications in a very natural way, but on the other hand, are processed by reduction to poorer languages that are better suited for computation. Such reductions are made possible because the expressive language constructs are used in specific application contexts.
Proofs of concept for this approach were given in the late 1990s with the use of second-order quantifier elimination for computing circumscription, and recently with applications of uniform interpolation and forgetting (which can be expressed as second-order quantifier elimination) in description logics. The approach typically allows concepts from the application area to be traced back to fundamental concepts of logic, indicating new application possibilities and parallels between areas that enable transfer and combination of techniques.
The project connects two lines of work that will stimulate each other:
The basic idea of view-based query processing is to translate and decompose queries such that multiple agents with specialized knowledge bases and processing capabilities can contribute to answer computation. By its close relation to definability, a general concept of logic, view-based query processing is well suited to the approach of the project. Central objectives of the project with respect to view-based query processing are semantic clarification of the involved concepts, availability of second-order operator elimination to compute query transformations, and generalization beyond databases.
With respect to computational methods, new techniques for second-order operator elimination will be developed: techniques based on propositional logic, related to recent SAT preprocessors; techniques based on first-order logic; and techniques that take into account specific formula patterns occurring in view-based query processing.
The results of the project should serve as an elaborate formal and automated foundation for a multitude of further applications that are suited for the second-order approach. These include, for example, variants of non-standard inferences, abductive reasoning, modularization of knowledge bases, non-monotonic reasoning, and reasoning about knowledge.
Second-order quantifier elimination (SOQE) is the problem of computing, from a given logic formula involving quantifiers upon second-order objects such as predicates, an equivalent first-order formula or, in other words, an equivalent formula in which these quantified second-order objects no longer occur. The problem can be studied for various logics, including classical propositional and first-order logic as well as modal and description logics. In slight variations SOQE is also known as forgetting, projection, predicate elimination and uniform interpolation.
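In classical propositional logic, where the quantified objects are nullary predicates (propositional variables), SOQE can be carried out by Shannon expansion: ∃p F is equivalent to F with p replaced by true, disjoined with F with p replaced by false. A minimal brute-force sketch of this, with an illustrative formula encoding that is an assumption of this example and not from the project:

```python
from itertools import product

# Propositional formulas as nested tuples (an illustrative encoding):
# ('var', name), ('const', b), ('not', f), ('and', f, g), ('or', f, g)

def ev(f, a):
    """Evaluate formula f under assignment a (a dict from names to booleans)."""
    tag = f[0]
    if tag == 'var':   return a[f[1]]
    if tag == 'const': return f[1]
    if tag == 'not':   return not ev(f[1], a)
    if tag == 'and':   return ev(f[1], a) and ev(f[2], a)
    return ev(f[1], a) or ev(f[2], a)   # 'or'

def subst(f, v, b):
    """Replace the variable v by the truth constant b."""
    tag = f[0]
    if tag == 'var':   return ('const', b) if f[1] == v else f
    if tag == 'const': return f
    if tag == 'not':   return ('not', subst(f[1], v, b))
    return (tag, subst(f[1], v, b), subst(f[2], v, b))

def eliminate(f, v):
    """Shannon expansion: exists v. f  is equivalent to  f[v\\true] or f[v\\false]."""
    return ('or', subst(f, v, True), subst(f, v, False))

def equivalent(f, g, vs):
    """Truth-table check that f and g agree on all assignments to vs."""
    return all(ev(f, dict(zip(vs, bs))) == ev(g, dict(zip(vs, bs)))
               for bs in product([False, True], repeat=len(vs)))

# exists q. ((p -> q) and (q -> r))  is equivalent to  p -> r
f = ('and', ('or', ('not', ('var', 'p')), ('var', 'q')),
            ('or', ('not', ('var', 'q')), ('var', 'r')))
target = ('or', ('not', ('var', 'p')), ('var', 'r'))
print(equivalent(eliminate(f, 'q'), target, ['p', 'r']))  # True
```

The result of eliminate contains only the remaining variables p and r, which is exactly the forgetting perspective mentioned below.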
SOQE bears strong relationships to Craig interpolation, definability and computation of definientia, the notion of conservative theory extension, abduction and notions of weakest sufficient and strongest necessary condition, to correspondence theory relationships, as well as to generalizations of Boolean unification to predicate logic.
Various important research subjects of current interest are based on SOQE and these related notions, as is reflected in the topics addressed in the workshop program: In the area of knowledge representation they include forgetting, uniform interpolation and abduction in description logics, modularization, reuse, versioning and summarization of ontologies, forgetting in variants of logic programming that are relevant for ontology reasoning, and query reformulation on the basis of interpolation. The unified correspondence research program, which recently emerged from the study of algorithmic correspondence and canonicity, now by itself reaches into further areas such as proof theory. SOQE has applications in the verification of distributed systems. The investigation of SOQE with respect to specific fragments of first-order logic is -- much less researched than decidability -- an area with open challenges, where, however, ways of progress can be indicated. For Boolean unification on the basis of predicate logic, like SOQE an operation with formulas as output, various relationships to SOQE can be shown. In the Algebra of Logic program of the 19th century SOQE was identified as an important operation, which makes the study of historical aspects and the passage of SOQE to modern logic particularly interesting. Special forms of SOQE are essential components of state-of-the-art SAT solvers. SOQE provides an exemplary framework for studying the dichotomy of expressivity versus complexity.
The first Workshop on Second-order Quantifier Elimination and Related Topics (SOQE 2017) was held in the International Center for Computational Logic (ICCL) at Technische Universität Dresden in Dresden, Germany, during 6-8 December 2017. The workshop for the first time brought together researchers working on SOQE and related topics in a dedicated event. Its program included nine invited talks, two tutorials and nine research talks, acquired with an open call for submissions of original research, adaptations of relevant research published elsewhere and discussions of research in progress.
Abstract: Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by second-order quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account.
Abstract: We investigate second-order quantifier elimination for a class of formulas characterized by a restriction on the quantifier prefix: existential predicate quantifiers followed by universal individual quantifiers and a relational matrix. For a given second-order formula of this class a possibly infinite sequence of universal first-order formulas that have increasing strength and are all entailed by the second-order formula can be constructed. Any first-order consequence of the second-order formula is a consequence of some member of the sequence. The sequence provides a recursive base for the first-order theory of the second-order formula, in the sense investigated by Craig. The restricted formula class allows further properties to be derived, for example that the set of those members of the sequence that are equivalent to the second-order formula, or, more generally, have the same first-order consequences, is co-recursively enumerable. Also the set of first-order formulas that entail the second-order formula is co-recursively enumerable. These properties are proven with formula-based tools used in automated deduction, such as domain closure axioms, eliminating individual quantifiers by ground expansion, predicate quantifier elimination with Ackermann's Lemma, Craig interpolation and decidability of the Bernays-Schönfinkel-Ramsey class.
Abstract: For relational monadic formulas (the Löwenheim class) second-order quantifier elimination, which is closely related to computation of uniform interpolants, forgetting and projection, always succeeds. The decidability proof for this class by Behmann from 1922 explicitly proceeds by elimination with equivalence preserving formula rewriting. We reconstruct Behmann's method, relate it to the modern DLS elimination algorithm and show some applications where the essential monadicity becomes apparent only at second sight. In particular, these include deciding 𝓐𝓛𝓒𝓞𝓠 knowledge bases, elimination in DL-Lite knowledge bases, and justifying the success of elimination methods for Sahlqvist formulas.
Abstract: For relational monadic formulas (the Löwenheim class) second-order quantifier elimination, which is closely related to computation of uniform interpolants, projection and forgetting – operations that currently receive much attention in knowledge processing – always succeeds. The decidability proof for this class by Heinrich Behmann from 1922 explicitly proceeds by elimination with equivalence preserving formula rewriting. Here we reconstruct the results from Behmann's publication in detail and discuss related issues that are relevant in the context of modern approaches to second-order quantifier elimination in computational logic. In addition, an extensive documentation of the letters and manuscripts in Behmann's bequest that concern second-order quantifier elimination is given, including a commented register and English abstracts of the German sources with focus on technical material. In the late 1920s Behmann attempted to develop an elimination-based decision method for formulas with predicates whose arity is larger than one. His manuscripts and the correspondence with Wilhelm Ackermann show technical aspects that are still of interest today and give insight into the genesis of Ackermann's landmark paper Untersuchungen über das Eliminationsproblem der mathematischen Logik from 1935, which laid the foundation of the two prevailing modern approaches to second-order quantifier elimination.
Remark: This is a 100-page report centered around those works by Behmann that concern second-order quantifier elimination. It includes relevant extracts (supplied with English translations) and English summaries of so far unpublished German manuscripts by Behmann and of his correspondence with Ackermann and others that are archived in Behmann's Nachlass. The material is accompanied by an extensive technical and historical discussion. Some of the material also seems relevant in a broader context of the history of methods of computational logic and in the specific context of investigating the early history of resolution.
Abstract: We present some preliminary results which should be useful in the development of second-order quantifier elimination methods for which success can be shown based on semantic properties of the input formulas. The basic tools are Tarski's characterization of definability as validity and Craig interpolation to compute definientia, which succeeds in case of definability.
Abstract: We develop foundations for computing Craig-Lyndon interpolants of two given formulas with first-order theorem provers that construct clausal tableaux. Provers that can be understood in this way include efficient machine-oriented systems based on calculi of two families: goal-oriented such as model elimination and the connection method, and bottom-up such as the hypertableau calculus. We present the first interpolation method for first-order proofs represented by closed tableaux that proceeds in two stages, similar to known interpolation methods for resolution proofs. The first stage is an induction on the tableau structure, which is sufficient to compute propositional interpolants. We show that this can linearly simulate different prominent propositional interpolation methods that operate by an induction on a resolution deduction tree. In the second stage, interpolant lifting, certain terms (constants and compound terms) are replaced by quantified variables. We justify the correctness of interpolant lifting (for the case without built-in equality) abstractly on the basis of Herbrand's theorem and for a different characterization of the formulas to be lifted than in the literature. In addition, we discuss various subtle aspects that are relevant for the investigation and practical realization of first-order interpolation based on clausal tableaux.
Abstract: Access interpolation is a recent form of interpolation for formulas with relativized quantifiers targeted at applications in query reformulation and specified in the constructive framework of analytic tableaux. We transfer this here to clausal tableaux. Relativized quantification upon subformulas seems incompatible with lifting techniques that only introduce a global quantifier prefix. We thus follow a different approach for access interpolation: A structure preserving clausification leads to clausal ground tableaux that can be computed by automated first-order provers and, in a postprocessing step, can be restructured such that in essence the interpolant extraction from analytic tableaux becomes applicable.
Abstract: The PIE system aims at providing an environment for creating complex applications of automated first-order theorem proving techniques. It is embedded in Prolog. Beyond actual proving tasks, also interpolation and second-order quantifier elimination are supported. A macro feature and a LaTeX formula pretty-printer facilitate the construction of elaborate formalizations from small, understandable and documented units. For use with interpolation and elimination, preprocessing operations make it possible to preserve the semantics of chosen predicates. The system comes with a built-in default prover that can compute interpolants.
An application of PIE: A new generalization of parallel predicate circumscription, compared with original parallel predicate circumscription and with domain circumscription.
KBSET is a prototype system resulting from an interdisciplinary collaboration with the objective of investigating possibilities to support scholarly editing (Editionswissenschaft). The system will be applied in a large project of this kind, a comprehensive scholarly edition of the correspondence of Swiss polymath Johann Georg Sulzer (1720-1779), coordinated at Martin-Luther-Universität Halle-Wittenberg. Originally, KBSET was developed independently of the “Second-Order Approach” project. However, since some of the issues addressed in the project, in particular view-based access to large knowledge bases, turned out quite relevant for such a system, it has been decided to use KBSET as a platform for experiments with the logic-based techniques developed in the project in an actual application context with users from another scientific discipline. Other envisaged areas for experiments in the context of scholarly editing and in Digital Humanities in general concern the combination of specifications provided by users (editors), fragments of natural language text, and formalized knowledge, which requires various forms of extraction, combination, and completion of pieces of information. From a logic point of view, key techniques there are circumscription and logic-based approaches to modularization, both of which can be formally based on second-order quantifier elimination.
Abstract: We investigate possibilities to utilize techniques of computational logic for scholarly editing. Semantic Web technology makes large relevant knowledge bases such as GND, GeoNames, Yago and DBpedia available in form of logic formulas, but there are several further roles of logic languages in machine supported scholarly editing. Important issues that arise will be discussed. This includes integrating automatically inferred and explicitly provided knowledge to incorporate advanced semantics-related techniques such as named entity recognition. We consider high quality support at all phases involved in the process of scholarly editing, loose coupling of object text and markup by external annotations, representation of epistemic status, and utilizing inferred access patterns for external knowledge bases. An implemented free software prototype system, called KBSET, is described.
Abstract: We investigate possibilities to utilize techniques of computational logic for scholarly editing. Semantic Web technology already makes relevant large knowledge bases available in form of logic formulas. There are several further roles of logic-based reasoning in machine supported scholarly editing. KBSET, a free prototype system, provides a platform for experiments.
This section gives an overview of the background technology of the project. References are suggested that may be useful as introductions to the respective areas for interested non-specialists.
Second-order quantifier elimination means to compute for a given formula with second-order quantification, such as the quantification upon predicate q in

∃q ∀x ((p(x) → q(x)) ∧ (q(x) → r(x))),
an equivalent first-order formula such as, for the above example,

∀x (p(x) → r(x)).
The investigation of second-order quantifier elimination goes back to Boole (1815-1864), Peirce (1839-1914) and Schröder (1841-1902) (see [Cra08]). For propositional formulas and first-order formulas with only unary predicates, general methods that succeed on all inputs can be given, whose development accompanied early investigations of modern first-order logic by Löwenheim (1878-1957), Skolem (1887-1963) and Behmann (1891-1970). Ackermann (1896-1962) systematically investigated second-order quantifier elimination in the mid-1930s.
In computer science it came to attention in the 1990s through investigations in modal logics and the computation of circumscription. The main algorithms are based on techniques developed by Ackermann. Since the late 2000s it has received further interest in the context of logic-based knowledge representation with description logics, where it is the core technique to semantically extract knowledge about a subvocabulary (e.g. the predicates p and r in the above example). Forgetting, uniform interpolation, and projection are variants of second-order quantifier elimination used in that context. Also the notion of conservative extension, the basis of recent semantic-based approaches to knowledge base modularization, can be expressed with second-order quantification and processed with elimination.
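The techniques developed by Ackermann center on Ackermann's Lemma; one standard formulation, stated here from the general literature rather than from the project text, is:

```latex
% Ackermann's Lemma: if the predicate variable P does not occur in the
% formula A and occurs only negatively in the formula F, then
\exists P\,\bigl(\forall \bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \land F\bigr)
\;\equiv\; F[P := A].
```

A dual version with the roles of positive and negative occurrences exchanged holds as well; elimination algorithms in the Ackermann tradition, such as DLS, rewrite their input into one of these two shapes before applying the lemma.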
The Davis-Putnam method from about 1960 for determining propositional satisfiability, which underlies modern CDCL SAT solving methods, can be understood as based on eliminating propositional variables – or nullary atoms. In modern SAT solvers restricted forms of elimination are directly applied in pre- and in-processing [EB05], complementary to the core control regime of CDCL. Formulas of second-order logic can be useful to express semantic characterizations of processing techniques for propositional formulas, which often preserve second-order equivalences [Phi01].
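Clause-level variable elimination in the style of the Davis-Putnam method can be sketched as follows: the clauses containing the variable are replaced by all non-tautological resolvents on it. The representation and names in this sketch are illustrative assumptions:

```python
from itertools import product

def eliminate_var(clauses, v):
    """Davis-Putnam style elimination of propositional variable v from a CNF
    clause set.  Clauses are frozensets of literals; a literal is a pair
    (name, polarity).  The clauses containing v are replaced by all
    non-tautological resolvents on v."""
    pos = [c for c in clauses if (v, True) in c]
    neg = [c for c in clauses if (v, False) in c]
    rest = [c for c in clauses if (v, True) not in c and (v, False) not in c]
    for cp, cn in product(pos, neg):
        r = (cp - {(v, True)}) | (cn - {(v, False)})
        if not any((name, not pol) in r for (name, pol) in r):  # drop tautologies
            rest.append(r)
    return rest

# {p or q, not-q or r}  --eliminate q-->  {p or r}
cs = [frozenset({('p', True), ('q', True)}),
      frozenset({('q', False), ('r', True)})]
print(eliminate_var(cs, 'q') == [frozenset({('p', True), ('r', True)})])  # True
```

The result is satisfiability-equivalent to the input (indeed equivalent to existentially quantifying v away); preprocessors apply such steps selectively, since the number of resolvents can grow quadratically.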
Automated first-order theorem proving is not just applied to prove theorems in mathematics, but also, probably motivated by the program of formalizing common sense initiated in the late 1950s by McCarthy, as a general knowledge processing mechanism. Deductive databases, generalizations of the well-known relational databases, logic programming and reasoning in description logics for knowledge representation might be viewed as spin-offs of automated theorem proving.
Prolog is a logic programming language based on associating a flow of control with formulas of first-order logic in a restricted syntactic form. Unlike other logic programming approaches such as answer set programming it is a full general-purpose programming language, allowing many techniques known from other programming languages. Like LISP, the first prominent AI programming language, Prolog has a uniformly structured syntax that allows directly reading in and printing out data structures – such as formulas in some logic or in Prolog itself – and specifying procedures operating on them by means of structural recursion with term patterns. The built-in support for unification and depth-first search is useful for building blocks that manipulate first-order formulas.
SWI Prolog is a robust and portable implementation with efficient interfaces to modern Semantic Web formats such as RDF.
In many scenarios it is useful to evaluate a query expressed in some user vocabulary in two steps: it is first rewritten and decomposed to suit available repositories and processing engines, which then perform the actual processing. Many ways to rewrite queries can be considered abstractly as second-order quantifier elimination: The query in the user vocabulary is conjoined with formulas representing background knowledge and mappings to the repositories and processing languages. Second-order quantifier elimination then effects forgetting about the symbols that are not from the repository and processing layer.
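In a toy propositional setting, this rewriting step can be sketched directly: conjoin the query with the mapping formula and forget (existentially project away) the symbols outside the repository vocabulary. All formulas and names below are illustrative assumptions, not taken from the project:

```python
from itertools import product

VARS = ['p', 'q', 'r']

def assignments(vs):
    for bs in product([False, True], repeat=len(vs)):
        yield dict(zip(vs, bs))

# Background mapping: the user-vocabulary atom q is defined as p and r,
# where p and r belong to the repository layer.
F = lambda a: a['q'] == (a['p'] and a['r'])   # mapping / background knowledge
G = lambda a: a['q']                          # query in the user vocabulary
V = ['p', 'r']                                # repository vocabulary

def forget(phi, keep):
    """Project phi onto the variables in keep by existentially quantifying
    the remaining variables (brute-force propositional forgetting)."""
    def h(a):
        return any(phi(b) for b in assignments(VARS)
                   if all(b[v] == a[v] for v in keep))
    return h

# Conjoin query and mapping, then forget the non-repository symbol q:
H = forget(lambda a: F(a) and G(a), V)

# Under the mapping F, the rewritten query H agrees with the original query G.
print(all(H(a) == G(a) for a in assignments(VARS) if F(a)))  # True
```

Here H behaves like p and r, a query that the repository layer can evaluate without knowing the user-vocabulary symbol q.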
Some cases of query rewriting can also be understood as finding a definiens of the given query that is in terms of the target layer vocabulary and with respect to a formula that expresses background knowledge and mappings. Interpolation provides a way to compute such definientia, essentially by a translation to validity. For first-order logic, this technique succeeds in principle whenever a definiens in the target vocabulary exists, analogously to determining validity. By Craig's interpolation theorem [Cra57], if a first-order formula F entails another one G, then there is a third first-order formula H, called an interpolant of the first two, which only contains symbols occurring in both F and G, is entailed by F and entails G, in symbols:

F ⊨ H and H ⊨ G.
Such an interpolant can be extracted from a proof that F entails G or, equivalently, that F → G is valid. Based on the computation of interpolants, definientia in a given vocabulary can be computed, that is, for given formulas F and G[x], where [x] indicates that x possibly occurs as free individual variable in the formula, a third formula H[x] in the given vocabulary is found such that

F ⊨ ∀x (G[x] ↔ H[x]),
whenever a formula meeting the conditions stated on H[x] exists.
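In the propositional case, a strongest interpolant of F and G can be obtained by forgetting the atoms private to F; a brute-force sketch with illustrative formulas (all names here are assumptions of the example):

```python
from itertools import product

VARS = ['p', 'q', 'r']

def assignments():
    for bs in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bs))

F = lambda a: a['p'] and a['q']   # vocabulary {p, q}
G = lambda a: a['p'] or a['r']    # vocabulary {p, r}
shared = ['p']                    # symbols occurring in both F and G

def entails(f, g):
    return all(g(a) for a in assignments() if f(a))

def H(a):
    """Strongest interpolant: forget the F-private variables from F."""
    return any(F(b) for b in assignments()
               if all(b[v] == a[v] for v in shared))

# F entails G; H is in the shared vocabulary, is entailed by F and entails G.
print(entails(F, G), entails(F, H), entails(H, G))  # True True True
```

Here H behaves like the formula p, which is indeed an interpolant of p ∧ q and p ∨ r. Proof-based interpolant extraction, as discussed above, avoids such exhaustive enumeration.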
Contact: info@christophwernhard.com