A Polynomial Translation of S4 into the Intuitionistic Propositional Calculus

It is known that both S4 and the Intuitionistic Propositional Calculus
(Int) are PSPACE-complete. This guarantees that there is a polynomial
translation from each system into the other. A simple translation from Int
into S4, the Gödel-Tarski translation, is well known: given a formula A of
Int, one obtains A* by prefixing [] to every subformula.
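The boxing recursion can be sketched directly. (A minimal illustration; the tuple encoding of formulas is my own, not part of the abstract.)

```python
# Gödel-Tarski translation: prefix the S4 box to every subformula.
# Formulas are nested tuples -- a hypothetical encoding for illustration:
# ("var", "p"), ("~", A), ("->", A, B), ("&", A, B), ("|", A, B).

def godel_tarski(f):
    """Translate an Int formula into S4 by boxing every subformula."""
    op = f[0]
    if op == "var":
        return ("box", f)
    if op == "~":
        return ("box", ("~", godel_tarski(f[1])))
    # binary connectives: ->, &, |
    return ("box", (op, godel_tarski(f[1]), godel_tarski(f[2])))

# Example: p -> q becomes []([]p -> []q)
print(godel_tarski(("->", ("var", "p"), ("var", "q"))))
```

The translation is clearly polynomial: it adds one box per subformula.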

Friedman and Flagg also have a kind of inverse to the Gödel-Tarski
translation, using Lindenbaum algebras. However, it is not polynomial. In
fact, no simple polynomial translation from S4 into Int is commonly known.

The aim of this talk will be to present one. It is based on Kripke semantics
and simulates model checking in S4 within the Intuitionistic Propositional
Calculus. This is achieved by presenting a standard form for Kripke frames
which is proven complete for S4, and indexing variables to represent a
location in the given frame.

The translation will have the form A^Int = P(A) -> A(0,0), where P(A) is a
conjunction of formulas designed to mimic the truth definitions of S4 and
A(0,0) can be read as "A evaluated at the root".

We prove that S4 |- A <=> Int |- A^Int by generating a model disproving
A^Int from one disproving A, and vice versa.

On the Logical Analysis of Proofs Based on First-Order Arguments

In this talk we discuss the potential of a logical analysis of proofs
beyond the mere establishment of constructive bounds and the extraction of
witnesses.

We discuss the analysis of Fürstenberg's topological proof of the
existence of infinitely many primes as an introductory example. We continue
with more advanced examples, e.g. considering exponent 7 in earlier proofs
of subcases of Fermat's conjecture.

The method of analysis we employ is first-order cut-elimination and the
extraction of suitable Herbrand disjunctions (sometimes in more unusual
forms, e.g. in the presence of conservative rules such as term induction).

We finally discuss the extent to which this kind of analysis can (and
should) be automated.

Extraction of Algorithms from Non-Effective Cut Elimination Proofs

The non-effective cut elimination proof uses König's lemma to
obtain a non-closed branch of a proof-search tree $T_A$ (without cut) for a
formula $A$, if $A$ is not cut-free provable. A partial model
(semi-valuation) corresponding to this branch and verifying $\neg A$ is
extended to a total model for $\neg A$ using arithmetical
comprehension. This contradicts soundness, if $A$ is derivable with cut.
The same argument works for Herbrand's theorem. We discuss algorithms
for obtaining cut-free proofs that correspond to this schema and are quite
different from a complete search through all possible proofs.
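The proof-search tree $T_A$ underlying this argument can be illustrated by a toy cut-free prover for a Tait-style (one-sided) propositional sequent calculus. This is a simplified sketch of the general idea, not the algorithm of the abstract.

```python
# Toy cut-free proof search in a Tait-style calculus. Formulas are in
# negation normal form: ("lit", name, polarity), ("or", A, B), ("and", A, B).
# A branch of the search tree closes when the sequent contains a
# complementary pair C, ~C (an axiom).

def neg(lit):
    return ("lit", lit[1], not lit[2])

def provable(seq):
    """Return True iff the sequent (a list of formulas) is cut-free provable."""
    lits = [f for f in seq if f[0] == "lit"]
    # axiom: the sequent contains both C and ~C
    if any(neg(l) in lits for l in lits):
        return True
    for i, f in enumerate(seq):
        rest = seq[:i] + seq[i + 1:]
        if f[0] == "or":      # one premise: both disjuncts join the sequent
            return provable(rest + [f[1], f[2]])
        if f[0] == "and":     # two premises, one per conjunct (branching)
            return provable(rest + [f[1]]) and provable(rest + [f[2]])
    return False  # only atomic formulas and no axiom: a non-closed branch

p, np_ = ("lit", "p", True), ("lit", "p", False)
print(provable([("or", p, np_)]))   # p v ~p: True
print(provable([p]))                # p alone: False
```

In the non-effective argument, a non-closed branch (here, the `return False` case, obtained via König's lemma in the infinitary setting) yields a semi-valuation falsifying the goal formula.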

A primitive recursive bound for the height of $T_A$ is obtained,
depending on a given proof of $A$ with cut.

The simplest case is the cut on an atomic formula $C$ in a Tait-style
calculus:

$A,C$ and $A,\neg C$ yield $A$ by cut on $C$. The cut-free proof of $A,C$
provides an $h$ such that all non-closed branches of $T_A$ of
length $\geq h$ contain $\neg C$. A similar bound $h'$ exists for
$C$. Hence every branch of $T_A$ of length $\geq \max(h,h')$ contains
$C,\neg C$, and hence is an axiom.

Proofs Without Syntax

[Augustus De Morgan, 1868]

Proofs are traditionally syntactic, inductively generated objects. This paper presents an abstract mathematical formulation of propositional logic in which proofs are combinatorial (graph-theoretic), rather than syntactic. The paper defines a *combinatorial proof* of a proposition P as a graph homomorphism h : G -> G(P), where G(P) is a graph associated with P, and G is a coloured graph. The main theorem is soundness and completeness: P is true iff there exists a combinatorial proof h : G -> G(P).

To appear in Annals of Mathematics.

http://boole.stanford.edu/~dominic/papers/pws/

Modularity in Inference Systems

Decision procedures for the validity of quantifier-free
formulas involving theories such as term equality, linear arithmetic,
lists, and arrays have a wide range of applications. Methods
for combining theory-specific decision procedures go back to Nelson
and Oppen's seminal work. We present an abstract modular framework
for constructing ground decision procedures for multiple theories by
composing modules that are specific to the individual theories. Each
inference module can be independently refined based on theory-specific
assumptions so that it is possible to develop correct implementations
by discharging simple, incremental proof obligations. We show how a
number of existing combination methods can be formally related through
refinement.

(Joint work with Harald Ganzinger and Harald Ruess.)
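The Nelson-Oppen style exchange of entailed equalities between theory modules can be sketched as follows. The module interface here is hypothetical and serves only to illustrate the modular architecture, not the framework of the talk.

```python
# Toy Nelson-Oppen style combination loop: theory modules exchange
# entailed equalities over shared variables until a fixpoint is reached.

class EqualityModule:
    """Tracks equalities with union-find; reports entailed equalities."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def assert_eq(self, x, y):
        self.parent[self.find(x)] = self.find(y)

    def entailed(self, vars_):
        """All equalities between the given shared variables."""
        return {(x, y) for x in vars_ for y in vars_
                if x < y and self.find(x) == self.find(y)}

def combine(modules, shared, initial):
    """Propagate equalities between modules until nothing new is learned."""
    for m, (x, y) in initial:
        m.assert_eq(x, y)
    known, changed = set(), True
    while changed:
        changed = False
        for m in modules:
            for eq in m.entailed(shared) - known:
                known.add(eq)
                for other in modules:   # broadcast to all modules
                    other.assert_eq(*eq)
                changed = True
    return known

m1, m2 = EqualityModule(), EqualityModule()
eqs = combine([m1, m2], ["a", "b", "c"],
              [(m1, ("a", "b")), (m2, ("b", "c"))])
print(sorted(eqs))   # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

Neither module alone knows a = c; the equality arises only through propagation, which is the essential point of the combination scheme.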

Axiomatizing Einstein's Mice: Solving Logic Problems with ATP

Despite well-known difficulties, the quest for an Ars Universalis does not die. 'Einstein's Mice' (although of very doubtful attribution) is one of a class of traditional logic puzzles. We use it to illustrate techniques in automated theorem proving with Tau, a practical (web-deployed) and extensible hybrid theorem prover for first-order predicate calculus with identity. We use Tau to solve 'Einstein's Mice' and similar logic puzzles, and to represent mathematical theories. We conclude with a plea for a 'common logic' (FOL-based) approach to ATP, and report on a potential framework - an extensible syntax and semantics for such work.

Tau is flexible and user-configurable, accepts the KIF Language, is implemented in Java, and has multiple user interfaces, including web-based and command-line. Tau combines rule-based rewriting strategies with Model Elimination, uses Brand's Modification
Method to implement identity, and accepts user-configurable heuristic search to speed the search for proofs. Tau optionally implements mathematical induction. Formulas are input and output in KIF or infix FOPC, and other external formats
can be added. Tau can run on any system for which a current Java Virtual Machine is available.
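For readers unfamiliar with this class of puzzles, a miniature Einstein-style instance can be solved by brute force over finite models. (This toy solver is illustrative only, and the clues are invented; Tau itself works by first-order proof search.)

```python
# Brute-force solution to a miniature Einstein-style puzzle:
# three houses, three people, three pets, three clues.
from itertools import permutations

houses = [1, 2, 3]
people = ["Ann", "Ben", "Cy"]
pets = ["mouse", "cat", "dog"]

solutions = []
for who in permutations(people):          # who[i] lives in house i+1
    for pet in permutations(pets):        # pet[i] lives in house i+1
        owner = dict(zip(pet, who))       # pet -> person (same house)
        home = dict(zip(who, houses))     # person -> house number
        # Clues: Ann lives in house 1; the cat lives next door to Ann;
        # Ben keeps the dog.
        if (home["Ann"] == 1
                and abs(home[owner["cat"]] - 1) == 1
                and owner["dog"] == "Ben"):
            solutions.append(owner["mouse"])
print(solutions)   # ['Ann'] -- the mouse belongs to Ann
```

An ATP such as Tau instead encodes the clues as first-order axioms with identity, so that the unique-solution claim becomes a theorem rather than the output of an exhaustive search.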