The paradoxes of 'self-reference' (featuring semantic notions such as truth, reference, properties and predication, as well as non-semantic ones such as set, class or number) are known to exploit a few critical logical moves. Non-classical solutions to these paradoxes reject one or another of these moves. For example, paraconsistent/dialetheist solutions reject the inference from a contradiction to an arbitrary conclusion. "Gappy" solutions reject excluded middle, and contraction-free solutions reject the rule of contraction (from A->(A->B), infer A->B).
In this paper, I explore the cost of non-classical accounts of the paradoxes. In particular, I show that the paradoxes strike at the heart of our logic of propositions, and that a non-classical theory must be non-classical in more than just the logic of negation and implication.
The first-order language L of positively constructed formulas (PCFs) is defined. The main elements of PCFs are type quantifiers (TQs): "for all X such that A, the subformula ... holds" and "there exists Y such that B, and the subformula ... holds". Here X, Y, ... are (possibly empty) sets of individual variables, and A, B, ... are conjuncts, each of which is either a (possibly empty) finite set of atoms or F. F is an analogue of False: every conjunct A is a subset of F. The empty conjunct means True. Each leaf of the tree structure of a PCF is an existential TQ whose subformula "..." is empty. A PCF is built from such leaves only by means of TQs and the connectives & and V.
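The tree structure just described can be sketched as a small data type. This is only an illustrative representation under my own naming (Atom, TypeQuant, is_leaf are assumptions, not notation from the paper):

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Tuple

@dataclass(frozen=True)
class Atom:
    pred: str                  # predicate symbol
    args: Tuple[str, ...]      # individual variables / constants

@dataclass
class TypeQuant:
    kind: str                       # "forall" or "exists"
    variables: FrozenSet[str]       # possibly empty set of individual variables
    conjunct: FrozenSet[Atom]       # possibly empty set of atoms (empty = True)
    children: List["TypeQuant"]     # subformulas joined by & / V; empty at a leaf

def is_leaf(q: TypeQuant) -> bool:
    # a leaf of a PCF tree is an existential TQ whose subformula is empty
    return q.kind == "exists" and not q.children

# an existential leaf: "there exists y such that P(y)"
leaf = TypeQuant("exists", frozenset({"y"}), frozenset({Atom("P", ("y",))}), [])
```

A non-leaf node would wrap such leaves under further TQs, e.g. a universal TQ whose `children` list holds the leaf above.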
The language L is complete with respect to the expressiveness of the language of the classical predicate calculus. A new logical calculus J is proposed, with a complete strategy of automated proving (AP). J contains only one axiom and one inference rule, both defined directly in terms of PCFs, i.e. without preliminary standardization (in particular, without the Skolemization used in the resolution method). Our AP method has the main merits of the resolution method; e.g., it is well suited to machine implementation. Moreover, it is also easily applicable by humans. Importantly, the AP method preserves the global structure of the first-order-definable knowledge (e.g., it does not eliminate the quantifiers), does not generate a redundant variety of terms (owing to the absence of Skolemization), offers a compact knowledge representation, and combines well with heuristics in applications. Some results on the constructive semantics of L and J are also presented.
Since many applications are characterized by restricted resources of time, memory, and information, we propose a method of AH. It is based on solving logical equations in L and on the AP method. For Horn-like PCFs, we describe some cases in which necessary and sufficient conditions for provability can be generated. In general, the AH method allows one either to transform an unprovable formula into a provable one or to accelerate the proof search for provable formulas. The AH method is related to inductive programming (e.g., inverting resolution; S.H. Muggleton and W. Buntine, 1988), induction theorem proving (namely, generalization procedures; R.S. Boyer and J.S. Moore, 1988; B. Hummel, 1990), knowledge discovery (e.g., scientific theory formation; D.B. Lenat, 1981), and the transformation of unprovable formulas (G.I. Davydov, 1967).
Consider formulas constructed from propositional variables and negated variables using a total of L binary AND and OR connectives. Let B(n,L) denote the number of distinct Boolean functions of a given collection of n variables that can be defined by such formulas. Since a given function may be defined by many different formulas, a challenging question is: what is B(n,L)?
Joint work with Petr Savicky has provided a reasonably good estimate for B(n,L) which applies over a large range of values of L. This study was initially motivated by the problem of comparing the computational ability of polynomial size circuits with that of polynomial size formulas.
The talk will concentrate on the ideas underlying the estimate, including tree enumeration, glossing over many of the technical details. In the process, an interesting constant, which seems to have been first noticed by the nineteenth-century logician Ernst Schröder, will make an appearance.
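For very small n and L, the quantity B(n,L) can be computed exactly by dynamic programming over truth tables. This brute-force sketch is my own illustration, not the enumeration method of the talk:

```python
def boolean_counts(n, max_L):
    """Return [B(n,0), B(n,1), ..., B(n,max_L)], where B(n,L) is the
    number of distinct Boolean functions of n variables definable by
    formulas over literals with exactly L binary AND/OR connectives.
    Truth tables are encoded as bitmasks over the 2^n assignments;
    feasible only for tiny n and L."""
    m = 1 << n              # number of assignments
    full = (1 << m) - 1     # all-ones truth table
    lits = set()
    for i in range(n):
        t = 0
        for a in range(m):
            if (a >> i) & 1:        # variable i is true in assignment a
                t |= 1 << a
        lits.add(t)                 # positive literal
        lits.add(t ^ full)          # negated literal
    # levels[L] = truth tables achievable with exactly L connectives
    levels = [frozenset(lits)]
    for L in range(1, max_L + 1):
        new = set()
        for i in range(L):
            for f in levels[i]:
                for g in levels[L - 1 - i]:
                    new.add(f & g)  # AND of the two subformulas
                    new.add(f | g)  # OR of the two subformulas
        levels.append(frozenset(new))
    return [len(s) for s in levels]
```

For example, boolean_counts(2, 1) gives B(2,0) = 4 (the four literals) and B(2,1) = 14 (the literals, the four conjunctions and four disjunctions of two distinct literals, plus the constants 0 and 1).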
Circumscription is a form of non-monotonic reasoning, introduced as a way of characterizing defaults using second-order logic. The consequences of circumscription are those formulas true in the minimal models under a pre-order on models. In the case of domain circumscription, the pre-order is the submodel relation. Formula circumscription is characterized by minimizing a set of formulas: one model is preferred to another when the extensions of the minimized formulas in the first are subsets of the extensions in the second.
We show that the propositional version of formula circumscription can capture all pre-orders on valuations of finite languages. We consider the question of infinite languages, and give the corresponding representation theorems. We further show that there are natural defaults (inertia in temporal projection), captured by inductive definitions, that cannot be captured by circumscription in the first order case.
Finally, contrary to previous claims, we show that propositional formula circumscription can capture all preferential consequence relations over finite propositional languages, as defined by Kraus et al. Thus, in the finite propositional case, there is no restriction on the kinds of preferential defaults that circumscription can describe.
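In the finite propositional case, the preorder described above can be computed by brute force over valuations. The following is a minimal sketch under my own naming; the bird/ab/flies default is the standard illustrative example, not one taken from the paper:

```python
from itertools import product

def circumscribe(atoms, theory, minimized):
    """Propositional formula circumscription, brute force.
    `theory` and each formula in `minimized` are predicates on a
    valuation (dict: atom -> bool). A model v is at least as preferred
    as w when every minimized formula true in v is also true in w.
    Returns the minimal models of `theory` under this preorder."""
    models = [dict(zip(atoms, bits))
              for bits in product([False, True], repeat=len(atoms))
              if theory(dict(zip(atoms, bits)))]
    def true_set(v):
        # indices of minimized formulas holding in v (their "extensions")
        return {i for i, f in enumerate(minimized) if f(v)}
    def strictly_below(v, w):
        return true_set(v) < true_set(w)   # proper subset: strictly preferred
    return [w for w in models
            if not any(strictly_below(v, w) for v in models)]
```

Minimizing abnormality in the theory "bird, and (ab or flies)" leaves a single minimal model in which the bird flies, as expected of the default.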
Tarski gave a general semantics for deductive reasoning: a formula a may be deduced from a set A of formulas iff a holds in all models in which all the elements of A hold.
A more liberal semantics has been considered: a formula a may be deduced from a set A of formulas iff a holds in all of the preferred models in which all the elements of A hold. Shoham proposed that the notion of preferred models be defined by a partial ordering on the models of the underlying language. A less constrained method, based on a set of natural properties of choice functions, is described in this paper. This semantics is here shown to be equivalent to a semantics based on comparing the relative importance of sets of models, by what amounts to a qualitative probability measure. The consequence operations defined by the equivalent semantics are then characterized by a weakening of Tarski's properties in which the monotonicity requirement is replaced by three weaker conditions.
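A toy instance of this preferred-model semantics shows how the monotonicity requirement fails. The ordering (fewer true atoms is preferred) and the atom names are assumptions chosen for illustration, not constructs from the paper:

```python
from itertools import product

atoms = ['p', 'q']
worlds = [dict(zip(atoms, bits))
          for bits in product([False, True], repeat=len(atoms))]

def rank(v):
    # assumed Shoham-style ordering: valuations with fewer true atoms
    # are preferred
    return sum(v.values())

def choice(models):
    # the choice function C: keep only the rank-minimal models
    if not models:
        return []
    best = min(rank(v) for v in models)
    return [v for v in models if rank(v) == best]

def follows(premises, conclusion):
    # a follows from A iff a holds in all chosen models of A
    models = [v for v in worlds if all(p(v) for p in premises)]
    return all(conclusion(v) for v in choice(models))
```

From the empty premise set, not-p follows (the preferred world makes everything false); after adding the premise p, not-p no longer follows, so the consequence operation is non-monotonic.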
Classical propositional connectives are characterized by natural introduction-elimination rules in a nonmonotonic setting. Even in the nonmonotonic setting, one obtains classical propositional logic.