Logic Lunch Abstracts

Rick Tieszen (San Jose State)
Aspects of Husserl's Logic

This talk is based on a long paper on "Husserl's Logic" that I am writing for the Handbook of the History and Philosophy of Logic, D. Gabbay and J. Woods (eds.) (Springer). Husserl views logic in a very broad sense as mathesis universalis or as the 'science of all possible sciences'. I'll describe some of the details of this conception. 'Objective' formal logic, for example, is held to be a three-tiered subject in which each level, starting from logical grammar at the bottom, is a condition for the possibility of the next higher level. 'Consistency-logic' lies at the second level and 'truth-logic' lies at the third. Consistency-logic considers purely formal axiom systems and their 'ontological' correlates, which Husserl calls 'manifolds'. Manifold theory is concerned with purely formal ontology. 'Truth-logic' requires more than mere consistency of formal systems. It requires evidence for or intuition of objects, states-of-affairs, and so on. There are also regional ontologies that fall under formal ontology. I'll elaborate on these ideas, try to make them clear, and point out their relevance to various issues in philosophical logic.

Primary References (all works of Husserl): Logical Investigations, Formal and Transcendental Logic, Experience and Judgment, Philosophie der Arithmetik (Husserliana XII), Logik und Allgemeine Wissenschaftstheorie (Husserliana XXX), Einleitung in die Logik und Erkenntnistheorie (Husserliana XXIV), Studien zur Arithmetik und Geometrie (Husserliana XXI).


Johan van Benthem
Logics of Space

Modal logics of space date back to Tarski's work on topology. But modal structures arise all across geometry and related areas, such as linear algebra. We will show a few of the possibilities, and mention some recent results obtained with Marco Aiello, Guram Bezhanishvili, and Mai Gehrke. A first survey of the area is 'A Modal Walk Through Space'.
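
Tarski's topological interpretation mentioned above can be sketched concretely: Box is read as topological interior and Diamond as closure. The following Python fragment is my own illustration (the encoding, the three-point space, and all names are hypothetical, not from the talk):

```python
# Illustrative sketch of Tarski's topological semantics for modal
# logic: Box phi denotes the interior of phi's extension, Diamond phi
# its closure. Formulas are nested tuples; the space is finite.

def interior(s, opens):
    """Interior of s: the union of all open sets contained in s."""
    out = set()
    for o in opens:
        if o <= s:
            out |= o
    return frozenset(out)

def evaluate(formula, val, space, opens):
    """Extension (set of points) of a modal formula."""
    op = formula[0]
    if op == "atom":
        return val[formula[1]]
    if op == "not":
        return space - evaluate(formula[1], val, space, opens)
    if op == "and":
        return (evaluate(formula[1], val, space, opens)
                & evaluate(formula[2], val, space, opens))
    if op == "box":
        return interior(evaluate(formula[1], val, space, opens), opens)
    if op == "dia":  # closure, the dual of interior
        return space - interior(
            space - evaluate(formula[1], val, space, opens), opens)
    raise ValueError("unknown operator: %r" % op)

# A three-point space whose open sets form a chain.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]
val = {"p": frozenset({2, 3})}
box_p = evaluate(("box", ("atom", "p")), val, X, opens)  # empty: no open set fits inside {2, 3}
dia_p = evaluate(("dia", ("atom", "p")), val, X, opens)  # {2, 3} is already closed here
```

On such finite spaces this semantics coincides with Kripke semantics over the specialization preorder, which is one route from topology back to the familiar relational modal structures.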


Michael Beeson (San Jose State)
Double Negation Elimination

We consider various propositional logics in the implication-and-negation fragment, based on the rule of inference condensed detachment, which combines modus ponens and substitution into a single inference rule. We say that such a logic L admits double negation elimination if whenever it proves a theorem B, there is a proof of B which contains no double negations not occurring in B itself.

In particular, if B contains no double negations, there is a double-negation-free proof of B. We give general conditions on a logic L under which it admits double negation elimination. We verify (with computer assistance) that these conditions are satisfied by Lukasiewicz's propositional logic L1--L3, and by the "infinite-valued" logic axiomatized by A1--A4. Moreover, these systems also admit "strong double negation elimination". This means that if B* is obtained from B by erasing some double negations in B, and B is a theorem of L, then B* is also a theorem of L, and can be proved without using double negations not occurring in B*. We then take up the case of intuitionistic logic H, which does not satisfy strong double negation elimination, and show that it nevertheless does satisfy double negation elimination. The proof for H involves a translation to Gentzen sequent calculus and back.
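
The rule of condensed detachment combines modus ponens with most general unification: from a major premise A -> B and a minor premise C, one infers the most general instance of B under a unifier of A and C. The following sketch is my own toy encoding (not the authors' prover); variables are strings and compound formulas are tuples:

```python
# Condensed detachment over the implication-negation fragment.
# Terms: a variable is a string; ("->", a, b) and ("not", a) are
# compound formulas.

def subst(t, s):
    """Apply substitution s (variable -> term) to term t."""
    if isinstance(t, str):
        return subst(s[t], s) if t in s else t
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

def occurs(v, t, s):
    """Does variable v occur in t under substitution s?"""
    t = subst(t, s)
    if isinstance(t, str):
        return v == t
    return any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    """Most general unifier extending s, or None if none exists."""
    s = dict(s or {})
    a, b = subst(a, s), subst(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        if occurs(a, b, s):
            return None
        s[a] = b
        return s
    if isinstance(b, str):
        return unify(b, a, s)
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def rename(t, tag):
    """Rename variables apart by appending a tag."""
    if isinstance(t, str):
        return t + tag
    return (t[0],) + tuple(rename(a, tag) for a in t[1:])

def condensed_detachment(major, minor):
    """From major A -> B and minor C, return the most general
    instance of B under a unifier of A and C, or None."""
    major, minor = rename(major, "_1"), rename(minor, "_2")
    if not (isinstance(major, tuple) and major[0] == "->"):
        return None
    s = unify(major[1], minor)
    return None if s is None else subst(major[2], s)

# Detaching the axiom p -> (q -> p) against itself yields
# q -> (p' -> (q' -> p')), with variables renamed apart.
a1 = ("->", "p", ("->", "q", "p"))
derived = condensed_detachment(a1, a1)
```

Because the rule always produces the most general result, a derivation by condensed detachment subsumes every derivation obtainable by substitution followed by modus ponens, which is why it is the natural setting for the double-negation questions above.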

(Joint work with Larry Wos and Robert Veroff)


John McCarthy
Useful Counterfactuals

A prototypical useful counterfactual conditional sentence is

"If another car had come over the hill when you passed, there would have been a head-on collision."

This counterfactual, like other useful counterfactuals, has non-counterfactual consequences, e.g. that passing on hills can cause a collision. We regard such counterfactuals as meaningful in approximate theories covering only a limited set of phenomena. The simplest approximate theories have a Cartesian product structure in which the counterfactual involves change of only one component. Reasoning from counterfactuals is an extension of case-based reasoning, i.e. it is possible to reason from a case that didn't happen.
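
The Cartesian product structure can be made concrete in a few lines. In this toy illustration (all names and the theory itself are hypothetical, chosen only to mirror the example sentence), the world state is a tuple of independent components and the counterfactual changes exactly one of them:

```python
# Toy approximate theory with Cartesian product structure: the state
# has independent components, and a counterfactual re-evaluates the
# theory after changing exactly one component.

def collision(state):
    """Toy theory: a head-on collision occurs iff you are in the
    oncoming lane while another car occupies it."""
    return state["in_oncoming_lane"] and state["oncoming_car"]

def counterfactual(state, component, value, consequent):
    """Evaluate the consequent in the state that differs from the
    actual state in a single component, holding the rest fixed."""
    return consequent({**state, component: value})

# Actually: you passed on the hill and no car came over it.
actual = {"in_oncoming_lane": True, "oncoming_car": False}
fact = collision(actual)  # False: no collision actually occurred
# "If another car had come over the hill, there would have been
# a head-on collision."
would = counterfactual(actual, "oncoming_car", True, collision)  # True
```

The non-counterfactual consequence drops out of the same structure: since one reachable component change yields a collision, the theory supports "passing on hills can cause a collision" without any collision having occurred.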


Phokion G. Kolaitis (UCSC)
On the Complexity of Model Checking and Inference in Minimal Models

Every logical formalism gives rise to two fundamental algorithmic problems: model checking and inference. In propositional logic, the model checking problem is polynomial-time solvable, while the inference problem is coNP-complete. In propositional circumscription, however, these problems have higher computational complexity: the model checking problem is coNP-complete, while the inference problem is complete for the second level of the polynomial hierarchy PH.

In this talk, we survey recent results on the computational complexity of restricted cases of these problems in the context of Schaefer's framework of generalized satisfiability problems. These results establish dichotomies in the complexity of the model checking problem and the inference problem for propositional circumscription. They yield a complete classification of the "hard" and the "easier" cases of these problems and also provide efficiently checkable criteria that tell apart the "hard" cases from the "easier" ones. This is joint work with Lefteris M. Kirousis of the University of Patras.
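
The definitions behind circumscriptive model checking are easy to state by brute force: a minimal model of a propositional formula is one whose set of true atoms has no proper subset that also satisfies the formula. The sketch below is my own toy code, purely to illustrate the definitions; the coNP-hardness result above means no such enumeration scales in general:

```python
from itertools import product

def models(formula, atoms):
    """All models of `formula` over `atoms`, each represented as the
    frozenset of atoms it makes true."""
    found = []
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if formula(v):
            found.append(frozenset(a for a in atoms if v[a]))
    return found

def is_minimal_model(formula, atoms, m):
    """Model checking for propositional circumscription: m satisfies
    the formula and no model is a proper subset of m."""
    ms = models(formula, atoms)
    return m in ms and not any(n < m for n in ms)

# phi = p or q has models {p}, {q}, {p, q}; the first two are minimal.
phi = lambda v: v["p"] or v["q"]
atoms = ["p", "q"]
```

Circumscriptive inference then quantifies over all minimal models rather than all models, which is what pushes it from coNP up to the second level of PH.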


Geoffrey K. Pullum (UC Santa Cruz)
Model Theoretic Syntax: An Elementary Overview

Most formal work on syntax, whether for natural languages or logical ones, stems from the work of Emil Post on rewriting systems and recursively enumerable sets, which was motivated by the goal of mathematicizing proof theory. There is an alternative -- very much a minority approach at the moment -- that can be used to describe natural language syntax in a very different way. It employs the tools and methods of model theory, and has a number of deep and quite appealing consequences that depart much more radically from previous theorizing than most linguists have realized. It offers a new domain of applications for logic, and shows some promise of drawing the work of logicians and linguists closer together than they have been in recent decades. I outline the approach in a fairly elementary way, and draw from it a key implication for linguistics.

My main thesis hinges on a basic metalogical generalization that I take to be uncontroversial: Let L be a logical language interpreted over a class M of models. No matter what the character of L, and no matter what the models in M may be like, there is at least one kind of statement that cannot be made in L under this interpretation: we cannot state anything that involves quantification over or comparison between elements of M.

I connect this observation to a negative conclusion about a move in linguistics that seems to be increasingly fashionable but (I submit) should not be. This move involves assessment of well-formedness in a way that involves comparison of the structures of different natural language expressions. Once known as "transderivational constraints" and dismissed as unimplementable, statements of this kind have recently resurfaced in MIT work under the guise of "economy principles". Under the view I explore here, they cannot be legitimate.
I believe that the implication here -- that all "economy" analyses are in one way or another misconceived -- should be taken seriously, but I will not develop that thesis at length. Instead, concentrating on the logical side, I try merely to clarify the content of the model-theoretic approach to syntax, and the sense in which it yields the consequence noted for linguistics.


Last modified: Sat Sep 21 14:56:43 PDT 2002