Logic Lunch Abstracts Spring 2003

Aarati Parmar (Stanford)
A Logical Measure of Progress for Planning

Planning is the task of finding a sequence of actions to achieve a goal state from an initial one. It is a central problem of AI. The most successful kinds of planners to date are those that employ heuristic search. In this paradigm, the problem of planning is reduced to a search over sequences of actions, where the search is guided by a numerical function which estimates the distance to the goal state.
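To make the paradigm concrete, here is a minimal Python sketch of greedy best-first search (a generic sketch, not the planners discussed in the talk), with the state space, actions, and heuristic h all left as hypothetical parameters:

    import heapq
    from itertools import count

    def heuristic_plan(initial, goal_test, actions, h):
        # Greedy best-first search: always expand the state whose
        # heuristic estimate h(state) of the distance to the goal is
        # smallest.  actions(state) yields (action, next_state) pairs.
        tie = count()  # tie-breaker so the heap never compares raw states
        frontier = [(h(initial), next(tie), initial, [])]
        seen = {initial}
        while frontier:
            _, _, state, plan = heapq.heappop(frontier)
            if goal_test(state):
                return plan  # the sequence of actions reaching the goal
            for action, nxt in actions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (h(nxt), next(tie), nxt, plan + [action]))
        return None  # no plan exists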

We argue that this numerical function is a poor candidate for guiding the construction of some plans. Specifically, generating plans for everyday tasks (such as running errands or doing housework) requires little heuristic search; there is usually an obvious measure of progress that indicates the next logical action that will bring one closer to achieving one's goals. Furthermore, this measure of progress is not a numerical estimate of the number of steps left to the goal. Instead, it is a concept indicating which subgoals need to be completed next.

In this talk, we present a logical formalization of this measure of progress and show how it can be used to solve planning problems entirely without search. Measures of progress come in different flavors, and we present these different kinds as well.
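By contrast with the search loop sketched above, if a measure of progress names the next subgoal outright, the planning loop needs no frontier at all. The following sketch is one possible reading of that idea, with next_subgoal and achieve as hypothetical interfaces rather than the formalization given in the talk:

    def progress_plan(state, goal_test, next_subgoal, achieve, max_steps=1000):
        # Search-free planning: the measure of progress selects the
        # subgoal to complete next; achieve(state, subgoal) returns the
        # one obvious action for that subgoal and the resulting state.
        plan = []
        for _ in range(max_steps):
            if goal_test(state):
                return plan
            subgoal = next_subgoal(state)  # which subgoal needs completing next?
            action, state = achieve(state, subgoal)
            plan.append(action)
        raise RuntimeError("exceeded step bound without reaching the goal")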


John McCarthy (Stanford)
First order theories of individual concepts and propositions

http://www-formal.stanford.edu/jmc/concepts.html

The paper was published in Machine Intelligence 9, 1979.

We discuss first order theories in which individual concepts and propositions are admitted as mathematical objects along with the things they denote. This allows very straightforward formalizations of knowledge, belief, wanting, and necessity in ordinary first order logic without modal operators. Applications are given in philosophy and in artificial intelligence. We do not treat general concepts, and we do not present any full axiomatizations but rather show how various facts can be expressed.
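The flavor of the approach can be seen in the standard telephone-number example (my rendering, not the paper's exact axioms): a concept such as Telephone(Mike) and its denotation telephone(mike) are distinct objects, so knowledge applies to concepts without licensing substitution of co-denoting terms:

\[
denot(Telephone(Mike)) = telephone(mike)
\]
\[
knows(pat, Telephone(Mike)) \wedge telephone(mike) = telephone(mary)
\ \not\Rightarrow\ knows(pat, Telephone(Mary))
\]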


Darko Sarenac (Stanford)
Modal Logics for Product Topologies

Joint work with Johan van Benthem and Guram Bezhanishvili

Arguably the most basic logic for spatial reasoning is S4, interpreted over topological spaces. Our primary goal in this paper is to study a minimal augmentation of this logic. We study multimodal languages over products of topologies, in which each component of the product contributes its own modality. This work generalizes some of the results of D.M. Gabbay and V.B. Shehtman on products of modal logics to the topological setting.

Thus in the most general case, each component modality will have S4 as a complete axiomatization. The interesting question, of course, is how the modalities of the various dimensions interact. The expansion of the language enables us to 'track dimensions' in the product, and this in turn will add some expressive power to the language.
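For concreteness (notation mine, a standard rendering rather than the paper's): the two modalities on a product X x Y of topological spaces are interpreted componentwise, each quantifying over open sets in its own coordinate:

\[
(x,y) \models \Box_1 \varphi \iff \exists U \ni x \text{ open in } X\ \forall x' \in U :\ (x',y) \models \varphi
\]
\[
(x,y) \models \Box_2 \varphi \iff \exists V \ni y \text{ open in } Y\ \forall y' \in V :\ (x,y') \models \varphi
\]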


Yiannis N. Moschovakis (UCLA and University of Athens)
Is the Euclidean algorithm optimal among its peers?

In the first (main) part of this talk, I will describe recent joint work with Lou van den Dries in arithmetic complexity, including the following result:

Theorem. If $\alpha$ is any recursive program from $+$, $-$, and division with remainder which decides co-primeness for pairs of numbers, and if $c_\alpha(a,b)$ is the (strict, parallel, time) complexity of $\alpha$, then for all co-prime $a, b$,

\[ \Big|\frac{a}{b}-\sqrt{2}\Big| < \frac{1}{b^2} \implies c_\alpha(a,b) \geq 8\log\log(b). \]

It follows that any recursive program which computes the greatest common divisor $\gcd(a,b)$ from $+$, $-$, and division with remainder requires at least $8\log\log b$ steps on infinitely many inputs. This is one $\log$ away from a natural formulation of the optimality of the Euclidean algorithm, which remains open.

Here, recursive programs from given operations are relativized, McCarthy-style recursive definitions which "spend" one time unit each time they call a given operation. In the second part of the talk, I will argue (briefly) that the proofs of these results support the proposal to define first-order algorithms via their natural expression as relativized, McCarthy-style recursive programs.
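A small numerical illustration of my own (natural logarithms assumed, and only the calls to division with remainder counted): pairs satisfying the theorem's hypothesis include the continued-fraction convergents a/b of sqrt(2), on which the Euclidean algorithm uses a number of divisions roughly proportional to log b, one log above the proven lower bound:

    import math

    def euclid_steps(a, b):
        # Euclidean algorithm, charging one time unit per call to
        # division with remainder (approximating the cost model above).
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return steps

    # Convergents a/b of sqrt(2) satisfy |a/b - sqrt(2)| < 1/b^2 and are
    # co-prime; successive ones are generated by (a, b) -> (a + 2b, a + b).
    a, b = 3, 2
    for _ in range(60):
        a, b = a + 2 * b, a + b

    print("Euclid divisions:", euclid_steps(a, b))          # ~ log b
    print("8 log log b:     ", 8 * math.log(math.log(b)))   # the lower bound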


Johan van Benthem (Stanford)
The Categorial Fine-Structure of Natural Language

Categorial grammar analyzes linguistic syntax and semantics in terms of type theory and lambda calculus. A major attraction of this approach is its unifying power, as its basic function/argument structures occur across the foundations of mathematics, language and computation. This talk considers, in a very light example-based manner, where this elegant logical paradigm stands when confronted with the wear and tear of reality.

Starting from a brief history of the Lambek tradition since the 1980s, we touch upon three issues: (a) the fit of the lambda calculus engine to characteristic semantic structures in natural language, (b) the coexistence of the original type-theoretic and more recent modal interpretations of categorial logics, and (c) the place of categorial grammars in the total architecture of natural language, which seems to involve mixtures of interpretation and inference.
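A one-line reminder of the function/argument machinery behind issue (a), using the textbook example rather than anything specific to the talk: an intransitive verb takes type np\s, and syntactic application runs in parallel with beta reduction on the semantic side:

\[
Mary : np \qquad runs : np \backslash s \qquad \Rightarrow \qquad Mary\ runs : s
\]
\[
(\lambda x.\, run(x))(mary) \to_\beta run(mary)
\]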

References

  1. "Language in Action", Elsevier & MIT Press, 1995.
  2. 'The Categorial Fine-Structure of Natural Language', manuscript, ILLC Amsterdam, January 2003.

Ben Escoto (Stanford)
Ockham's Razor and Kolmogorov complexity theory

In a recent textbook, Vitanyi and Li argue that Kolmogorov complexity theory can

  1. define simplicity in a relatively language-neutral way, so that an object's simplicity is an intrinsic feature of it, and
  2. show that simple theories predict more accurately than complex theories.

In my talk I argue against both claims.
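For orientation, the standard definitions behind claim 1 (notation mine): the complexity of x relative to a universal machine U is the length of a shortest program producing x, and the invariance theorem bounds the effect of changing the reference machine by an additive constant:

\[
K_U(x) = \min\{\, |p| : U(p) = x \,\}, \qquad |K_U(x) - K_V(x)| \le c_{U,V} \ \text{for all } x
\]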

Isidora Stojanovic (Stanford)
Referentialist and Descriptivist Approaches to Indexicals: Towards a Compromise

Indexicals can be used both as devices of direct reference and to convey descriptive information. How can a unified account of the semantics of indexicals and their role in cognition reconcile these conflicting features? After a brief look at the motivations and the shortcomings of the dominant, Kaplanian view, I will present a descriptive account of indexicals that borrows tools from dynamic logic, and show how it can account for the referentialist insights.


Guram Bezhanishvili (New Mexico State University)
Closure Algebras

I'll discuss some new results in the theory of closure algebras with the main emphasis on the connection with topology and modal logic.
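For reference, the standard definition (not specific to the new results): a closure algebra is a Boolean algebra equipped with an operator C satisfying Kuratowski's axioms, the algebraic counterpart of the S4 diamond:

\[
a \le Ca, \qquad CCa = Ca, \qquad C(a \vee b) = Ca \vee Cb, \qquad C0 = 0
\]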


Makoto Kanazawa (University of Tokyo)
Computing Interpolants in Implicational Logics

Motivated by a computational model of the acquisition of word meanings (Siskind 1996, Kanazawa 2001), I consider the following problem in the simply typed lambda calculus: given a normal term T[x1,...,xm,y1,...,yn] with free variables x1,...,xm,y1,...,yn, find a term S[x1,...,xm] and another term P[z,y1,...,yn] such that P[S[x1,...,xm],y1,...,yn] beta-reduces to T[x1,...,xm,y1,...,yn]. There are always infinitely many solutions to this problem, but most of them are unnecessarily complex. I focus on those solutions in which the common type of S and z gives an interpolant for the 'split sequent' A1,...,Am; B1,...,Bn => C, where A1,...,Am are the types of x1,...,xm, B1,...,Bn are the types of y1,...,yn, and C is the type of T. The existing syntactic methods for proving the interpolation theorem, due to Maehara and Prawitz (adapted to implicational logics), find some such solutions but leave out other, arguably more interesting, ones.

In this talk, I present a new algorithm for computing interpolants in implicational intuitionistic logic and its substructural fragments that, like Prawitz's, works on natural deductions rather than sequent derivations, and that, unlike the existing methods, always finds a 'strongest' interpolant under a certain restricted but reasonable notion of what counts as an 'interpolant'.
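A toy instance of the problem, chosen by me for size rather than interest: with x : A and y : A -> B, the term T[x,y] = y x splits as S[x] = x and P[z,y] = y z, and the common type A of S and z interpolates the split sequent:

\[
T[x,y] = y\,x, \qquad S[x] = x, \qquad P[z,y] = y\,z, \qquad P[S[x],y] = y\,x = T[x,y]
\]
\[
\text{interpolant } A \ \text{for the split sequent}\ A;\ A \to B \Rightarrow B
\]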

