Traditional approaches towards obtaining decidable
fragments of first-order logic involve restricting
quantifier alternation, number of variables, or
patterns of quantification. I will present some
recently proposed decidable fragments that are
obtained by restricting the use of negation. In
particular, "unary negation" logics only allow
negation to be applied to subformulas with at most
one free variable, and "guarded negation" logics only
allow negation to be applied to subformulas whose
free variables are guarded by an atomic relation.
The guarded-negation fragments of first-order logic
and of least fixed-point logic subsume many known
decidable logical formalisms (such as modal logic,
the modal mu-calculus, the guarded fragment,
conjunctive queries, and monadic Datalog). Moreover,
they turn out to be well-behaved, both from a
computational and from a model-theoretic point of
view.
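To illustrate the two restrictions, here are some schematic formulas (an illustrative sketch, not drawn from the cited papers):

```latex
% Unary negation: negation is applied only to subformulas
% with at most one free variable.
\varphi_{\mathrm{UN}}(x) \;=\; \exists y\,\bigl( R(x,y) \wedge \neg P(y) \bigr)

% Guarded negation: a negated subformula may have several free
% variables, provided an atom guards all of them (here R(x,y)).
\varphi_{\mathrm{GN}}(x) \;=\; \exists y\,\bigl( R(x,y) \wedge \neg S(x,y) \bigr)

% Outside both fragments as written: the negated subformula has
% two free variables and no guarding atom.
\psi(x) \;=\; \exists y\,\neg R(x,y)
```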

Based on joint work with Luc Segoufin, Vince Barany,
and Martin Otto (STACS 2011, ICALP 2011, VLDB 2012).

In this talk, I will first recount some of the well-known history of the development of the Church-Turing Thesis and examine some ideas about what counts as evidence for the Thesis as well as catalog a few early reactions. Following this survey, I will review and compare two relatively recent axiomatic approaches to computability and their impact on the Thesis, one by Sieg and another by Dershowitz and Gurevich.

Continuing the theme of Alan Turing's accomplishments, in this talk we broadly survey some interesting historical points and recent developments concerning the Turing Test. On the historical side, we briefly consider the historical precursors of the test, the origin of the term "Turing Test" and some of the content of Turing's famous 1950 paper. We then turn to the ongoing debate. First we review some very well-known objections to the test. On the one hand we consider objections to the claim that a thinking machine is possible at all: namely, Searle's Chinese Room thought experiment and the so-called Mathematical Objection. On the other, we consider objections to the claim that the test is a reasonable test for intelligence, namely the Lovelace Objection and the methodological objections of Ford and Hayes. Finally, we sketch some recent reactions to these objections (from papers written in the last decade), with a view to illustrating that the issues raised by the Turing Test have not been settled and visionary work is still being produced. In particular, we consider Rapaport's more recent reply to Searle's Chinese Room; Abramson's contention that Turing's account of the Mathematical Objection is frequently misunderstood; the contention of Bringsjord et al. that the Lovelace Objection indicates a better, alternative test for intelligence; and Moor's account of the continuing importance of the Turing Test for AI.

The use of cuts in formal sequent calculus proofs corresponds to the use of lemmas in mathematical proofs. Therefore, algorithmic methods for the elimination and introduction of cuts might allow us to automate the process of simplifying and structuring proofs. In this talk, I will present the CERES (cut-elimination by resolution) method, which works by extracting an unsatisfiable set of clauses from a proof with cuts and refuting this clause set by resolution. The obtained refutation then serves as a skeleton for a proof with only atomic cuts. Afterwards, I will show how to modify CERES in order to obtain a method of cut-introduction by resolution: CIRES.
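For reference, the rule that these methods manipulate is the standard cut rule of Gentzen's LK (notation as usual for two-sided sequents):

```latex
\frac{\Gamma \vdash \Delta, A \qquad A, \Pi \vdash \Lambda}
     {\Gamma, \Pi \vdash \Delta, \Lambda}\ (\mathrm{cut})
```

In the proofs produced by CERES, the remaining cuts are atomic, i.e. the cut formula $A$ above is an atom.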

The idea of enumeration operators, originally introduced by Myhill & Shepherdson and by Friedberg & Rogers, can be turned into a model of the lambda-calculus by developing the theory of an elementary notion of continuous operators on the power set of the integers. The easy construction of the model and its connection with recursion theory and enumeration degrees will be shown. On top of the lambda-calculus, it is also possible to build a straightforward model of Martin-Löf Type Theory. All of this is -- in some sense -- well known, but the elementary nature of the modeling ought to be exploited more.
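As a concrete sketch of the kind of continuous operator meant (this is the standard graph-model construction in the style of Plotkin and Scott's $P\omega$, offered here only for orientation): fix an enumeration $(e_n)_{n\in\mathbb{N}}$ of the finite subsets of $\mathbb{N}$ and a pairing function $\langle\cdot,\cdot\rangle$, and define application on subsets of $\mathbb{N}$ by

```latex
u \cdot v \;=\; \bigl\{\, m \in \mathbb{N} \;\bigm|\;
    \exists n \,\bigl( e_n \subseteq v \;\wedge\; \langle n, m \rangle \in u \bigr) \,\bigr\}
```

Such an operator is continuous in the relevant sense: whether $m \in u \cdot v$ depends only on a finite subset of $v$, which is exactly the finiteness condition underlying enumeration operators.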

This talk summarizes a recent paper in Annals of Pure and Applied Logic. It presents a right-sided formulation M of Gentzen's classical sequent calculus LK, without explicit structural rules for contraction and weakening. A minimality theorem holds for the propositional fragment Mp: any propositional sequent calculus S (within a standard class of right-sided calculi) is complete if and only if S contains Mp (that is, each rule of Mp is derivable in S). Thus one can view M as a minimal complete core of Gentzen's LK.
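For orientation, "right-sided" means that sequents have the form $\vdash \Delta$, with formulas typically taken in negation normal form; the logical rules of such a calculus look like the following standard examples (shown only to fix the format, not necessarily verbatim the rules of M):

```latex
\frac{\vdash \Delta, A, B}{\vdash \Delta, A \vee B}\ (\vee)
\qquad
\frac{\vdash \Delta, A \qquad \vdash \Delta, B}{\vdash \Delta, A \wedge B}\ (\wedge)
```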

Constructive Analysis (aka BISH) was introduced by Errett Bishop as a constructive redevelopment of Mathematics in the spirit of the intuitionistic tradition. As such, BISH is based on the BHK (Brouwer-Heyting-Kolmogorov) interpretation of logic, where the notions of `proof' and `algorithm' are central. Constructive Reverse Mathematics (CRM) is a spin-off from Harvey Friedman's `Reverse Mathematics' program in which the base theory is based on BISH.

In this talk, we introduce an interpretation of BISH in classical Nonstandard Analysis. The role of `proof' in this interpretation is played by the Transfer Principle of Nonstandard Analysis. The role of `algorithm' is played by `$\Omega$-invariance'. Intuitively, an object is $\Omega$-invariant if it does not depend on the *choice* of infinitesimal used in its definition. Using this new interpretation, we obtain many of the well-known results from CRM. In particular, all non-constructive principles (like LPO, LLPO, MP, etc.) are interpreted as Transfer Principles, which are not available in our (nonstandard) base theory.
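For instance, the limited principle of omniscience (LPO), rejected in BISH, asserts decidability of existential statements about binary sequences:

```latex
\mathrm{LPO}:\quad
\forall \alpha \in 2^{\mathbb{N}}\,
\bigl( \forall n\, \alpha(n) = 0 \;\vee\; \exists n\, \alpha(n) = 1 \bigr)
```

Under the interpretation sketched above, omniscience principles of this kind correspond to instances of Transfer, which the nonstandard base theory deliberately omits.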