Michael Frank (Stanford)

Title: The pervasive effects of pragmatics in children’s early logical language
Abstract: Logical terms (quantifiers, conjunction, disjunction, and negation) are famously tricky to understand; consider Wason's classic example, "No head injury is too trivial to ignore." Yet these words are also highly frequent, both in speech to children and in children's own productions. How are these terms acquired and used by children? In some cases (like "no"), children are generally quite good, even very early; in other cases (like "some" and "or"), children make surprising mistakes much later than we might expect. In this talk, I'll make the argument that children's use and understanding of logical language is deeply affected by pragmatics. Teasing apart the contributions of semantics and pragmatics to children's use of logical language can provide evidence both on the nature of children's early pragmatic competence and on the origins of logical thought.
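
To see the kind of pragmatic effect at issue in the "some" case: hearing "I ate some of the cookies", adults typically infer "not all", an inference that goes beyond the literal semantics of "some". Below is a minimal sketch in the style of Rational Speech Act models; the three-world domain and uniform prior are simplifications of mine, not the experiments from the talk.

    import numpy as np

    # Worlds: how many of the objects have the property.
    worlds = ["none", "some-but-not-all", "all"]
    utterances = ["none", "some", "all"]

    # Literal truth table: meaning[u][w] = 1 if utterance u is true in world w.
    # Note that "some" is semantically compatible with "all".
    meaning = np.array([[1, 0, 0],
                        [0, 1, 1],
                        [0, 0, 1]], dtype=float)

    prior = np.ones(3) / 3  # uniform prior over worlds (an assumption)

    # Literal listener: condition the prior on literal truth.
    L0 = meaning * prior
    L0 /= L0.sum(axis=1, keepdims=True)

    # Pragmatic speaker: prefers utterances a literal listener gets right.
    S1 = L0.T / L0.T.sum(axis=1, keepdims=True)

    # Pragmatic listener: Bayesian inversion of the speaker.
    L1 = S1.T * prior
    L1 /= L1.sum(axis=1, keepdims=True)

    print(dict(zip(worlds, L1[1].round(2))))
    # {'none': 0.0, 'some-but-not-all': 0.75, 'all': 0.25}

The pragmatic listener strengthens "some" toward "some but not all", while the literal listener leaves the "all" world live; one way to describe children's errors with "some" is that they interpret it closer to the literal profile.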

Michael Henry Tessler (Stanford)

Title: Generic language from pragmatic logic
Abstract: Drawing generalizations about categories in the world is difficult because categories themselves are unobservable. It's not surprising, then, that language provides a simple way to communicate these generalizations (so-called generic language). Though talking in generalizations (e.g. Dogs bark; Politicians make promises) is ubiquitous, it remains unclear what makes such statements true or false. In this talk, I will argue that the core meaning of a generic statement is simple but underspecified, and that general principles of pragmatic communication resolve the meaning in context. I formalize this idea in a probabilistic model of language understanding and find that the model accurately predicts human truth judgments of familiar generic statements. If the model is truly about communicating generalizations, however, then it should extend to other kinds of generalizations. Indeed, I'll show the same model explains the meaning of habitual sentences (e.g. Lawrence smokes cigarettes), which convey generalizations about events. This mathematical theory serves as a formal bridge between the generalizations in our heads and the language we use to describe them.
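
One way to make "simple but underspecified" concrete (a toy sketch with made-up priors, not the model from the talk): suppose "Ks F" literally says that the prevalence of F among Ks exceeds a threshold, with the threshold itself left unspecified. A listener can then update a prior over prevalence on hearing the generic, marginalizing over the unknown threshold.

    import numpy as np

    prevalence = np.linspace(0.01, 1.0, 100)   # possible rates of F among Ks
    thresholds = np.linspace(0.0, 0.99, 100)   # the underspecified cutoff

    # Hypothetical prior: most properties are rare within a category.
    prior = np.exp(-5 * prevalence)
    prior /= prior.sum()

    # P(generic is true | prevalence), marginalizing over a uniform threshold.
    likelihood = np.mean([(prevalence > t) for t in thresholds], axis=0)

    posterior = prior * likelihood
    posterior /= posterior.sum()

    print("prior mean prevalence:    ", round(float((prior * prevalence).sum()), 3))
    print("posterior mean prevalence:", round(float((posterior * prevalence).sum()), 3))

Hearing the generic shifts the listener's prevalence estimate upward by an amount that depends on the prior, which is how a single underspecified meaning could come out differently for, say, Dogs bark and Mosquitoes carry malaria.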

Falk Lieder (UC Berkeley)

Title: Bounded Rationality Revisited
Abstract: Early optimism that classic notions of rationality, like logic, probability, or expected utility theory, could provide a unifying explanation of human intelligence was shattered by the discovery of cognitive biases and by intractability results from computational complexity theory. These discoveries reinforced Herbert Simon's view that the boundedness of people's cognitive resources necessitates efficient heuristics. Yet the critical questions of how exactly people should think and decide given that their cognitive resources are limited, and of how human cognition compares to this ideal, have remained unanswered. To address these open questions, we leverage the theory of bounded optimality developed in artificial intelligence research to mathematize bounded rationality. We apply this approach to derive optimal cognitive strategies and evaluate their predictions against human performance. In addition to specifying which heuristic is boundedly optimal for a given problem, a complete theory of bounded rationality also has to specify how people should decide when to use which heuristic. To address this open problem, we leveraged rational metareasoning to develop a rational theory of human strategy selection and tested its predictions empirically. Overall, we found that human behavior might be consistent with the idea that people make rational use of their limited cognitive resources.
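
The core trade-off is easy to state (a deliberately simplified sketch with hypothetical numbers, not the derivations from the talk): among the strategies an agent could run, the boundedly optimal one maximizes expected payoff net of the cost of the computation it requires.

    # Hypothetical strategies: (name, expected payoff, cost of computation).
    strategies = [
        ("random choice", 0.50, 0.00),
        ("take-the-best heuristic", 0.80, 0.05),
        ("full expected-utility calculation", 0.95, 0.30),
    ]

    # Bounded optimality: maximize payoff net of the cost of thinking.
    best = max(strategies, key=lambda s: s[1] - s[2])
    print("boundedly optimal strategy:", best[0])  # the heuristic wins here

On these numbers the heuristic is boundedly optimal even though it is not the most accurate strategy; rational metareasoning addresses the further question of how an agent should estimate such costs and payoffs when selecting a strategy.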

Melissa Fusco (UC Berkeley, Columbia)

Title: Deontic Disjunction
Abstract: I propose a unified solution to two puzzles: Ross's puzzle (the apparent failure of `Ought phi' to entail `Ought (phi or psi)') and free choice permission (the apparent fact that `May (phi or psi)' entails both `May phi' and `May psi'). I begin with a case from the decision theory literature illustrating the phenomenon of act dependence, where what an agent ought to do depends on what she does. The notion of permissibility distilled from these cases forms the basis for my analysis of permission and obligation.
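
Schematically, writing \(O\) for `ought' and \(P\) for `may', the two puzzles pull in opposite directions relative to classical modal logic (a plain restatement of the puzzles, not notation from the talk):

    \[
    \text{Ross:}\qquad O\varphi \models O(\varphi \lor \psi) \ \text{classically, yet the entailment seems to fail;}
    \]
    \[
    \text{Free choice:}\qquad P(\varphi \lor \psi) \not\models P\varphi \land P\psi \ \text{classically, yet the entailment seems to hold.}
    \]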

Hanti Lin (UC Davis)

Title: Conditionals and Actions: A Case Study of Choosing between Logics
Abstract: Imagine two languages that differ from English only in indicative conditionals. Instead of ‘if’, one language contains ‘if*’ and the other contains ‘if**’. Those two connectives have very different meanings, so much so that ‘if*’ has a logic that contains Adams' (1975) logic of conditionals, while ‘if**’ has a logic that does not. One of the two languages might, or might not, be a notational variant of English---this is an empirical issue that will not concern us here. I will address a normative or evaluative issue: How should we choose between these two languages? Are there any advantages to using one of the two languages instead of the other? I will answer with a new theorem; it says, roughly, that the language with ‘if*’, which follows Adams' logic of conditionals, has the advantage of helping us guard against a sort of irrational decision-making, while the other language lacks this advantage.
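
For orientation: a key feature of Adams' logic is that it evaluates an indicative conditional by the corresponding conditional probability rather than by the probability of the material conditional. A toy computation, with a hypothetical joint distribution, shows how far the two can come apart:

    # Adams evaluates "if A then B" by P(B | A), not by the probability
    # of the material conditional P(not-A or B). Hypothetical joint
    # distribution over the truth values of (A, B):
    p = {
        (True, True): 0.01,
        (True, False): 0.09,
        (False, True): 0.45,
        (False, False): 0.45,
    }

    p_A = sum(v for (a, _), v in p.items() if a)
    p_conditional = p[(True, True)] / p_A                      # P(B | A)
    p_material = sum(v for (a, b), v in p.items() if not a or b)

    print("P(B | A) =", round(p_conditional, 2))   # 0.1: conditional poorly supported
    print("P(A -> B) =", round(p_material, 2))     # 0.91: material conditional highly probable

A language whose ‘if’ tracks the first quantity and one whose ‘if’ tracks the second can thus license very different bets, which is the sort of gap that makes the choice between such languages consequential for decision-making.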

Rachael Briggs (Stanford)

Title: Conditionals in Relevance Logic
Abstract: I explain and motivate the Routley-Meyer semantics for conditionals in relevance logic, which appeals to a three-place accessibility relation. I then discuss potential applications of this semantics for modeling reasoning about causation, obligation, and meaning.
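
Concretely, the Routley-Meyer clause evaluates a conditional A -> B at a point x by quantifying over pairs of points: A -> B holds at x iff for all y, z with R(x, y, z), if A holds at y then B holds at z. A minimal executable sketch (toy model, hypothetical accessibility relation):

    # Points and a hypothetical ternary accessibility relation R.
    R = {("w0", "w1", "w2"), ("w0", "w0", "w0")}

    A = {"w1"}   # points where A holds
    B = {"w2"}   # points where B holds

    def conditional_holds(x, antecedent, consequent):
        """A -> B holds at x iff for all y, z with R(x, y, z):
        if the antecedent holds at y, the consequent holds at z."""
        return all(z in consequent
                   for (x_, y, z) in R
                   if x_ == x and y in antecedent)

    print(conditional_holds("w0", A, B))        # True: A-points lead to B-points
    print(conditional_holds("w0", A, {"w0"}))   # False: w1 leads to w2, not w0

The third place in R is what gives the semantics room to invalidate the irrelevant conditionals that a binary accessibility relation would force on us.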

Jennifer Wang (Stanford)

Title: A Primitivist Theory of Modality
Abstract: The primitivist about modality says that there are irreducibly modal features of the world. I show that this does not require adopting the controversial view that modal semantics can only be given a non-realistic interpretation. On the view to be defended, the source of primitive modality is located at the level of properties rather than sentences, propositions, or states of affairs. In particular, incompatibilities between properties provide the basis for a systematic theory of de dicto modality. The view then handles de re modality by introducing a counterpart relation over properties.

John Perry (Stanford, UC Berkeley)

Title: The Great Detour
Abstract: If a=b, how can "F(a)" differ in cognitive significance from "F(b)"? How can one learn something from the first that one cannot learn from the second? In his Begriffsschrift, Frege considered the special case of 'a=a' and 'a=b'. There he adopted a special semantics for identity: in identity sentences, terms stand for themselves, and 'a=b' is true if the terms co-refer. But he saw, in "Function and Concept", that the problem generalizes, while the Begriffsschrift solution does not. So he developed his theory of Sinn and Bedeutung, which profoundly affected the philosophies of language and mind.

But it was all a mistake. The Begriffsschrift solution was on the right track. An improved version generalizes. We could have been spared Gedanken, indirect reference, truth-values as the reference of sentences, the slingshot, and other strange ideas. Frege put us on a great detour that took us to many exotic sights but leads to a dead end. Or so I shall argue.

Cameron Freer (Gamalon Labs)

Title: Symmetric probabilistic constructions of countable structures
Abstract: There has long been interest in the use of probabilities within classical logic, and in particular in the study of countable structures where relations and formulas are assigned probabilities instead of binary truth values; these are equivalent to probability measures on the space of structures with a fixed underlying set. Probabilistic constructions that are symmetric, i.e., that do not make use of the order of the underlying set, play an important role.

It is natural to ask which classical structures can arise, almost surely, via a symmetric probabilistic construction. Over the years, several examples, such as the Rado graph and the rational Urysohn space, have been shown to admit such constructions. In joint work with Ackerman and Patel, we characterize those structures that admit such a symmetric construction. We also address related questions, such as which structures admit a unique symmetric construction, and the complexity of such constructions.
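
For a concrete instance of such a construction: including each edge independently with probability 1/2 yields a distribution that is invariant under all permutations of the vertex set, and on a countably infinite vertex set the resulting graph is almost surely the Rado graph. A finite snippet of the idea:

    import random

    # Symmetric construction of a random graph: each edge appears
    # independently with probability 1/2. On a countably infinite vertex
    # set the result is almost surely the Rado graph; here we sample a
    # finite initial segment of that construction.
    random.seed(0)
    n = 8
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < 0.5}
    print(sorted(edges))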

Joint work with Nathanael Ackerman, Alex Kruckman, Aleksandra Kwiatkowska, Jaroslav Nešetřil, Rehana Patel, and Jan Reimann.

Maryanthe Malliaris (Chicago)

Title: Model theory and graph theory, via ultrapowers
Abstract: Progress in understanding saturation of ultrapowers has led to productive interactions of model theory and graph theory. The talk will discuss some common themes in this work and will motivate several open questions.

Dominic Hughes (UC Berkeley)

Title: First-order Proofs Without Syntax
Abstract: Proofs in first-order logic are traditionally syntactic, built inductively from symbolic rules. This talk reformulates classical first-order logic (predicate calculus) with proofs that are combinatorial rather than syntactic. A combinatorial proof is defined by graph-theoretic conditions that can be verified easily (in linear time).

To be accessible to a broad audience (logicians, philosophers, mathematicians, and computer scientists, including undergraduates), the lecture uses many colourful pictures. Technicalities (such as relationships with Gentzen's sharpened LK Hauptsatz and Herbrand's theorem) are deferred to the end. The work extends ‘Proofs Without Syntax’ [Annals of Mathematics, 2006], which treated the propositional case.

Joel David Hamkins (CUNY)

Title: Same structure, different truth
Abstract: To what extent does a structure determine its theory of truth? I shall discuss several surprising mathematical results illustrating senses in which it does not, for the satisfaction relation of first-order logic is less absolute than one might have expected. For example, two models of set theory can have exactly the same natural numbers and the same arithmetic structure \(\langle\mathbb{N},+,\cdot,0,1,<\rangle\), yet disagree on what is true in this structure: they have the same arithmetic, but different theories of arithmetic truth. Two models of set theory can have the same natural numbers and a computable linear order in common, yet disagree on whether it is a well-order. Two models of set theory can have the same natural numbers and the same reals, yet disagree on projective truth. Two models of set theory can have a rank initial segment of the universe \(\langle V_\delta,{\in}\rangle\) in common, yet disagree about whether it is a model of ZFC. These theorems and others can be proved with elementary classical model-theoretic methods, which I shall explain. On the basis of these observations, Ruizhi Yang (Fudan University, Shanghai) and I argue that the definiteness of the theory of truth for a structure, even in the case of arithmetic, cannot be seen as arising solely from the definiteness of the structure itself in which that truth resides, but rather is a higher-order ontological commitment. Commentary can be made at http://jdh.hamkins.org/same-structure-different-truths-stanford-csli-may-2016/.

James Walsh (UC Berkeley)

Title: Extension frames of axiomatic theories
Abstract: The consistent extensions of any first-order theory are naturally ordered by inclusion. We call such structures extension frames. In 1969, Kripke suggested a provability interpretation of modality based on extension frames. The details of Kripke's proposal are complicated and the resulting modal theory is somewhat pathological. We have recently attempted to simplify Kripke's approach without abandoning its spirit. We will discuss our first step in this direction: a classification of the modal logics of extension frames of axiomatic theories.

Burkhard C. Schipper (UC Davis)

Title: Self-confirming games: Unawareness, Discovery, and Equilibrium
Abstract: Equilibrium notions for games with unawareness in the literature (Halpern and Rego, 2012, 2014; Heifetz, Meier, and Schipper, 2013; Feinberg, 2012; Li, 2006; Grant and Quiggin, 2013) cannot be interpreted as steady states of a learning process, because players may discover novel actions during play. In this sense, many games with unawareness are "self-destroying": a player's representation of the game may change after playing it once. We define discovery processes in which at each state there is an extensive-form game with unawareness that, together with the players' play, determines the transition to a possibly different extensive-form game with unawareness in which players are now aware of actions they have previously discovered. A discovery process is rationalizable if players play extensive-form rationalizable strategies in each game with unawareness. We show that for any game with unawareness there is a rationalizable discovery process that leads to a self-confirming game with an extensive-form rationalizable self-confirming equilibrium. This notion of equilibrium can be interpreted as a steady state of a learning and discovery process.
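
A toy rendering of the discovery dynamic (hypothetical payoffs and discovery rule, far simpler than extensive-form games with unawareness): a player best-responds within her current awareness set, playing reveals new actions, and the process settles once nothing further is discovered.

    # Actions and (hypothetical) payoffs; the player starts unaware of "hidden".
    payoffs = {"left": 1, "right": 3, "hidden": 5}
    aware = {"left", "right"}

    def reveals(action):
        # Hypothetical discovery rule: playing "right" reveals "hidden".
        return {"hidden"} if action == "right" else set()

    # Iterate play until awareness is self-confirming (no new discoveries).
    while True:
        choice = max(aware, key=payoffs.get)   # best response within awareness
        new = reveals(choice) - aware
        if not new:
            break
        aware |= new

    print("steady-state choice:", choice, "| aware of:", sorted(aware))

The fixed point reached by the loop is the analogue of the self-confirming game: further play no longer changes the player's representation of what can be done.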

Paolo Turrini (Imperial)

Title: Backwards Induction with Limited Foresight
Abstract: For classical game theory, chess is an uninteresting game. It is a finite extensive game of perfect information that can (therefore) be solved by backwards induction, and that's the end of it.

For artificial intelligence - as well as for human beings - chess is a very interesting game. This is because, in practice, humans (and even supercomputers) are not able to correctly assess game positions and decide the best thing to do. In other words, they make mistakes.

What I present in this talk is a model of interactive decision-making in chess-like scenarios, where participants are not able to foresee the consequences of their decisions all the way up to the terminal nodes and need to make a judgement call to evaluate intermediate game positions.

On top of that, players can form beliefs about what the other players are able to foresee and how they evaluate it, and all higher-order variants thereof (beliefs about beliefs of others, beliefs about beliefs about beliefs of others, and so forth).

I will introduce and analyse a solution concept for these scenarios, a generalisation of classical backwards induction in which players' local decisions are a best response to the (higher-order) beliefs they hold about the other players' foresight and evaluation criteria. In other words, they play rationally against their opponents' believed weaknesses.

I will also show that the potentially unbounded chain of complex beliefs sustaining this solution concept can be computed using a PTIME algorithm.
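
The flavour of limited foresight can be conveyed by ordinary depth-limited search (a toy sketch with a hypothetical game tree and evaluation table, not the solution concept itself): the player searches only as far as her foresight allows and applies a judgement call, here a heuristic table, at the horizon.

    # Hypothetical two-player game tree; the root player maximizes,
    # the opponent minimizes. Heuristic values stand in for a judgement
    # call about intermediate positions.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    heuristic = {"a": 2, "b": 1, "a1": 0, "a2": 5, "b1": 4, "b2": 4}

    def value(node, depth, maximizing):
        children = tree.get(node, [])
        if depth == 0 or not children:
            return heuristic[node]          # judgement call at the horizon
        vals = [value(c, depth - 1, not maximizing) for c in children]
        return max(vals) if maximizing else min(vals)

    for foresight in (1, 2):
        best = max(tree["root"], key=lambda m: value(m, foresight - 1, False))
        print("foresight", foresight, "-> plays", best)

With foresight 1 the root player prefers "a" (heuristic 2 vs 1), but with foresight 2 the opponent's replies come into view and "b" is the better choice; mismatches of this kind are exactly what beliefs about an opponent's foresight can exploit.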

*This talk builds upon a line of research started with Davide Grossi.
**No deep game-theoretic background is required to understand what I will be saying. It does help, though, if you have played chess at least once in your life.

Rohit Parikh (CUNY)

Title: "An Epistemic Generalization of Rationalizability"
Abstract: Savage showed us how to infer an agent's subjective probabilities and utilities from the bets the agent accepts or rejects. But in a game-theoretic situation, an agent's beliefs are not just about the world but also about the probable actions of other agents, which will depend on their beliefs and utilities. Moreover, it is unlikely that agents know the precise subjective probabilities or cardinal utilities of other agents. An agent is more likely to know something about the preferences of other agents and something about their beliefs. In view of this, the agent is unlikely to have a precise best action which *we* can predict, but is more likely to have a set of "not so good" actions which (we know) the agent will not perform.

Ann may know that Bob prefers chocolate to vanilla to strawberry. She is unlikely to know whether Bob will prefer vanilla ice cream or a 50-50 chance of chocolate and strawberry. So Ann's actions and her beliefs need to be understood in the presence of such partial ignorance. We propose a theory that will let us decide when Ann is being irrational, based on our partial knowledge of her beliefs and preferences, and that will let us infer, assuming that Ann is rational, something about her beliefs and preferences from her actions.
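
The ice-cream example can be made concrete by sampling (an illustrative sketch with hypothetical cardinal utilities): fix the ordinal ranking chocolate > vanilla > strawberry and ask whether it settles the comparison between vanilla and the 50-50 lottery.

    import random

    # Sample cardinal utilities consistent with the ordinal ranking
    # chocolate > vanilla > strawberry, and record Bob's verdict on
    # vanilla vs. a 50-50 chocolate/strawberry lottery.
    random.seed(1)
    verdicts = set()
    for _ in range(1000):
        s, v, c = sorted(random.random() for _ in range(3))
        lottery = 0.5 * c + 0.5 * s
        verdicts.add("vanilla" if v > lottery else "lottery")
    print(verdicts)  # both verdicts arise: ordinal data underdetermine the choice

Both verdicts occur across utility functions consistent with the same ordinal ranking, so Ann can rule out only those actions that are bad under every such utility function; this is the kind of set-valued prediction the proposed generalization of rationalizability delivers.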

Our principal tool is a generalization of rational behavior in the context of *ordinal* utilities and partial knowledge of the game which the agents are playing.