Sean Walsh (UCLA)

Title: Probabilism in Infinite Dimensions
The aim of this talk is to survey what happens to the traditional Dutch book arguments and accuracy arguments for probabilism in the setting where the Boolean algebra of propositions is infinite. We’ll look at the sensitivity of these results to features of the underlying Boolean algebra (its cardinality, its completeness, whether it is a measure algebra). And in the case of accuracy, we’ll look at different natural constraints, such as permutation invariance and compactness of the set of valuations.
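As background, the classical finite-dimensional Dutch book construction can be illustrated with a two-proposition toy example (my own sketch, not material from the talk): if an agent's credences in A and ¬A sum to more than 1, buying a unit bet on each at those prices yields a sure loss.

```python
def dutch_book_payoffs(c_A, c_not_A, stake=1.0):
    """Agent buys a bet on A at price c_A * stake and a bet on
    not-A at price c_not_A * stake; each pays `stake` if it wins."""
    total_price = (c_A + c_not_A) * stake
    # Exactly one of A, not-A is true, so the agent collects `stake`
    # in either world; the net payoff is world-independent.
    net = stake - total_price
    return net, net  # (payoff if A, payoff if not-A)
```

With non-additive credences such as c(A) = c(¬A) = 0.6, the agent loses 0.2 per unit stake no matter what; with probabilistic credences the book breaks even.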

Pierre Simon (UC Berkeley)

Title: Recent progress on geometric (NIP) theories and applications
Model theory classifies structures according to their combinatorial complexity. For example, the field of real numbers is combinatorially much simpler than the field of rational numbers. Intuitively, the former belongs to geometry and the latter to arithmetic. One way to make this distinction precise is provided by the notion of geometric, or NIP, structure. Though defined more than 40 years ago, this class has been studied mainly in the last 10-15 years. More recently, it has found applications to other areas of mathematics, such as non-archimedean geometry, combinatorics and the study of homogeneous structures. After introducing NIP, I will briefly present some of those applications.

Maryanthe Malliaris (University of Chicago)

Title: A new simple theory
The talk will outline a recent construction of a new simple theory due to Malliaris and Shelah, and explain some consequences for the study of complexity in mathematical structures.

Sven Neth (Berkeley)

Title: Generalizing Ramsey’s method
Ramsey (1926) shows how we can use the assumption that an agent maximizes expected utility to infer their utility function and subjective probability function from their observable betting behavior. This allows us to ‘reverse engineer’ an agent’s utilities and probabilities from their actions. However, the assumption of expected utility maximization restricts the scope of Ramsey’s method.

We show how to generalize Ramsey’s method to agents who maximize risk-weighted expected utility (Buchak 2013). This generalization widens the scope of Ramsey’s method and is a step toward making it more applicable to ‘real-world agents’. Furthermore, we argue that it sheds light on the nature of subjective probability.
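To illustrate the expected-utility case of this reverse engineering, here is a minimal sketch (an illustration of mine, not the authors' formalism): if an agent with known utilities is indifferent between a sure outcome and a gamble, expected utility maximization pins down the subjective probability.

```python
def infer_probability(u_win, u_lose, u_sure):
    """Given an indifference point, solve
        p * u_win + (1 - p) * u_lose = u_sure
    for p, the agent's subjective probability of winning."""
    return (u_sure - u_lose) / (u_win - u_lose)
```

For example, an agent indifferent between a sure utility of 2.5 and a gamble worth 10 if she wins and 0 otherwise reveals a subjective probability of 0.25. Under risk-weighted expected utility, the same indifference point instead reveals the risk-weighted quantity r(p), which is why the generalization requires extra observations to disentangle r and p.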

Reid Dale (Berkeley)

Title: Is there a really good definition of mass?
Guided by a desire to eliminate language that refers to unobservable structure from mechanics, Ernst Mach proposed a definition of mass in terms of more directly observable data. A great deal of literature surrounds the question of whether this proposed definition accomplishes its stated goal, or even whether it constitutes a definition. In this talk we aim to bring clarity to this debate by using methods from model theory and from modal logic to classify, reconstruct, and evaluate these arguments. In particular, we exhibit a general construction of first-order modal frames for appropriately presented scientific theories with epistemic constraints. These frames allow us to characterize which properties are “modally definable” in the sense of Bressan.

Moshe Vardi (Rice)

Title: A Logical Revolution
Mathematical logic was developed in an effort to provide formal foundations for mathematics. In this quest, which ultimately failed, logic begat computer science, yielding both computers and theoretical computer science. But then logic turned out to be a disappointment as foundations for computer science, as almost all decision problems in logic are either unsolvable or intractable. Starting from the mid 1970s, however, there has been a quiet revolution in logic in computer science, and problems that are theoretically undecidable or intractable were shown to be quite feasible in practice. This talk describes the rise, fall, and rise of logic in computer science, covering several modern applications of logic to computing, including databases, hardware design, and software engineering.

Carlos Areces (Stanford / Universidad Nacional de Córdoba)

Title: The Definability Problem
Arguably, any attempt to provide a logic L with a formal semantics starts with the definition of a function that, given a suitable structure A for L and a formula φ in L, returns the extension of φ in A. Usually, this extension is a set of tuples built from elements in A. These extensions, also called definable sets, are the elements that will be referred to by the formulas of L in a given structure, and in that sense define the expressivity of L, as the definable sets of A are the only objects that L can “see” and “manipulate”. For that reason, definable sets are one of the central objects studied by Model Theory. It is usually interesting to investigate, given a logic L, what the definable sets of L over a given structure A are, or, more concretely, whether a particular set of tuples is a definable set of L over A. This is what I will call the Definability Problem for L over A. In this talk I will discuss the definability problem for a number of logics (both modal and classical), focusing mainly on complexity results and algorithms that can be used to effectively solve the problem.

Stephen Bach (Stanford)

Title: Scaling Up Logical Reasoning in Statistical Machine Learning
In this talk I’ll share some of the latest advances in incorporating logical reasoning into statistical machine learning. In particular, the field of statistical relational learning seeks to learn statistical models of objects connected with richly structured relationships that are often best described using logic-like languages. I’ll describe how one such language, probabilistic soft logic (http://psl.linqs.org), scales up logical reasoning based on a new equivalence result among seemingly distinct convex relaxation techniques for combinatorial optimization. I’ll also describe how this approach connects to fuzzy logic, enabling large-scale probabilistic reasoning over both discrete and continuous data.
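For concreteness, the fuzzy-logic relaxation PSL builds on uses the Łukasiewicz connectives, under which truth values range over [0, 1]; the sketch below (with illustrative helper names of my own) shows these connectives and the resulting distance-to-satisfaction of a rule "body → head".

```python
def l_and(a, b):
    # Łukasiewicz t-norm: relaxed conjunction over [0, 1]
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    # Łukasiewicz t-conorm: relaxed disjunction
    return min(1.0, a + b)

def l_not(a):
    # Łukasiewicz negation
    return 1.0 - a

def rule_distance(body, head):
    # Distance to satisfaction of the rule body -> head:
    # zero when the head is at least as true as the body,
    # and growing linearly in the violation otherwise.
    return max(0.0, body - head)
```

On Boolean inputs (0 or 1) these operators agree with classical logic; in between, minimizing the total rule distance becomes a convex optimization problem, which is what makes large-scale inference tractable.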

Erik-Jan van der Linden (ProcessGold)

Title: Process, Language
Process Mining is a novel technique for automatically deriving process models from event logs. We introduce Process Mining and briefly discuss its possible application to Language Processing.

We present the history of our endeavor, in particular where this concerns industry-university interaction.

Johan van Benthem (Amsterdam, Stanford, Tsinghua)

Title: Interfacing graph games and logic design
Graph games are used in computational logic, argumentation theory, and social networks. In the standard scenario, players travel along existing links; in more recent scenarios, players can change the graph. We explore current work on the match between logics and graph games, in between game logics and logic games.

Dominik Klein (Bayreuth)

Title: Worlds far apart: From Kripke Models to Dynamical Systems

In the first part of this talk, we show how to define distances between pointed Kripke models. These distances can be tailored to the modeler’s needs, putting much weight on propositions that are important for the purpose at hand and discounting those that are not. We then explore the resulting topological spaces and characterize compactness and when different metrics lead to the same topology.

In the second part, we show two applications of the metrics defined. First, our metrics allow us to classify which sequences of Kripke models converge in the limit. With this in hand, we can classify when a social protocol reaches its aim, and whether this aim is reached in finite time, in the limit, or not at all. As a second application, we show that product updates with postconditions are continuous in the topology defined. Hence, the space of Kripke models together with iterated product updates forms a discrete-time dynamical system. This is joint work with Rasmus Rendsvig.
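One natural way to realize such a tailored metric (a sketch of the general idea with hypothetical helper names; the talk's exact definitions may differ) is to weight disagreements between two pointed models over a chosen set of formulas, with heavier weights on the propositions the modeler cares about.

```python
def kripke_distance(truth1, truth2, weights):
    """truth1, truth2: dicts mapping formula names to their truth value
    (True/False) at the designated point of each pointed model.
    weights: nonnegative weight per formula; a heavy weight makes
    disagreement on that formula count more toward the distance."""
    return sum(w * abs(int(truth1[f]) - int(truth2[f]))
               for f, w in weights.items())
```

With summable weights over a countable enumeration of formulas, distances like this stay finite on infinite formula sets, and varying the weight profile is exactly what lets different purposes induce different topologies.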

Yunqi Xue (CUNY)

Title: Modeling and Simulating Social Influence Rules in Different Network Structures
We are interested in revisable and actionable social knowledge/belief that leads to large-scale group action. Instead of centralized coordination, we focus on a bottom-up approach. We explore multiple methods of belief revision in social networks. Such belief revision in groups represents, to some degree, social influence and power. We consider influence both from friends and from experts.

We define an intuitive concept of the Expected Influence of a group. When different influence sources suggest conflicting actions, agents can make strategic decisions by analyzing the expected influence of different subgroups. We then show some properties of expected influence in different network structures. We also simulate the strategic influence emerging in clustered-ring and small-world networks, which represent many real-world networks.

The simulation results reflect our theoretical predictions in two ways. First, when the network is a clustered ring, multiple social powers tend to survive. Second, when the expert is one of the hubs in a small-world network, her social power is invincible.

Robert Bassett (Stanford)

Title: Indiscernibility and relation algebra

Aleks Knoks (Maryland)

Title: Conciliatory Reasoning and the Problem of Self-Defeat
You’ve pondered some complicated question for a long time, finally arriving at a well-reasoned view on the matter. Then you learn that a colleague, who is at least as informed and gifted as you are, holds a view that directly opposes yours. Should this fact make you at least a little less confident that your take on the issue is correct? According to conciliatory views on disagreement, it should indeed. Notwithstanding their intuitive appeal, conciliatory views are said to run into an insuperable problem when applied to themselves, or when attempting to answer the question of what one should do in the context of disagreement about disagreement. The goal of my talk is to get clearer on this problem and to go some way towards solving it. Drawing on work from Logical Artificial Intelligence, I define an ideal conciliatory reasoner: a simple and mathematically precise model that can be used to get traction on the question of what the correct conciliatory response is to any given situation. I use the reasoner to formulate the problem in a perspicuous way, to resolve one part of it, and to assess the solutions that have been proposed in the literature. My overall conclusion is that the problem is indeed genuine and that any conciliationist intent on sticking to her view has but one route of retreat, a route that marks a departure from the orthodox perspective on epistemic rationality.

Patrick Grim (SUNY Stony Brook)

Title: Paradigm Shifts as Cascades on Conceptual Networks

This is a work in progress that uses computational simulation in the general spirit of dynamic logic to link network theory with classical philosophy of science. Popper’s theoretical frameworks and Kuhn’s paradigms can be modelled as networks of linked propositions. Popperian falsification and Kuhnian paradigm shifts then appear as information cascades across conceptual networks. The results show dramatic differences in the dynamics of contrasting approaches and offer new prospects for understanding scientific change in terms of self-organized criticality. (Authors: Patrick Grim, Joshua Kavner, Lloyd Shatkin, Manjari Trivedi & Tianji Cong.)

Konstantin Genin (CMU)

Title: Simplicity and Scientific Progress
A major goal of twentieth-century philosophy of science was to show how science could make progress toward the truth even if, at any moment, our best theories are false. To that end, Popper [1962] and others tried to develop a theory of truthlikeness, hoping to prove that theories get closer to the truth over time. That program encountered several notable setbacks. I propose that the locus of investigation be shifted to scientific methods, rather than scientific theories. Say that a method is progressive if, no matter which theory is true, the objective chance that the method outputs the true theory is strictly increasing with sample size. In other words: the more data the scientist collects, the more likely their method is to output the true theory. Although progressiveness is not always feasible, it should be our regulative ideal. Say that a method is α-progressive if, no matter which theory is true, the chance that it outputs the true theory never decreases by more than α as the sample size grows. This property ensures that collecting more data cannot set your method back too badly. Surprisingly, many standard statistical methods fail to satisfy this weak desideratum. In my dissertation, I prove that, for many problems, there exists an α-progressive method for every α > 0. Furthermore, every α-progressive method must obey a probabilistic version of Ockham’s razor.
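In symbols (my rendering of the definitions stated in words above): writing M_n for the method's output on a sample of size n and Pr_{T,n} for the chance distribution over samples when theory T is true, the two properties read:

```latex
% Progressive: the chance of outputting the true theory strictly increases.
\Pr\nolimits_{T,\,n+1}\!\left(M_{n+1} = T\right) \;>\; \Pr\nolimits_{T,\,n}\!\left(M_{n} = T\right)
\quad \text{for every theory } T \text{ and every } n.

% \alpha-progressive: that chance never drops by more than \alpha.
\Pr\nolimits_{T,\,n'}\!\left(M_{n'} = T\right) \;\ge\; \Pr\nolimits_{T,\,n}\!\left(M_{n} = T\right) - \alpha
\quad \text{for every } T \text{ and all } n' > n.
```

Note that progressiveness is the α → 0 limit of the second condition together with monotonicity, which is why α-progressiveness for every α > 0 is the natural feasible surrogate.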

Becky Morris (Stanford)

Title: Pólya, Proofs and Planning Agency
Pólya (1949) gave an example of a proof that is perfectly correct, yet deeply unsatisfying, due to the existence of a “deus ex machina” step. He suggested that deus ex machina steps are problematic because readers cannot grasp their “appropriateness”, i.e., they cannot grasp how such steps are “connected with the purpose” or how they “bring us nearer to the goal” (Pólya 1949, 685). His analysis, however, stopped there. In this talk, I will deepen Pólya’s analysis by investigating the proving activity that corresponds to a proof, that is, by putting the proving agent back into the picture. In particular, I will show how Bratman’s theory of planning agency can be used to provide a precise account of how a step may be “connected with the purpose” or “bring us nearer to the goal” (Pólya 1949, 685). (This talk is joint work with Yacin Hamami.)

Eric Swanson (Michigan)

Title: Three Kinds of Indeterminacy in Normative Modals
This paper develops and defends an approach to normative modals that makes room for three underappreciated kinds of normative indeterminacy. According to my approach, normative gaps and two kinds of normative plurality can cause presuppositions conveyed by normative modals to go unsatisfied. When the relevant presuppositions are not accommodated, indeterminacy results.

Section 1 provides a taxonomy of the kinds of indeterminacy I discuss here, and draws connections to the substantial bodies of work in ethics, metaethics, and philosophy of law according to which normative indeterminacy is common. Section 2 considers canonical approaches to normative modals, including Kratzer / Lewis / Veltman style ordering and premise semantics, dynamic semantics, and default logic. I argue that all these approaches are committed to more normative determinacy than is warranted. Section 3 presents my ‘best node’ approach to normative modals, including both strong and weak necessity modals, and explains how the best node approach makes room for the kinds of normative indeterminacy discussed in Section 1. Section 4 extends the best node approach to conditionals, contrasts the resulting extension with some similar approaches in the literature, and offers a principled explanation of negative polarity item (NPI) licensing in the antecedents of conditionals.