Today is Day 4 of the colloquium. Yesterday was a day of rest, which for me involved a visit to the Byzantine and Christian Museum in the morning, a lazy afternoon spent in the shade restructuring the order of things in my ESSLLI course, a visit to the Herakleidon Museum, which featured an exhibit of Escher prints (the detail in some of the woodcuts was beyond belief), and then the obligatory visit to the Acropolis as it got slightly cooler. Photographic evidence is found over there.

Today features lots of philosophical logic and proof theory. I’ll post comments as we go, battery life and concentration permitting.

*First up*, Michael Sheard, on “A transactional approach to the logic of truth”. The topic was theories of truth, involving consistent reductions of the class of *T* biconditionals *T*&lt;*A*&gt; iff *A*. The liar paradox causes problems, so Michael gave a nice overview of the kinds of consistent theories of truth that weaken the assumptions to restore consistency. He argued for what he called a “transactional approach” to thinking about the issue, which – sensibly, in my view – focusses on why one might be interested in using the concept of truth: in communication between agents, and in evaluation of what those agents say. Another strength of the talk was his survey of the kinds of problems afflicting different truth theories (such as ω-inconsistency or non-conservativeness over the basic theory). As I actually *like* some of the theories which are prone to these phenomena, I need to think about this some more.

*Second*, Helmut Schwichtenberg. He gave an overview of his work on minimal logic for representing computational content. The idea is a straightforward one: it has long been known that λ-terms do a good job of encoding proofs in minimal implicational logic, and other terms can be used to encode the proof structure of conjunction, disjunction, quantifiers, etc. Helmut has been working on a system in which natural deduction in minimal logic can be used to represent proofs, terms in the system encode those proofs in a handy fashion, and the proofs in turn encode *programs*. (The fact that classical proofs can be embedded in proofs in minimal logic means that even classical proofs can construct programs too.) That much I was familiar with. The new and interesting material for me was Helmut’s technique for associating each formula *A* with a *type* τ(*A*), representing the *computational content* of *A*. The idea is that atomic mathematical statements (identities between ground terms, for example) have no computational content, but other logical constructions, such as existential or universal quantification, *introduce* computational content. We then keep track of computational content as we prove something, and this gives us a handle on the computational upshot of each statement. I did not get all of the details, but a simple google search dug up this very recent account of the technique.
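The proofs-as-terms idea underneath all this is worth seeing concretely. Here is a minimal sketch in Haskell (my illustration of the Curry–Howard correspondence, not Helmut’s system): a typed λ-term just *is* a proof of its type, read as a formula of minimal logic.

```haskell
-- Proofs in minimal implicational logic as typed lambda terms.
-- Reading -> as implication, each term below proves its type.

-- A proof of A -> (B -> A):
k :: a -> (b -> a)
k x _ = x

-- A proof of (A -> (B -> C)) -> ((A -> B) -> (A -> C)):
s :: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
s f g x = f x (g x)

-- Conjunction is a pair type: a proof of (A & B) -> A.
fstProof :: (a, b) -> a
fstProof (x, _) = x
```

The point of the “encoding programs” slogan is visible here: these proof terms are perfectly ordinary programs you can run.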

*Third*, Peter Aczel, on Constructive Set Theory. This was the first of two introductory talks on constructive set theory, and in particular on the system CZF of constructive ZF set theory. CZF differs from IZF, the other main system of intuitionistic set theory, in avoiding impredicativity: it has only weak versions of the separation axiom, and the powerset axiom is avoided completely in favour of conditions which generate sets of *functions*.

Avoiding the powerset axiom is important in non-classical set theories, because there are *very many* subsets of very simple sets. In particular, the singleton set 1 = {0} has very many subsets: it has one for each *truth value*, so much so that the power *class* of the singleton, {x | x is a subset of 1}, which we can call Ω, may well be very *big*. You can see this because it is straightforward to prove, for any statement φ, that there is a set {x : x = 0 and φ}; this set is a subset of 1, and it contains 0 *to the degree to which φ is true*. This is very cute, and it makes set theory on anything other than two-valued logic quite different and quite fun.
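The reasoning here can be set out explicitly; a sketch, as I understood the argument:

```latex
% For any statement \varphi, separation gives the set
[\varphi] \;=\; \{\, x : x = 0 \wedge \varphi \,\}
% Then [\varphi] \subseteq 1 = \{0\}, and membership tracks \varphi:
0 \in [\varphi] \;\leftrightarrow\; \varphi
% Classically every subset of 1 is either 0 (i.e. \emptyset) or 1 itself,
% so \Omega has just two elements. Non-classically, distinct "degrees of
% truth" of \varphi yield distinct subsets of 1, and \Omega may be very
% big indeed.
```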

This means that it’s easier to work with sets of functions, rather than sets of subsets, because the set of functions from a set *a* to the two-element set {0,1} is much more manageable than the power class *P*(*a*).
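To illustrate the contrast in classical, two-valued terms (the names here are my own, purely for illustration): a function from *a* to a two-element type picks out a single decidable subset, which is why function sets can stand proxy for the power set when the logic is classical — and why, in a many-valued setting, they fall short of the full power class.

```haskell
-- A characteristic function a -> Bool determines one (decidable)
-- subset of a: the set of elements it maps to True.
type Subset a = a -> Bool

-- The subset of even integers, as a characteristic function:
evens :: Subset Int
evens n = n `mod` 2 == 0

-- Membership is just function application:
member :: a -> Subset a -> Bool
member x s = s x
```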

Peter also gave a nice sketch of the many different kinds of constructive mathematics there are, and showed how his own work in constructive set theory fits in. His aim is to provide a set-theoretical vocabulary which is, on the one hand, both motivated on the basis of Martin-Löf type theory and extensional (hence the talk of *sets* and not *types*), and, on the other, applicable to the bulk of contemporary mathematics. It’s a laudable goal.

It’s a pity that I won’t hear tomorrow’s class when Peter covers inductive and coinductive definitions (I *really* wanted to hear that) and constructive topology. I guess I’ll have to read this set of notes from 2000.

The *last* two talks I managed to get to were by *Sara Negri* and *Hannes Leitgeb*, and they were a great way to end the conference. In the special session on Philosophical Logic organised by Aldo Antonelli, both presented interesting new developments on topics I’m actively working on.

First up, *Sara*. She presented a uniform way to add labels and an explicit accessibility relation to G3c – the nice sequent calculus for classical logic in which the structural rules for contraction and weakening are not explicit but are admissible – in order to treat a whole host of normal modal logics, and for good measure, intermediate and substructural logics as well. The different systems are given by different rules governing the accessibility relation, and the system has nice proof-theoretical properties no matter what accessibility conditions you have. In other words, it has exactly the right kind of separation between connective rules on the one hand and generalised ‘structural’ rules on the other (if we admit, for the moment, the simple rules governing statements of the accessibility relation under the ‘structural’ heading – I have a reason for doing this, as you might see later). The trickiest bit of the paper was the treatment of *provability* logics, which have been hard to treat in proof calculi. Sara manages to get sequent systems for *these* logics too, by changing the rules for necessity. I’ll have to read the paper in detail to see how this works out. Thankfully, the paper will be out in the *Journal of Philosophical Logic* quite soon.
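For flavour, the labelled rules for necessity in this style look something like this (as I recall them from the talk – check the paper for the official formulation), with labels x, y and an explicit accessibility relation R:

```latex
% Right rule for \Box (y fresh: it must not occur in the conclusion):
\frac{xRy,\ \Gamma \Rightarrow \Delta,\ y{:}A}
     {\Gamma \Rightarrow \Delta,\ x{:}\Box A}
\qquad
% Left rule for \Box (the principal formula is repeated, which is what
% keeps contraction admissible):
\frac{y{:}A,\ xRy,\ x{:}\Box A,\ \Gamma \Rightarrow \Delta}
     {xRy,\ x{:}\Box A,\ \Gamma \Rightarrow \Delta}
% Frame conditions, e.g. transitivity, become rules on relational atoms
% alone, leaving the connective rules untouched:
\frac{xRz,\ xRy,\ yRz,\ \Gamma \Rightarrow \Delta}
     {xRy,\ yRz,\ \Gamma \Rightarrow \Delta}
```

This is what I meant by the separation of connective rules from generalised ‘structural’ rules: varying the relational rules changes the logic without disturbing the rules for the connectives.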

Best for me is that I’m pretty sure that I have a direct translation between the labelled sequent formulation that Sara uses and the non-labelled structural formulation of a calculus that I’m working on via proofnets. It *seems* very natural when you see how it works. I’m still figuring out the details, but once I have something concrete, I’ll write it up and explain the translation.

Second, *Hannes*. His paper continued the theme of formal theories of truth started by Mike Sheard in his morning talk. Hannes took *necessity* and showed what you need to do to treat it as a predicate of sentences, in the same way that we treat truth. Taking work from his very recent *JPL* paper “What Truth Depends On,” he showed how to extend the consistent treatment of truth to one of necessity (or other normal modal operators). This is not straightforward, since a well-known result due to Montague shows that liar-like paradoxes follow from the weak principles of necessity elimination (*N*φ → φ) and necessitation (if φ is provable, then so is *N*φ), given that you can form fixed-point sentences like the liar. (That’s the down-side of treating necessity as a predicate rather than an operator. The upside is the expressive power: by treating necessity as a predicate you can quantify into the sentential place, and do lots of nice things.)
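Montague’s argument is short enough to sketch (my reconstruction, not anything from the slides), writing ⌜·⌝ for Gödel quotation:

```latex
% The diagonal lemma supplies a sentence \lambda such that:
\lambda \leftrightarrow \neg N\ulcorner\lambda\urcorner
    % (1) diagonal lemma
N\ulcorner\lambda\urcorner \to \lambda
    % (2) necessity elimination
N\ulcorner\lambda\urcorner \to \neg N\ulcorner\lambda\urcorner
    % (3) from (1) and (2)
\neg N\ulcorner\lambda\urcorner
    % (4) from (3)
\lambda
    % (5) from (1) and (4)
N\ulcorner\lambda\urcorner
    % (6) necessitation, since (1)-(5) constitute a proof of \lambda
\bot
    % (7) from (4) and (6): contradiction
```

So something has to give, and Hannes’s answer is to restrict the offending rules rather than abandon the predicate treatment.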

So, Hannes explained what you should do to recover some decent modal logic in the context in which necessity is treated as a predicate and not an operator. The trick is to see that you need to restrict the application of the offending rules (necessity elimination and necessitation) to a class of formulas. Here Hannes has a nice explanation of what it is to be self-referential, and what it is to be ungrounded (and these clearly differ – there can be ungrounded sentences that aren’t self-referential – this is discussed in “What Truth Depends On”). Then a nice preservation construction is used to show that you can model the operation of necessity on the *grounded* sentences in such a way as to get any normal modal logic you want, restricted to grounded sentences. The construction is very nice, and Hannes gives an *extremely* clear talk.

At this stage of the proceedings, my brain had completely melted, so I went out and had a drink with Aldo, and then we had dinner once the rest of the crew (including Hannes, Sara, Jan, Richard and Giovanna) finished the late session. I managed to reduce my *Trotsky number* (the length of a chain of the form *x* shook hands with *y*, who shook hands with …, who shook hands with *Trotsky*) down to three *via* Giovanna, who once shared an office with Jean van Heijenoort, so now my Trotsky number is lower than my Erdős number.

That’s it for my conference blogging. The conference continues for Day 5, but I’m flying off to rejoin family. I’m passing the baton to Richard who has promised to blog some of Day 5’s proceedings. Posts here will resume their usual infrequency, and I’ll enjoy some break before the onslaught in Edinburgh.


I’m *Greg Restall*, and this is my personal website. I am the Shelby Cullom Davis Professor of Philosophy at the University of St Andrews.

