While I was busy writing my most recent paper, “Proof Terms for Classical Derivations”, I heard that Raymond Smullyan had died at the age of 97. I posted a tweet with a photo of a page from the draft of the paper I was writing at the time, expressing loss at hearing of his death and gratitude for his life.

There are many reasons to love Professor Smullyan. I learned combinatory logic from his delightful puzzle book *To Mock a Mockingbird*, and he was famous for many more puzzle books like it. He was not only bright and sharp but also warmly humane. The focus of my gratitude, however, was something else. In my tweet, I hinted at one reason why I’m especially grateful for Smullyan’s genius: his deep understanding of proof theory. I am convinced that his analysis of inference rules in the tableaux system for classical logic rewards repeated reflection. (See his *First-Order Logic*, Chapter 2, Section 1 for details.) Here I’ll try to explain why it is so important and insightful.

> In memory of Raymond Smullyan (1919-2017), with appreciation, fondness, and a sense of loss. pic.twitter.com/g5e54e0eo6
>
> — Greg Restall (@consequently) February 9, 2017

Step back a moment and think about *proof theory*, that branch of logic which concentrates—unlike model theory—on *positive* definitions of the core logical notions of validity, inconsistency, and so on. An argument is valid if and only if *there is some* proof from the premises to the conclusion. A set of sentences is inconsistent if and only if *there is some* refutation of (i.e., proof of a contradiction from) that set of sentences. By contrast, model-theoretic approaches define those notions negatively. An argument is valid if and only if *there is no* model satisfying the premises but failing to satisfy the conclusion; a set of sentences is inconsistent if and only if *there is no* model satisfying all of them. For proof theory to be precise, we need to know what counts as a proof. In the typical accounts of proof (whether natural deduction, Gentzen’s sequent calculus, or tableaux), there are different rules for each logical connective or quantifier. In well-behaved proof systems, there tend to be two rules for each connective, explaining what you can deduce *from* (for example) a conjunction, and how you can make a deduction *to* a conjunction. The same goes for a conditional, a disjunction, a negation, a universally quantified statement, and so on.
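The model-theoretic (negative) definitions are easy to sketch in code for the propositional case. Here is a minimal brute-force illustration in Python; the tuple encoding of formulas and the function names are my own, invented just for this sketch:

```python
from itertools import product

# Formulas as nested tuples (an encoding invented for this sketch):
# ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B)

def eval_formula(f, v):
    """Evaluate formula f under valuation v, a dict from atom names to booleans."""
    tag = f[0]
    if tag == 'atom':
        return v[f[1]]
    if tag == 'not':
        return not eval_formula(f[1], v)
    if tag == 'and':
        return eval_formula(f[1], v) and eval_formula(f[2], v)
    if tag == 'or':
        return eval_formula(f[1], v) or eval_formula(f[2], v)
    if tag == 'imp':
        return (not eval_formula(f[1], v)) or eval_formula(f[2], v)
    raise ValueError(f"unknown connective: {tag}")

def atoms(f):
    """The set of atom names occurring in f."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def valid(premises, conclusion):
    """Model-theoretically valid: NO valuation satisfies every premise
    while failing the conclusion."""
    letters = sorted(set().union(atoms(conclusion), *map(atoms, premises)))
    for bits in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, bits))
        if all(eval_formula(p, v) for p in premises) and not eval_formula(conclusion, v):
            return False        # found a countermodel
    return True                 # no countermodel exists

p, q = ('atom', 'p'), ('atom', 'q')
print(valid([p, ('imp', p, q)], q))   # modus ponens: True
print(valid([('or', p, q)], p))       # False: p false, q true is a countermodel
```

Notice that the search returns `False` the moment it finds a countermodel: validity here is the *absence* of a countermodel, not the presence of a proof.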

That makes for a *lot* of different rules.

A *proof*, then, is a structured collection of statements, linked together in ways specified by the rules. You typically demonstrate things *about* proofs by showing that the feature you want to prove holds for *basic* proofs (the smallest possible cases), and then showing that if the property holds for a proof, it also holds for any proof you can make out of that one by extending it with a new inference step. If you have \(9\) different rules, then there are \(9\) different cases to check. Worse than that, if you were *mad enough* to try to prove something about what happens when you rearrange proofs by swapping inference steps around, then welcome to the world of combinatorial explosion of cases. If you have \(9\) different kinds of rules, then there are \(9 \times 9 = 81\) different cases to consider. There’s something inherently unsatisfying about having to consider \(81\) different cases in a proof. You have the nagging feeling that you’re not looking at things at the right level of generality. It is no wonder that the insightful and influential proof theorist Jean-Yves Girard complained:

> One can see … technical limitations in current proof-theory: The lack of modularity: in general, neighbouring problems can be attacked by neighbouring methods; but it is only exceptionally that one of the problems will be a corollary of the other … Most of the time, a completely new proof will be necessary (but without any new idea). This renders work in the domain quite long and tedious. For instance, if we prove a cut-elimination theorem for a certain system of rules, and then consider a new system including just a new pair of rules, then it is necessary to make a complete new proof from the beginning. Of course 90% of the two proofs will be identical, but it is rather shocking not to have a reasonable «modular» approach to such a question: a main theorem, to which one could add various «modules» corresponding to various directions. Maybe this is inherent in the subject; one may hope that this only reflects the rather low level of our conceptualization!
>
> — *Proof Theory and Logical Complexity*, pages 15 and 16.

In the case of permutations of rules, the usual proof of a theorem like this has \(n\times n\) cases when your proof system has \(n\) different inference rules. And if you decide to extend your result to a proof system with another \(m\) rules, you not only need to prove the fact all over again for your new rules, you also need to add, one by one, the \(n\times m\) cases of interaction between the old rules and the new ones. Ugh.
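To make the arithmetic vivid, here is a toy tally in Python; the rule names are invented placeholders standing in for the nine rules discussed above:

```python
# Nine hypothetical inference rules (placeholder names: an introduction and
# elimination rule per connective, plus one extra), standing in for the
# rules of some proof system.
OLD_RULES = ['AndI', 'AndE', 'OrI', 'OrE', 'ImpI', 'ImpE', 'NotI', 'NotE', 'EFQ']

# An inductive proof about proofs needs one case per rule:
print(len(OLD_RULES))                                  # 9 cases

# A proof about permuting adjacent steps needs one case per ordered pair:
pairs = [(r, s) for r in OLD_RULES for s in OLD_RULES]
print(len(pairs))                                      # 81 cases

# Extending the system with m new rules adds n x m interaction cases
# between the old rules and the new, all to be checked one by one:
NEW_RULES = ['BoxI', 'BoxE']                           # m = 2, hypothetical
print(len(OLD_RULES) * len(NEW_RULES))                 # 18 extra cases
```

Nothing in this tally is shared between cases; each new rule simply multiplies the bookkeeping, which is precisely Girard’s complaint about the lack of modularity.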

That’s where Smullyan’s insight comes in. He divided the rules of his tableaux system for classical propositional logic into two kinds: the \(\alpha\) (linear) rules, which break a formula down into components that are added to the same branch, and the \(\beta\) (branching) rules, which split the branch into two alternatives, one for each component. It turns out that you can prove a great many things about rules operating at this level of generality. Many features of rules are shared by *all* \(\alpha\) rules or *all* \(\beta\) rules. And in my paper I was pleased to see that the \(81\) different cases of permutations I had to consider collapsed into \(3\): swapping an \(\alpha\) step around an \(\alpha\) step; a \(\beta\) around a \(\beta\); and an \(\alpha\) around a \(\beta\) (and back). Instead of writing a paper where I considered \(n\) of the \(81\) cases and left the rest to the reader, using Smullyan’s insight I could show that any rules of the two general shapes can be permuted using the general schemas I formulate. Every case is covered. What’s more, if you extend the proof system with *other* rules, then provided they are \(\alpha\) or \(\beta\) rules, the results still hold. It’s a much better way to do proof theory: a *modular* proof of a theorem, in just the way Girard hoped for.
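To see the payoff concretely, here is a toy tableau prover in Python along these general lines. The classification table follows Smullyan’s uniform notation, but the formula encoding and function names are my own invention, and this is only a sketch of the idea, not his system. The point: once each connective is classified, the prover itself needs just two cases, one per rule shape.

```python
# Toy tableau prover using the alpha/beta classification.
# Formulas are nested tuples (an encoding invented for this sketch):
# ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B)

def classify(f):
    """Return ('alpha', components) for linear rules, ('beta', components)
    for branching rules, or None when f is a literal."""
    tag = f[0]
    if tag == 'not':
        g = f[1]
        if g[0] == 'not':                        # double negation: alpha
            return ('alpha', [g[1]])
        if g[0] == 'and':                        # not-(A and B): beta
            return ('beta', [('not', g[1]), ('not', g[2])])
        if g[0] == 'or':                         # not-(A or B): alpha
            return ('alpha', [('not', g[1]), ('not', g[2])])
        if g[0] == 'imp':                        # not-(A imp B): alpha
            return ('alpha', [g[1], ('not', g[2])])
        return None                              # negated atom: literal
    if tag == 'and':
        return ('alpha', [f[1], f[2]])
    if tag == 'or':
        return ('beta', [f[1], f[2]])
    if tag == 'imp':
        return ('beta', [('not', f[1]), f[2]])
    return None                                  # atom: literal

def closes(branch):
    """True iff every branch of the tableau starting here closes.
    Note: only TWO cases in the prover, one per rule shape."""
    for f in branch:
        c = classify(f)
        if c is None:
            continue
        rest = [g for g in branch if g != f]
        kind, comps = c
        if kind == 'alpha':                      # linear: extend the branch
            return closes(rest + comps)
        else:                                    # branching: both must close
            return all(closes(rest + [g]) for g in comps)
    # only literals remain: closed iff some contradictory pair appears
    return any(('not', g) in branch for g in branch)

def is_tautology(f):
    return closes([('not', f)])

p, q = ('atom', 'p'), ('atom', 'q')
print(is_tautology(('or', p, ('not', p))))       # True: excluded middle
print(is_tautology(('imp', ('and', p, q), p)))   # True
print(is_tautology(p))                           # False
```

Adding a new connective (say, a biconditional) would mean adding an entry to `classify`; the two cases in `closes` are untouched. That is the modularity in miniature.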

Thanks, Professor Smullyan!

