Over the last couple of days on Twitter, I was involved in a thread, kicked off by Dan Piponi, discussing closed set logic—the natural dual of intuitionistic logic in which the law of the excluded middle holds but the law of non-contradiction fails, and which has models in the closed sets of any topological space, as opposed to the open sets, which model intuitionistic logic.
This logic also has a nice sequent calculus in which sequents have one premise (or zero) and multiple conclusions. In the thread I made the claim that this is a natural and beautiful sequent calculus (it is!) but that the structure of the sequents means that the logic doesn’t have a natural conditional. The dual to the conditional (subtraction, $A - B$) can be defined, for which $A \vdash B \lor C$ if and only if $A - B \vdash C$. But the traditional conditional rules don’t work so well.
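To see subtraction at work, here is a small sketch (my own encoding, not from the post) of the co-Heyting subtraction on the closed sets of a tiny finite space, where $A - B$ is the closure of the set difference, together with a brute-force check of the adjunction $A \vdash B \lor C$ iff $A - B \vdash C$:

```python
from itertools import chain, combinations, product

# Points of a small space, ordered 0 < 1 < 2. The closed sets are the
# upward-closed sets, and closure is upward closure in the ordering.
X = {0, 1, 2}

def closure(s):
    """Upward closure of a set of points in the ordering 0 < 1 < 2."""
    return frozenset(x for x in X if any(y <= x for y in s))

# All closed sets: the closures of arbitrary subsets of X.
subsets = chain.from_iterable(combinations(sorted(X), n) for n in range(len(X) + 1))
closed = sorted({closure(s) for s in subsets}, key=len)

def subtract(a, b):
    """Subtraction A - B: the closure of the set difference, i.e. the
    smallest closed set C with A ⊆ B ∪ C."""
    return closure(a - b)

# The defining adjunction: A ⊢ B ∨ C holds iff A - B ⊢ C holds,
# where ⊢ is subset inclusion, ∨ is union.
for a, b, c in product(closed, repeat=3):
    assert (a <= b | c) == (subtract(a, b) <= c)
```

This space has four closed sets ($\emptyset$, $\{2\}$, $\{1,2\}$, $\{0,1,2\}$), which is the four-valued chain of the kind used later in the post.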
I realised, when I thought about it a bit more, that this fact is something I’ve just believed for the last 20 years or so, but I’ve never seen written down, so now is as good a time, and here as good a place as any, to explain what I mean.
Consider the conditional rules in the classical sequent calculus. They look something like this (give or take variations in the presentation, all equivalent given the classical structural rules):
Classical sequent rules for the conditional.
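In symbols, the classical rules are (my rendering of the standard presentation):

$$\frac{\Gamma \vdash A, \Delta \qquad \Gamma, B \vdash \Delta}{\Gamma, A \to B \vdash \Delta}\;[\text{L}{\to}] \qquad\qquad \frac{\Gamma, A \vdash B, \Delta}{\Gamma \vdash A \to B, \Delta}\;[\text{R}{\to}]$$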
If we restrict these rules to multiple conclusion single premise sequents, we get rules which look like these:
Closed set logic sequent rules for the conditional.
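Spelled out (my reconstruction, matching the discussion below, with $\Delta$ a finite set of alternate conclusions), the restricted rules are:

$$\frac{\vdash A, \Delta \qquad B \vdash \Delta}{A \to B \vdash \Delta}\;[\text{L}^*] \qquad\qquad \frac{A \vdash B, \Delta}{\vdash A \to B, \Delta}\;[\text{R}^*]$$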
You can see an immediate issue with the [R*] rule. The concluding sequent is $\vdash A \to B, \Delta$, which tells you when a conditional (with alternate conclusion cases) is derivable from no premises. It tells you nothing about when a conditional (with alternate conclusion cases) is derivable from another premise. It does not tell us what to do if we want to derive $C \vdash A \to B, \Delta$ in any case where the premise $C$ is doing some logical work. The best guidance we get is to ignore the $C$ and to hope that we can derive $\vdash A \to B, \Delta$ outright. (In classical logic, that’s fine, because we could stash the premise away as an alternate conclusion among the formulas in $\Delta$ on the right hand side, but in an asymmetric sequent calculus like this, that’s not necessarily within our powers.)
The fact that the rules seem too weak to constrain arbitrary sequents of the form $C \vdash A \to B, \Delta$ gives us a hint that these rules might not be strong enough to actually characterise or uniquely define the connective $\to$. And that hint bears out when you attempt to derive uniqueness. Here’s the issue. Imagine that you and I both use rules like these to define a conditional connective. Yours is $\to_1$ and mine is $\to_2$. Try to derive the sequent $A \to_1 B \vdash A \to_2 B$ and you’ll see that you get stuck:
Attempted derivation of an identity sequent.
You get stuck at just the point where we’d like to know when $A \to_2 B$ follows from other premises, and our rules give us no guidance at all. So, it looks as if our rules do not uniquely characterise the conditional in this sequent calculus.
That’s just a suspicion. It’d be nice to have a demonstration of this fact—an explanation of how it is that these rules could be interpreted in different, incompatible ways.
Here’s one way to show that the single premise / multiple conclusion conditional rules do not define a unique connective in closed set logic. We’ll use very simple algebras that are known to model closed set logic: finite total orders. To be concrete, we’ll interpret propositions as taking values from some given finite subset $V$ of the interval $[0,1]$, at least including $0$ and $1$, so each formula $A$ will take some value $v(A)$ in that set of values, and we’ll interpret a sequent $A \vdash B_1, \ldots, B_n$ as saying that $v(A) \le \max(v(B_1), \ldots, v(B_n))$, which amounts to saying that $v(A) \le v(B_i)$ for some $i$. And similarly, $\vdash B_1, \ldots, B_n$ amounts to $1 \le v(B_i)$ for some $i$. A sequent holds if the value of the left hand formula (or $1$, if the formula is absent) is less than or equal to the value of one of the right hand formulas. (If the language has conjunction and disjunction, you can interpret them as $\min$ and $\max$ respectively, and the top and bottom values are $1$ and $0$.)
(What has this to do with closed set logic? An $n$-valued algebra corresponds to the closed sets in the topological space of an $(n-1)$-element totally ordered set, where the closure of a set is its upwards closure in the ordering. $0$ is the empty set and $1$ is the whole space.)
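This order-based semantics is easy to put into code. A minimal sketch (my own encoding; the three-valued chain is just one example of a value set):

```python
# Formulas take values in a finite chain V ⊆ [0, 1] containing 0 and 1.
# A sequent A ⊢ B1, ..., Bn holds when the value of the premise
# (1, if the premise is absent) is ≤ the value of some conclusion.
V = [0, 0.5, 1]  # an example three-valued chain

def holds(premise, conclusions):
    """premise: a value in V, or None for an empty left-hand side."""
    lhs = 1 if premise is None else premise
    return any(lhs <= c for c in conclusions)

def conj(a, b):  # conjunction, interpreted as min
    return min(a, b)

def disj(a, b):  # disjunction, interpreted as max
    return max(a, b)

# Sanity checks: A ⊢ A ∨ B and A ∧ B ⊢ B hold for all values.
assert all(holds(a, [disj(a, b)]) for a in V for b in V)
assert all(holds(conj(a, b), [b]) for a in V for b in V)
```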
Now, look at what the sequent rules for a conditional mean in this setting. Collapsing the finite set $\Delta$ in the rules to a single formula $C$ for simplicity’s sake (without any loss of generality), [R*] tells us that if $A \vdash B, C$ then $\vdash A \to B, C$. That is, if $v(A) \le v(B)$ or $v(A) \le v(C)$ then either $v(A \to B) = 1$ or $v(C) = 1$. This is to hold for all values for $A$, $B$ and $C$. A little bit of algebraic manipulation shows that this is equivalent to saying that when $v(A) \le v(B)$ or $v(A) < 1$, then $v(A \to B) = 1$.
And [L*] tells us that if $\vdash A, C$ and $B \vdash C$ then $A \to B \vdash C$, for all $A$, $B$ and $C$. That is, if either $v(A) = 1$ or $v(C) = 1$, and $v(B) \le v(C)$, then $v(A \to B) \le v(C)$. Some more manipulation shows that this holds if and only if $v(A \to B) \le v(B)$ whenever $v(A) = 1$.
So, [L*] and [R*] are satisfied in our order models when we have:

If $v(A) < 1$ or $v(A) \le v(B)$ then $v(A \to B) = 1$, and if $v(A) = 1$ then $v(A \to B) \le v(B)$.
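The equivalence between the semantic content of the rules and the two constraints just derived ($v(A \to B) = 1$ when $v(A) < 1$ or $v(A) \le v(B)$; $v(A \to B) \le v(B)$ when $v(A) = 1$) can be brute-force checked on a small chain. A sketch (my own code; `V` is again an example three-valued chain):

```python
from itertools import product

V = [0, 0.5, 1]
pairs = list(product(V, repeat=2))

def satisfies_rules(arrow):
    """The semantic content of [R*] and [L*], with Δ collapsed to one formula C."""
    for a, b, c in product(V, repeat=3):
        # [R*]: if A ⊢ B, C then ⊢ A→B, C.
        if (a <= b or a <= c) and not (arrow[a, b] == 1 or c == 1):
            return False
        # [L*]: if ⊢ A, C and B ⊢ C then A→B ⊢ C.
        if (a == 1 or c == 1) and b <= c and not arrow[a, b] <= c:
            return False
    return True

def satisfies_constraints(arrow):
    """The two derived constraints on the value of A→B."""
    ok_r = all(arrow[a, b] == 1 for a, b in pairs if a < 1 or a <= b)
    ok_l = all(arrow[a, b] <= b for a, b in pairs if a == 1)
    return ok_r and ok_l

# Every candidate table arrow : V × V → V meets the rules
# iff it meets the constraints.
for values in product(V, repeat=len(pairs)):
    arrow = dict(zip(pairs, values))
    assert satisfies_rules(arrow) == satisfies_constraints(arrow)
```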
And this is enough to fix many of the values of $v(A \to B)$, but it is nowhere near enough to fix all of them. They do tell us that $v(A \to B) = 1$ whenever $v(A) \le v(B)$, and also when $v(A) < 1$. But the values of $v(A \to B)$ when $v(A) = 1$ and $v(B) < 1$ are less constrained. For example, these rules are satisfied by setting $v(A \to B) = v(B)$ in all those cases (and $v(A \to B) = 1$ elsewhere). And they’re also satisfied by setting $v(A \to B) = 0$ when $v(A) = 1$ and $v(B) < 1$, and $v(A \to B) = 1$ elsewhere. Provided that there’s at least one extra value between $0$ and $1$ in the ordering, that gives us wiggle room.
Two truth tables for conditionals.
No wonder we couldn’t show that $A \to_1 B$ entails $A \to_2 B$! In this case (when $v(A) = 1$ and $v(B)$ is some value strictly between $0$ and $1$) that inference takes us from a formula with value $v(B)$ to one with value $0$. That sequent isn’t valid.
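To see the wiggle room concretely, here is a sketch (my own code) of the two candidate conditionals on the three-valued chain $\{0, \tfrac12, 1\}$: both satisfy the constraints derived above, yet the sequent from one to the other fails.

```python
V = [0, 0.5, 1]

def arrow1(a, b):
    """First candidate: v(B) in the unconstrained case, 1 everywhere else."""
    return 1 if (a < 1 or a <= b) else b

def arrow2(a, b):
    """Second candidate: 0 in the unconstrained case, 1 everywhere else."""
    return 1 if (a < 1 or a <= b) else 0

def meets_constraints(arrow):
    ok_r = all(arrow(a, b) == 1 for a in V for b in V if a < 1 or a <= b)
    ok_l = all(arrow(a, b) <= b for a in V for b in V if a == 1)
    return ok_r and ok_l

# Both tables satisfy [L*] and [R*] ...
assert meets_constraints(arrow1) and meets_constraints(arrow2)

# ... but A →1 B ⊢ A →2 B fails when v(A) = 1 and v(B) = 1/2:
a, b = 1, 0.5
assert arrow1(a, b) == 0.5 and arrow2(a, b) == 0
assert not arrow1(a, b) <= arrow2(a, b)  # the sequent between them is invalid
```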
That’s what I meant when I said that the conditional rules were not so good in closed set logic. The rules tell us something about conditionals, but they are not specific or strong enough to characterise a single concept.
To receive updates from this site, subscribe to the RSS feed in your feed reader. Alternatively, follow me at @consequently@hcommons.social, where most updates are posted.