I’ve been thinking about generalisations of Russell’s paradox, cleaning things up so you can’t get around the problem by changing the logic of connectives. I don’t think that mucking around with negation or implication gets to the heart of the issue. (This view is shared by some very insightful people. I haven’t come to it alone.)
Getting around negation and conditionals is surprisingly easy, once you get the proof theory sorted out. I’ve been noodling about with this issue for a year or so now. I presented on this in a talk at the World Congress of Paraconsistency last year, and a bit of it has appeared in my draft discussion of some themes from Hartry Field’s Saving Truth From Paradox.
There, the paradoxical derivations are done in sequent calculi, and they’re not the most perspicuous presentation. I managed to sharpen it up a bit tonight, and the resulting proof is here. It’s not explained in the text of that note: that gives just the definitions and the proof. I hope to get to that soon. But let me use this site to get the ideas out in a rough and ready form.
The gist of the idea is this. Folks like Graham Priest, Hartry Field and Jc Beall think that for every description φ(x) there’s a property <x:φ(x)> of being an x such that φ(x). An object a instantiates the property <x:φ(x)> if and only if φ(a). The traditional problem is this: consider the property <x:x doesn’t instantiate x>. Does this instantiate itself or not? If it does, it doesn’t. If it doesn’t, it does.
The solutions favoured by Priest, Field and Beall (and my former self), though they differ in details, all agree that we should muck around with the logic of negation. (And also the logic of the conditional, as the property <x: if x instantiates x then I’m a monkey’s uncle> is just as problematic: see Curry’s paradox.)
Now, it’s a pain to worry about each different tweak to the logic of negation and the logic of the conditional, and worry about whether this patch or that fix really does solve the problem. (It’s a fun pain, if you like that kind of thing, but a pain nonetheless.)
I’ve been looking at formulations of the problem that avoid all talk of negation, conditionals and other stuff my friends and colleagues can argy-bargy about. Instead, I’m trying to make do with the logic of instantiation implicit in the so-called naïve theory of properties, on which each description φ(x) has a corresponding property <x:φ(x)> of being an x which is φ, and an object a instantiates <x:φ(x)> if and only if φ(a). So, we adopt two inference rules:
[εI] From φ(a) infer a ε <x:φ(x)>
[εE] From a ε <x:φ(x)> infer φ(a)
for each open sentence φ( ). (The ‘ε’ is our shorthand for ‘instantiates.’)
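To see the two rules in action, here is a trivial round trip (my illustration, using ‘is red’ as the open sentence):

```
a is red             ⊢  a ε <x: x is red>     [εI]
a ε <x: x is red>    ⊢  a is red              [εE]
```

Nothing more than that is built in: the rules just let predication and instantiation claims be traded for one another.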
Then, we need two more things. First, a sentence that is pretty bad. One from which we can infer everything will do the trick. (If you have a universal quantifier around, ‘everything instantiates everything’ will do nicely. But it isn’t mandatory.) In other words, we have a ‘⊥’ for which
[⊥E] From ⊥ infer any sentence you like.
Finally, we need the logic of identity for properties. You need to have some account of when <x:φ(x)> = <x:ψ(x)> for different sentences φ and ψ. It’d be odd to say that the property of being red and square was a different property from the property of being square and red, wouldn’t it? (The extant naïve theories of properties say little about this. The extant consistency or non-triviality proofs for naïve theories of properties, alas, make different descriptions denote different properties, which is not what you should want.)
So, what can we say that would rule out distinctions where there is no difference at all? What identity condition works for this sort of property? Extensionality is the identity condition for sets: if the things in set A are the same as the things in set B, then A and B are the same set. That’s clearly too strong for properties. (Think renates and cordates, or featherless bipeds and humans.) But if I can deduce a ε S from a ε T, and vice versa (where a is arbitrary), using deduction alone and no contingent side conditions, then what difference could there be between property S and property T? None that I can see, that’s for sure. This motivates the following condition.
[=I] If I can deduce a ε S from a ε T, and a ε T from a ε S, with no other side conditions, discharge those assumptions and infer S = T.
(Parenthetical remark: that doesn’t mean that being H2O is the same property as being water, unless you think you can infer that a is H2O from a is water, and vice versa, using logic alone. You can think that they are necessarily coextensive without thinking that. We’re not identifying properties coarsely.)
The rule [=I] tells us when two properties are identical. We need to know what we can infer from the claim that two properties are identical. That seems straightforward. You only get out what you put in:
[=E] From t ε S and S = T, infer t ε T.
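As a quick sanity check (my example, not from the note), these identity rules already do some welcome work: they identify properties given by mere alphabetic variants of the same description, using nothing beyond [εI], [εE] and [=I].

```
1.  a ε <x: φ(x)>           [assumption, a arbitrary]
2.  φ(a)                    [εE, 1]
3.  a ε <y: φ(y)>           [εI, 2]
    (and symmetrically, from a ε <y: φ(y)> back to a ε <x: φ(x)>)
4.  <x: φ(x)> = <y: φ(y)>   [=I, discharging the assumptions]
```

So <x: φ(x)> and <y: φ(y)> come out as one property, not two, which is as it should be.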
That’s five simple inference principles.
Those five inference principles are enough for you to deduce ⊥.
This is bad, since from ⊥ one can validly deduce everything.
How can we deduce ⊥? We use identity and ⊥ to do what we wanted negation to do before our friends and colleagues said negation didn’t do that. That is, consider this property:
<x:<y:x ε x> = <y:⊥>>
That is, consider the property of being an x such that the property that anything has when x instantiates itself is identical to the property that nothing has. (In other words, consider the property of not being self-instantiating, but we won’t put it that way, since the logic of negation is exactly what everyone is arguing about.)
Using [⊥E], [εI], [εE], [=I] and [=E] alone, we can deduce ⊥. Here’s the proof. It has fifteen steps, each one of which is one of those five rules.
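Since the linked figure isn’t reproduced here, here is one way the derivation can go. (This is my reconstruction; the fifteen-step figure in the note may carve the steps differently.) Abbreviate R for <x: <y: x ε x> = <y: ⊥>>, write P(t) for <y: t ε t> and F for <y: ⊥>, so that R ε R holds just when P(R) = F. Note that y is vacuous in the descriptions defining P(t) and F, so [εI] and [εE] move freely between t ε t and a ε P(t), and between ⊥ and a ε F, for any a.

```
1.   a ε P(R)     [assumption, a arbitrary]
2.   R ε R        [εE, 1: P(R) is <y: R ε R>, y vacuous]
3.   P(R) = F     [εE, 2: unfolding R’s defining description]
4.   a ε F        [=E, 1, 3]
5.   a ε F        [assumption, a arbitrary]
6.   ⊥            [εE, 5: F is <y: ⊥>, y vacuous]
7.   a ε P(R)     [⊥E, 6]
8.   P(R) = F     [=I: discharging 1 and 5, via 1–4 and 5–7]
9.   R ε R        [εI, 8: R’s defining description holds of R]
10.  R ε P(R)     [εI, 9: y vacuous in R ε R]
11.  R ε F        [=E, 10, 8]
12.  ⊥            [εE, 11]
```

Each step is an instance of one of the five rules, and no negation or conditional appears anywhere in the derivation.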
I think that this is a serious problem for anyone who likes naïve theories of properties. You’ve got to say which of those rules break down: and by ‘break down’ I mean something very precise. For which of the rules [⊥E], [εI], [εE], [=I] and [=E] are you prepared to accept the premise and reject the conclusion? If you can’t do that, then a forced march down the proof suffices to commit you to ⊥.
So, what will it be?