consequently.org: Greg Restall’s website
Greg Restall’s publications on logic and philosophy
greg@consequently.org
http://consequently.org/
Wed, 15 Nov 2017 09:24:19 AEDT
Presentations
http://consequently.org/presentation/
Proof Identity, Aboutness and Meaning
http://consequently.org/presentation/2017/proof-identity-aboutness-and-meaning/
Mon, 06 Nov 2017 00:00:00 UTC
<p><em>Abstract</em>: This talk is a comparison of how different approaches to hyperintensionality, aboutness and subject matter treat (classically) logically equivalent statements. I compare and contrast two different notions of subject matter that might be thought to be representational or truth-first – <em><a href="https://www.amazon.com/Aboutness-Carl-G-Hempel-Lecture/dp/0691144958/consequentlyorg">Aboutness</a></em> (Princeton University Press, 2014), and truthmakers conceived of as situations, as discussed in my “<a href="http://consequently.org/writing/ten/">Truthmakers, Entailment and Necessity</a>.” I contrast these with the kind of inferentialist account of hyperintensionality arising out of the <em>proof invariants</em> I have explored <a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">in recent work</a>.</p>
<p>This is a talk presented at the <a href="https://www.gla.ac.uk/schools/humanities/research/philosophyresearch/researchprojects/thewholetruth/formalphilosophy/">Glasgow-Melbourne Formal Philosophy Workshop</a>.</p>
<ul>
<li>The <a href="http://consequently.org/slides/proof-identity-aboutness-and-meaning.pdf">slides are available here</a>.</li>
</ul>
Negation on the Australian Plan
http://consequently.org/presentation/2017/negation-on-the-australian-plan/
Wed, 23 Aug 2017 00:00:00 UTC
<p><em>Abstract</em>: In this talk, I explain the difference between <em>Australian Plan</em> semantics for negation – which treat negation as a kind of negative modality – and semantics based on the <em>American Plan</em>, which conceive of negation in terms of independent truth and falsity conditions. I will update the presentation of the Australian Plan (introduced in the 1970s, in the early days of the ternary relational semantics for relevant logics) in the light of more recent developments in logic, and defend this updated plan in the face of some recent criticisms due to <a href="http://www.michaelde.com">Michael De</a> and <a href="https://sites.google.com/site/hitoshiomori/home">Hitoshi Omori</a>, in their paper “<a href="http://dx.doi.org/10.1007/s10992-017-9427-0">There is More to Negation than Modality</a>.” Along the way, I hope to draw out some insights into what we might want out of a representational semantics for a language with a consequence relation.</p>
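<p>A rough sketch of the contrast, in my own notation rather than the talk's: on the Australian Plan, negation is evaluated by quantifying over the points compatible with the point of evaluation, while on the American Plan each point carries truth and falsity conditions independently.</p>

```latex
% Australian Plan: negation as a negative modality, via a
% compatibility relation C between points (worlds or situations):
% not-A holds at x iff A fails at every point compatible with x.
x \Vdash \neg A
  \quad\text{iff}\quad
  \text{for all } y,\ \text{if } Cxy \text{ then } y \nVdash A

% American Plan: independent truth (+) and falsity (-) conditions,
% so a point may verify both A and not-A, or neither:
x \Vdash^{+} \neg A \ \text{iff}\ x \Vdash^{-} A
  \qquad
x \Vdash^{-} \neg A \ \text{iff}\ x \Vdash^{+} A
```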
<p>This talk is based on joint work with <a href="http://www.uva.nl/en/profile/b/e/f.berto/f.berto.html">Professor Franz Berto</a>, from the University of Amsterdam.</p>
<p>This is a talk presented at the <a href="http://blogs.unimelb.edu.au/logic/logic-seminar/">Melbourne Logic Seminar</a>.</p>
<ul>
<li>The <a href="http://consequently.org/slides/negation-on-the-australian-plan-logicmelb.pdf">slides are available here</a>.</li>
</ul>
News
http://consequently.org/news/
Tue, 19 Sep 2017 22:47:48 +1100

Learning and Teaching (the eleventh of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-11-learning-and-teaching/
Tue, 19 Sep 2017 22:47:48 +1100
<p>Working in philosophical logic, I love the opportunity to <em>learn</em> from so many people through history, and not only to <em>learn</em>, but to pass on a tradition, to extend it, and to refine it a little in passing it on. It’s been a delight to learn from some <a href="http://consequently.org/writing/logicians/">great figures</a>: the historical figures through their writing, and my contemporaries in person. As a student, I learned logic face-to-face from Sheila Oates-Williams, Neil Williams, Rod Girle, Ian Hinckfuss and Graham Priest, but the learning doesn’t stop when you finish your degree. I’ve learned much from colleagues (Bob Meyer, Richard Sylvan, John Slaney, Allen Hazen, Graham Priest (again), Zach Weber, Dave Ripley, Shawn Standefer), whose work I admire, and who generously give of their time at whiteboards, in seminars, and in many, many conversations. I have also learned a great deal from all of my <a href="http://consequently.org/writing/logicians/">graduate students</a>, who have sent me in directions I never expected to head. If learning logic is (in part) <a href="http://consequently.org/news/2017/twelve-things-05-recognition/">gaining facility with the concepts you have</a>, as well as <a href="http://consequently.org/news/2017/twelve-things-06-expansion/">acquiring new concepts</a>, then working <em>with others</em> is a very good way to learn. The knowledge you acquire is not merely <em>knowing that</em>; it is (at least in part) a set of skills, and skills are learned by practice. Sometimes skilled practice is best acquired when guided by others — explicitly, when the teacher knows she is <em>teaching</em> — or implicitly, when we observe an expert displaying expertise, and we can use their practice to scaffold our own, and learn by imitation.
I’ve learned so much from my teachers, not only in consciously building on their ideas, but in learning how to <em>be</em> a logician by starting in their footsteps.</p>
<p>One way to develop the tradition is to <a href="http://consequently.org/class/">teach</a>. A significant part of my job at the <a href="http://unimelb.edu.au">University of Melbourne</a> is to teach, and a lot of my time is spent teaching philosophical logic. Putting together a course is a good way of sorting out your ideas, and in philosophical logic, where the concepts very clearly and precisely build on prior ideas, it is a real intellectual challenge to see how you can teach a course in 12 weeks that, say, <a href="http://consequently.org/class/2017/PHIL30043">introduces Gödel’s incompleteness theorems</a> and everything you need to understand them. Designing the curriculum in such a way as to carry undergraduate students from the basics of predicate logic, through soundness and completeness, into Peano arithmetic, recursive functions, diagonalisation, and on to Gödel’s proofs — and to do it in such a way that there is hope that the students will survive the journey with their curiosity and interest intact! — forces you to come to grips with the concepts in a way for which a cursory understanding won’t suffice. I can truly say that I’ve come to understand things much more deeply when I’ve had the opportunity to <em>teach</em> them.</p>
<p>That’s why I’m enjoying <a href="http://consequently.org/writing/ptp/">writing my new book</a>. Officially, it is a research monograph and not a textbook, but one of the ways I’ve been describing it (to myself, and to others who ask me what I’m writing on) is that it’s trying to explain to people attracted to normative pragmatic theories of meaning what they can do with recent work in logic (in proof theory, in particular), and to explain to logicians what philosophy they need in order to understand why proof theory works so well. Yes, the core of it is a particular <em>argument</em> for a way of understanding the distinctive nature of logic, but that argument is buttressed by a lot of <em>showing</em> as well as <em>saying</em>. You’ve got to <em>learn some logic</em> to understand why it is the kind of thing it is, and to get a real sense of what it can <em>do</em> (and what it can’t). To do that is to <em>teach</em>, as well as to <em>argue</em>, and that is just how I like it.</p>
<p><em>Learning and Teaching</em> is the eleventh of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Possibility (the tenth of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-10-possibility/
Mon, 18 Sep 2017 14:38:20 +1100
<p>In the <a href="http://consequently.org/news/2017/twelve-things-09-necessity/">previous entry</a> I explored the connection between <em>proofs</em> and <em>necessity</em>. Here, I want to spend a little time exploring the other side of the logical street, the connection between <em>models</em> and <em>possibility</em>. As I have already explained, one core insight from 20th Century work in logic is the fundamental duality between proof theory and model theory. You can define logical notions like validity by way of proofs (a <em>valid</em> argument is certified by the existence of some proof) or by way of models (an argument is shown to be <em>invalid</em> by the existence of some model which serves as a <em>counterexample</em>).</p>
<p>Exploring proofs gives you an account of the different ways that concepts are tied together. (It gives you an account of what is involved in different kinds of necessary connections. The more different proofs you have, the more connections are possible.) Approaching the validity/invalidity boundary from the other side, by way of models, gives you a very different picture of this boundary. Defining more <em>models</em> means having more <em>counterexamples</em>.</p>
<p>Model building is one very fruitful way of articulating what is — and more importantly, what <em>isn’t</em> — a part of a theory. Suppose you are interested in some strange new theory. (Put yourself back into the 19th Century, and consider a strange newfangled theory of geometry, where you accept the first four of Euclid’s axioms, but you <em>deny</em> the parallel postulate.) You’ve got your collection of basic principles, but you’re not sure what follows from them. If you can manage to build a <em>model</em> for your theory, then this can begin to address the question of what the theory involves. In particular, <em>any</em> model can give you a decisive answer to some questions about what the theory <em>doesn’t</em> involve. If your model \(\mathfrak M\) gives you a way to interpret all of the basic principles of the theory as being true, and if some other claim \(A\) turns out to be <em>false</em> in \(\mathfrak M\), then you can see how \(A\) <em>doesn’t</em> follow from those basic principles. \(\mathfrak M\) gives you a picture of how the theory could be true, and in this case, it shows how \(A\) comes apart from the axioms of the theory. (So, if you think that \(A\) <em>should</em> be true, according to the theory you’re exploring, you need to supplement your axioms.)</p>
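<p>The countermodel reasoning in this paragraph can be put schematically (in standard notation, not tied to any particular theory):</p>

```latex
% If the model M makes every basic principle of the theory T true,
% while making the claim A false, then A does not follow from T:
\text{if}\quad \mathfrak{M} \models T
  \quad\text{and}\quad
  \mathfrak{M} \not\models A,
  \quad\text{then}\quad
  T \not\models A
```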
<p>Having a model on hand — in and of itself — gives you little information about what <em>does</em> follow, because theories can have more than one model, in which different things hold. If something is <em>not</em> true in a model for your theory, that tells you that it is not a consequence of the theory’s first principles; but when something <em>is</em> true in a model for your theory, that’s not necessarily enough to show you that it is a consequence of the first principles of the theory. After all, it might hold in <em>some</em> models, and not others. To show that something does follow (given the soundness and completeness theorems) we need to show that it holds in <em>every</em> model of the theory.</p>
<p>So, models, in and of themselves don’t do <em>everything</em>, but they are an excellent way to open up new areas of logical space. The development of models of non-Euclidean geometries helped us expand our understanding of what is involved in talk of points and lines. The development of models of different modal logics or non-classical logics helps us come to grips with different options for how basic propositional notions such as conjunction, disjunction, negation, conditionality, possibility and necessity might fit together. Models are useful tools for sketching out options.</p>
<p>So, models for theories give us powerful tools for exploring logical notions, and they provide an especially powerful way for expanding our bounds of understanding what is <em>possible</em>. Constructing a model of a theory is one way to show how that theory <em>could be true</em>. I love the way in which logical space is a wide plenitude of possibility, and the way the different antecedents for a “what if…” lead us in so many different directions.</p>
<p><em>Possibility</em> is the tenth of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Necessity (the ninth of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-09-necessity/
Thu, 14 Sep 2017 23:55:19 +1100
<p>The next two thoughts are motivated by the two complementary aspects of contemporary research in logic, <em>proof theory</em> and <em>model theory</em>. As I try to emphasise to my students, there are two broad ways you can define logical concepts like <em>validity</em>. Following the way of <em>proofs</em>, an argument is valid if there is <em>some</em> proof leading from the premises to the conclusion. Following the way of <em>models</em>, an argument is valid if there is <em>no</em> model in which the premises are true and the conclusion is not. In proof theory, validity is vouchsafed by the existence of something: a <em>proof</em>, which certifies the claim to validity. Invalidity is the absence of such a certificate. In model theory, <em>in</em>validity is vouchsafed by the existence of something: a <em>model</em> — a <em>counterexample</em> to the claim of validity. Validity is the absence of any such counterexample. It was a great intellectual advance to understand that these are two very different ways to define logical concepts, such as validity, and it was a further advance to be able to rigorously prove that (on certain understandings of logic, such as classical first-order predicate logic), these two different kinds of definitions can coincide to determine the <em>same</em> concept. A <em>soundness</em> theorem (relating an account of proofs and an account of models) shows that these two notions don’t <em>clash</em>: you never get both a proof showing that some argument is valid, <em>and</em> a model showing that it is invalid. A <em>completeness</em> theorem (also relating an account of proofs and an account of models) shows that these two notions cover the whole field — for each argument, we <em>either</em> have a proof (showing it is valid) or a counterexample (showing that it isn’t).</p>
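<p>In the usual notation, with \(\vdash\) for provability and \(\models\) for truth-preservation over models, the two theorems can be stated like this:</p>

```latex
% Soundness: whatever is provable has no countermodel
% (the two notions never clash).
\Gamma \vdash A \quad\Longrightarrow\quad \Gamma \models A

% Completeness: whatever has no countermodel is provable;
% equivalently, an unprovable argument has a countermodel
% (the two notions cover the whole field).
\Gamma \models A \quad\Longrightarrow\quad \Gamma \vdash A
```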
<p>Having a sound and complete account of proofs and models for a particular understanding of validity gives you a very powerful toolkit: you can approach a question concerning validity in two distinct ways, by the way of proofs (attempting to build a bridge from the premises to the conclusions, or showing that there isn’t any) or by the way of models (attempting to show that there is a chasm between the premises and the conclusions by showing that there is some way to make the premises true and the conclusions untrue, or again, showing that there isn’t any). These two ways of accounting for validity have very different affordances: they are good for different things, both mathematically or technically, and philosophically or conceptually.</p>
<p>I am particularly interested in the kinds of conceptual gains that are possible when applying notions of proof and notions of model, and the modes of thinking that are involved when using these different tools.</p>
<p>One connection that I am beginning to learn is the intimate connection between proof and <em>necessity</em>, between logical consequence and the hardness and fixity of the logical <em>must</em>. It is one thing to think that an argument is valid, in the sense that it happens to fail to have a counterexample. It is another to have an account of <em>why</em> it is valid. What a proof gives you is some kind of account of <em>how</em> you can get from the premises to the conclusion. This kind of thing is quite powerful, especially given the generality of logical concepts. The power of concepts like conjunction, negation, and the quantifiers (I think) is that our norms and rules for using them apply under the scope of suppositions, whether those suppositions are subjunctive alternatives — suppose that \(A\) had been the case — or indicative alternatives — suppose that, after all, \(A\) is actually true. If we suppose that \(A\land B\) is true, it’s still totally appropriate (under the scope of that supposition) to deduce \(A\) and to deduce \(B\): the usual rules for conjunction still apply. A <em>proof</em> (on this view) from premises to a conclusion is the kind of chain of reasoning which will work under any supposition. It shows us how the conclusion is already present, implicit in the premises. To have granted the premises is to be committed (at least implicitly) to the conclusion, and the proof renders that consequential commitment <em>explicit</em>. Of course, when confronted with a proof of an unacceptable conclusion from premises you have accepted, one appropriate response would be to reject one or another of the premises, and to resist the conclusion. That is always an option.</p>
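<p>The conjunction example can be displayed as the familiar elimination rules; the point above is that these rules remain in force unchanged inside the scope of any supposition:</p>

```latex
% Conjunction elimination: from A and B together, infer either
% conjunct. These inferences apply under the scope of any
% supposition, whether subjunctive or indicative.
\frac{A \land B}{A}\;{\land}\mathrm{E}_1
  \qquad
\frac{A \land B}{B}\;{\land}\mathrm{E}_2
```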
<p>This brings logic up close to issues in <em>metaphysics</em>, in <em>epistemology</em> and in <em>philosophy of language</em>. In metaphysics, we ask questions about the ultimate nature of reality, and the bounds of what is possible, or what is necessary. Of how reality is and how it must be. The kind of necessary connection between premises and conclusion of a valid argument must bring us up to the boundary of metaphysical necessity. If something is metaphysically <em>possible</em>, then it must count as at least logically possible. If there is a way the world is that makes \(A\) true, then \(A\) cannot be logically inconsistent. If we could prove a triviality from \(A\), then this argument would apply were the world to be the way that possibility describes. Proofs in logic tell us <em>something</em> about what is necessary. (Of course, this isn’t to say that anything that is necessary is vouchsafed by a proof. That would be to say much more.)</p>
<p>Similarly, proofs can also play an <em>epistemic</em> and <em>dialogical</em> role. Provided that you and I agree on the norms governing our logical vocabulary, then if we possess a proof from \(A\) to \(B\), we agree that it’s out of bounds to accept \(A\) and reject \(B\). The proof can show us this much; it helps map out the conceptual topography and the space of possible options for us, even if we disagree on which options to take (perhaps you accept \(A\) and \(B\), and I reject both). A proof will do this work even if we disagree on matters of necessity. Perhaps I take \(A\) not only to be false but to be <em>impossible</em>, and you take \(B\) to be <em>necessary</em>. (Such disputes are common in philosophy.) Regardless of the fact that one or other of us may be beyond the bounds of possibility, dispute here can still be rational. If, in the course of our reasoning, I begin to take your position as a live option (this is surely possible), I now have two positions before me: to accept \(A\) and \(B\), and to take them as <em>necessary</em>, or to reject \(A\) and \(B\) and to take them as <em>impossible</em>. When I do this, I can take something to be an <em>epistemic possibility</em> (a live option) which I think may also be metaphysically <em>impossible</em>. When we use the tools of <em>proofs</em>, we have guides to help see what positions are open to us, even if this does not tell us the whole story of which position may be best to take.</p>
<p>I love the way in which the <em>necessity</em> of the logical <em>must</em> brings us right up to concerns of metaphysics and epistemology, of the nature of reality and what options we have as we attempt to understand it.</p>
<p><em>Necessity</em> is the ninth of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Attention (the eighth of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-08-attention/
Wed, 13 Sep 2017 14:01:56 +1100
<p>I’m not totally happy with the word for the next item on the list of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things I love about philosophical logic</a>. The word on the list is <em>attention</em>, and it gets at something that I have learned, and which seems to me to be an important distinctive feature of working in <em>philosophical</em> logic, but I’m not altogether sure that “attention” is the best word for it. Maybe after I’ve explained what I mean, you could suggest a better short label for the phenomenon I’m gesturing towards.</p>
<p>Here’s the core idea: when you spend time working with core logical notions such as <em>consequence</em>, <em>consistency</em>, <em>necessity</em>, <em>possibility</em>, <em>model</em> and <em>proof</em>, you notice that you are attending to judgements and thoughts and claims in more than one way. You learn to distinguish between taking a claim to be <em>true</em>, and considering it as <em>possible</em>. You can agree that even though \(p\) isn’t true, it is <em>consistent</em> with \(q\). You can agree that it’s not true that \(p\) while still seriously entertaining what it would be like <em>were</em> \(p\) to be true. Working with \(p\) as an hypothesis is not the same thing as taking it to be true. But even though working under the supposition that \(p\) is not the same thing as taking \(p\) to be true, it is related intimately to it. You don’t just consider \(p\) from the “outside.” (Say: look at those crazy people who believe \(p\)! Aren’t they weird?) Instead, you “try it on for size” in the sense that you let your inferential norms and processes act on \(p\) as if it were one among the other things you are working with. You temporarily adopt \(p\) into your view of the world, or you change perspective and attempt to see what things look like from the other side of the street, backgrounding your prior commitment to \(\neg p\) (if you actually believe \(p\) is false), and trying a different set of commitments on for size. This moves you in the direction of a kind of intellectual sympathy. You can gain some insight concerning some of what it would be like to actually see things from \(p\)’s point of view.</p>
<p>This is just one way in which familiarity with core concepts of logic facilitates distinct skills for attending to judgement, and thereby, of paying attention to how we attend to the world around us, too.</p>
<p>With all that said, I’m not totally happy with “attention” as the word for this — the skill attained is not the acquisition of <em>sustained, focussed attention</em>. If you’re anything like me, your attention is often scattered and unfocussed, and you’re easily distracted; if my history of 25 years working in philosophical logic is any testament, becoming a logician is not, in and of itself, a great help in dealing with distraction. (There are other practices which foster sustained, focussed attention and awareness: meditation, prayer, reading, physical activity, and so on.) No, what is involved is a kind of suppleness of attention: the ability to shift between different positions, to creatively see things from different sides, and to take in different views. It’s those skills of attention that can be fostered when you spend time with the core concepts of logic.</p>
<p>So, here is another thing that I love in working in philosophical logic—how growing into mastery of core logical concepts has these kinds of consequences for my own thinking, my own <em>attention</em>, and as a result my own <em>life</em>.</p>
<p><em>Attention</em> is the eighth of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Pragmatics (the seventh of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-07-pragmatics/
Tue, 12 Sep 2017 12:53:42 +1100
<p>Some of my phrasing in the last <a href="http://consequently.org/news/2017/twelve-things-05-recognition/">two</a> <a href="http://consequently.org/news/2017/twelve-things-06-expansion/">posts</a> about what I love about philosophical logic has emphasised <em>capacities</em>, or <em>abilities</em>. I’ve described the <a href="http://consequently.org/news/2017/twelve-things-05-recognition/">pleasure of the “<em>aha</em>!” moment</a> in terms of the kinds of mastery you acquire in handling the concepts you have, and I described <a href="http://consequently.org/news/2017/twelve-things-06-expansion/">the joys of conceptual expansion</a> in terms of abilities gained. This is to take a <em>pragmatic</em> perspective on logic, to consider the connection to practices and actions.</p>
<p>Thinking of things “pragmatically” can be understood in a very crude way: fixing in advance how you want to measure costs and benefits, doing some naïve cost/benefit calculation, and then choosing the option that somehow maximises benefits and minimises costs (if that is even possible). This is not what I mean when I consider the connection between logic and pragmatics. I don’t think that the best way to select some logical system or logical theory is on the basis of that kind of cost/benefit analysis. Rather, it’s that there are connections between features of logical systems and the practices of <em>asserting</em>, <em>denying</em>, <em>inferring</em>, <em>questioning</em>, etc. What kind of connections are there? It’s not that the laws of logic are descriptively correct as a theory about how assertion and denial and inference actually work in practice. Rather, they can be understood as norms governing how those acts can be evaluated. (In particular, I think that if the argument from the premise \(A\) to the conclusion \(B\) is <em>valid</em>, then taking a position in which \(A\) is asserted and \(B\) is denied is <em>out of bounds</em>. If you’ve asserted \(A\), and \(B\) follows from \(A\), then in some sense \(B\) is <em>undeniable</em>, in that any positions where you rule \(A\) in and \(B\) out are out of bounds. For more on this, take a look at my “<a href="http://consequently.org/writing/multipleconclusions">Multiple Conclusions</a>”, the <a href="https://scholar.google.com.au/scholar?cites=2800898225913341308&as_sdt=2005&sciodt=0,5&hl=en">critical literature it’s spawned</a>, and the <a href="http://consequently.org/writing/ptp">manuscript I’m working on</a> right now.) There’s much more to say about this, but I think that it’s a clarifying perspective on the connection between core concepts in logic and the different kinds of acts we can actually engage in, like asserting or believing, denying or rejecting.
It is a <em>normative pragmatic</em> position concerning logic, that the concepts provide norms or standards by which acts can be evaluated.</p>
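<p>Schematically, one way to write the constraint (the bracket notation here is mine, for illustration):</p>

```latex
% If the argument from A to B is valid, then the position that
% asserts A and denies B is out of bounds. Writing [X : Y] for
% the position asserting each member of X and denying each
% member of Y:
A \vdash B
  \quad\Longrightarrow\quad
  [A : B] \ \text{is out of bounds}
```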
<p>You might worry that thinking of logic in terms of rules for a contingent human practice makes those rules themselves contingent too. Now, there’s nothing wrong with laws and rules being contingent. However, it’s a radical view of <em>logic</em> that takes the rules of logic to be contingent on human practices. If it’s a law of logic that either \(p\) or not \(p\), for all propositions \(p\), then it would seem to follow that either all non-avian dinosaurs were extinct by 65 million years ago, or not all non-avian dinosaurs were extinct by 65 million years ago, and that this <em>still</em> would have been the case even if there weren’t any people (or sentient creatures) around to reason about it. Contingently existing human reasoners like us can reason about all sorts of things, including what went on before contingently existing human reasoners existed.</p>
<p>Here’s an analogy I find compelling and clarifying when it comes to understanding how we can have a contingent practice with non-contingent rules: the example of arithmetic and our counting practices. It’s contingently useful to creatures like us to engage in counting practices, introducing vocabulary for things we call “numbers”, which codify various practices of enumerating and pairing things up. Given that we want to engage in such a contingent practice (we have reason to keep track of the number of sheep in our flock, to make sure trades are fair, and so on, but there was a time before any humans were doing these things), we have (again, contingent) reasons to use counting practices like those we actually have. At the very least, our counting practices give us ways to attend to patterns among practices of pairing things up. (It’s far easier to figure out that, if you have five sheep and I’ve promised you three bags of grain for each of your sheep, I’ll owe you 15 bags of grain, than to laboriously pair up three bags with each sheep.) But the (contingently existing) practice of using number vocabulary in this way gets its power by having results that apply <em>invariantly</em> and <em>necessarily</em>. It’s not necessary that we have the concepts of <em>3</em> and <em>5</em> and <em>multiplication</em>, but it is necessary that if we do have concepts like these, governed in this way, then no matter what they’re counting, 3 times 5 is 15, and necessarily so. (Why do we want such <em>necessity</em>? I’d say that this is tied up with the interaction between counting and planning: it is also true that if I’ve promised three bags of grain for every sheep, then if I want 2 sheep, I owe you 6 bags; 3 sheep, 9; and so on. It’s not that the rules of counting apply differently in different hypothetical scenarios. They are applied the same way in all hypothetical scenarios.)
A practice can be contingently useful while having norms that apply non-contingently.</p>
<p>The same holds, I think, for so-called logical laws, and the account <a href="http://consequently.org/writing/multipleconclusions">I prefer</a> puts this down to norms governing assertion and denial. The laws of logic can be understood as arising out of fundamental norms governing the practices of assertion and denial, and their interrelationship. The <em>generality</em> of certain laws of logic can be explained in terms of norms applying to assertion and denial <em>as such</em>, independently of any specific subject matter of those assertions and denials. The behaviour of the regular propositional connectives can be explained as ways to make explicit what is already implicit in the practice of assertion and denial. The behaviour of modal operators can be understood in terms of making explicit <a href="http://consequently.org/writing/cfss2dml/">norms governing different kinds of supposition</a>, while those for quantifiers make explicit relations of <a href="http://consequently.org/writing/generality-and-existence-1/">substitution and generality</a>, once the practice of assertion and denial is rich enough to involve singular terms. The story, I think, is rich in connections.</p>
<p>I love how philosophical logic is—when rightly understood—tied up with <em>practices</em> and <em>activities</em>. Through these connections, we see that understanding the grounds of our conceptual capacities brings logicians into the realm of practical action, in a way rather different from the picture of logicians as calculators solving predefined problems. Instead, logic is a normative discipline which describes some of the norms governing practices of assertion, denial, description, theorising, conjecture, and the like.</p>
<p>You don’t often find human concerns, our own contingent and local interests, preferences and desires — let alone the social and political concerns of life in a community — playing an explicit role inside a philosophical logician’s proof. These would be as alien there as they would inside a mathematical demonstration. However, this does not mean that these considerations are divorced from philosophical logic. After all, our interest in matters of logic has grown up with our interest in the communicative practices of asserting, denying, arguing and reasoning, and those are nothing if not social practices. Our own contingent and local circumstances and interests help explain why concepts like those from logic are worth using. It is a loss to the discipline if we don’t heed that connection.</p>
<p>The connection with <em>pragmatics</em> is the seventh of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Conceptual Expansion (the sixth of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-06-expansion/
Mon, 11 Sep 2017 09:26:50 +1100http://consequently.org/news/2017/twelve-things-06-expansion/<p>There are different delights to be found in working with concepts. It is not all a matter of <a href="http://consequently.org/news/2017/twelve-things-05-recognition/">gaining greater mastery</a> of the concepts you have already acquired. There is also a special delight to be found in acquiring <em>new</em> concepts. I love that feeling of progress when you make a conceptual advance. A common way to do this is to <em>disambiguate</em>, to clarify matters by noticing that what you took to be one thing is really <em>two</em>. This is the clarity gained in uncovering a hidden confusion, the moment when ideas are sharpened and distinguished, when you form a new vocabulary and, as a result, you are able to say things you couldn’t express before. This is one way to reap the rewards of <em>conceptual expansion</em>. Our repertoire of concepts is larger than it was.</p>
<p>Here’s an example of this phenomenon. (It’s not an uncontroversial example, but it’s one that I find quite compelling.) Consider what it means to <em>suppose</em> something. This is something we do regularly when we’re reasoning, when we’re planning, considering options, or discussing something with people who have different views. Often we “try a claim on for size”, suppose it’s the case, and reason from there. (This kind of dialectical move is something we not only <em>do</em>, it’s also at the heart of different accounts of the <a href="http://consequently.org/writing/ptp">structure of proof</a>.)</p>
<p>It’s an insight — a conceptual advance — to notice that the act of supposition can take different forms, that not all supposition is the same.</p>
<figure>
<img src="http://consequently.org/images/kennedy-shot.jpg" alt="New York Times Kennedy cover">
<figcaption>Oswald shot Kennedy. But what if he hadn't? Or what if he <b>didn't</b>?</figcaption>
</figure>
<p>If I suppose that it’s not the case that Oswald shot Kennedy, I can do this in two different ways.</p>
<p>I can <em>counterfactually</em> suppose (I can suppose that Oswald <em>hadn’t</em> shot Kennedy) and here I still grant that Oswald <em>did</em> shoot Kennedy (or at least, I leave it <em>open</em> that he did) and I could explore what would have followed in the (not-necessarily actual) circumstances where he <em>didn’t</em>. We do this sort of thing when we plan for the future or regret the past. We consider alternate possibilities, with an eye to understanding what we can do, and what options are available to us.</p>
<p>But this is not the only form of supposition: we <em>indicatively</em> suppose when we ask ourselves whether we might be wrong, or when we consider what things are like from a point of view other than ours. When we suppose that Oswald <em>didn’t</em> shoot Kennedy, we take on a different view of how things <em>are</em>. I consider the option that it didn’t happen as I have taken it to happen, but that it actually happened in some other way. This is the kind of supposition involved when we’re considering opposing views and weighing different theories of how things are.</p>
<p>Once you realise that we can be doing different things in different forms of supposition, you have the space to allow these kinds of supposition to operate in different ways, and to explore their distinct features. (This is the direction I pursue things in my paper on a <a href="http://consequently.org/writing/cfss2dml/">cut-free hypersequent calculus for two-dimensional modal logic</a>.)</p>
<p>The generation of <em>new</em> concepts is a different kind of conceptual mastery than the working out of consequences I discussed in the <a href="http://consequently.org/news/2017/twelve-things-05-recognition/">previous entry</a>. Here, instead of gaining mastery of an already established practice, we institute a new practice, and gain the ability to say new things.</p>
<p><em>Conceptual Expansion</em> is the sixth of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
The Moment of Recognition (the fifth of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-05-recognition/
Sat, 09 Sep 2017 14:29:39 +1100http://consequently.org/news/2017/twelve-things-05-recognition/<p>Here is a more personal reflection on what I love in working in philosophical logic.</p>
<p>I love the “<em>aha</em>!” moment of <em>recognition</em>. This is the relief of a proof completed, or a counterexample found. It is the delight of gaining clarity into something that you had only dimly understood, or the dawning realisation that an assumption you had made is in fact false and a whole new vista of possibilities opens up to you.</p>
<p>The particular kind of “aha” that I mean is the kind where you’re working out the consequences of something you already know. This can be understood as a kind of <em>mastery</em> that is gained when you become familiar with the conceptual tools you’re using. It is the acquisition of greater skill.</p>
<p>This is the “aha” that students in my second year logic class experienced when they figured out for themselves that not all symmetric and transitive relations must be reflexive. In one sense, they already <em>knew</em> the definitions of these concepts (at least, most of them did) and this fact was implicit in what they already knew, but now they had figured this out for themselves — they <em>saw</em> it for themselves. They understood something new about how these concepts fit together, how they relate.</p>
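<p>The counterexample those students found can be checked mechanically. Here is a minimal sketch in Python (the particular relation and the helper functions are my own illustrative choices, not anything from the class): a relation that is symmetric and transitive, but not reflexive, because one element of the domain is related to nothing at all.</p>

```python
def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def is_reflexive(R, domain):
    return all((x, x) in R for x in domain)

domain = {1, 2, 3}
R = {(1, 2), (2, 1), (1, 1), (2, 2)}   # 3 is related to nothing

# Symmetric and transitive, yet not reflexive: (3, 3) is missing.
assert is_symmetric(R)
assert is_transitive(R)
assert not is_reflexive(R, domain)
```

The familiar argument that symmetry and transitivity yield reflexivity (from <em>aRb</em> and <em>bRa</em>, infer <em>aRa</em>) only covers elements that are related to <em>something</em>, and element 3 here is not.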
<p>There is a <em>lot</em> of scope for this when working in philosophical logic. We’re pushing concepts to their limits, finding the boundaries of conceptual space. We map out its topography. Sometimes you <em>think</em> that things hang together in some way (say, your examples of symmetric and transitive relations all happened to be reflexive, too) and then you suddenly <em>see</em> that they’re not. That moment of recognition is the dawning of new light, the opening up of new territory, the acquisition of new conceptual capacities, and moments like these are something to be treasured.</p>
<p><em>The Moment of Recognition</em> is the fifth of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Interdisciplinarity (the fourth of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-04-interdisciplinarity/
Fri, 08 Sep 2017 14:27:05 +1100http://consequently.org/news/2017/twelve-things-04-interdisciplinarity/<p>The <a href="http://consequently.org/news/2017/twelve-things-03-multiple-realisability/">multiple realisations</a> of a concept in logic often come from different disciplines. One thing I’ve grown to love in philosophical logic is the way different ideas, disciplines and traditions are connected in the space of the wider generality of formal logic. In my own work over the years in <a href="http://consequently.org/writing/isl">substructural logic</a>, <a href="http://consequently.org/writing/pluralism/">logical pluralism</a> and <a href="http://consequently.org/writing/ptp">proof theory</a> (among other things), traditions in computer science, linguistics, mathematics and philosophy have all played distinct roles.</p>
<p>Each discipline has its own examples, its own traditions, its own heroes, its own villains—and its own concerns. If you are aware of the distinctive features of different traditions, this allows for the strengths of those disciplines to shine, for the insights and examples of one discipline to be brought to bear on the questions and problems of others. If you’re a philosopher, you should, by nature, be interested in more than your own traditions—or at least you should if you understand philosophy in the way Wilfrid Sellars did:</p>
<blockquote>
<p>The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term. — Wilfrid Sellars, “Philosophy and the Scientific Image of Man”</p>
</blockquote>
<p>Through the meeting ground of <em>logic</em>, the linguist can speak to the mathematician, the computer programmer to the philosopher. All too often, we don’t step outside our own disciplinary bubbles, but in logic, staying inside your own hermetically sealed discipline actually takes effort on your own part. From the late 20th Century into the early 21st, the best work in logic has been done by linguists, computer scientists, mathematicians and philosophers. No one academic discipline is in the ascendancy in logic. You’re missing out if you don’t attend to the richer tapestry of that work, and in particular, you have much to gain by learning from the best work in traditions other than your own. Since logic is such a well-worn meeting place between these disciplines, those who have some training in logic have a head start when it comes to translating from one tradition to the other.</p>
<p>Sometimes interdisciplinarity is understood as a relatively recent trend, and in many cases it is. Regardless, the concerns of logic naturally lend themselves to application in many different fields where we are concerned with judgement, with truth, with the way our claims hold together and bear on each other—and that is a broad tapestry. Logic has <em>always</em> been connected to philosophy and to mathematics, and with the rise of newer disciplines such as linguistics and computer science, the concerns of logic are deeply embedded in many domains of inquiry. Being a logician gives you a passport into these fields, and it is a pleasure to be able to venture widely, and to enjoy different scenery.</p>
<p><em>Interdisciplinarity</em> is the fourth of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Multiple Realisability (the third of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-03-multiple-realisability/
Thu, 07 Sep 2017 15:34:51 +1100http://consequently.org/news/2017/twelve-things-03-multiple-realisability/<p>Closely connected to the notion of <em>abstraction</em>, I love the way that logical concepts are <em>multiply realisable</em>. An abstract structure can be instantiated in different ways, and often in ways completely unforeseen when the original abstraction was made.</p>
<p>The twin moves of abstraction (moving from the particular to the general) and concretisation (going back from the general to the particular—perhaps to a new and <em>different</em> particular) in different domains bring different insights, different models, and new connections. These new connections often bring fresh insight.</p>
<p>For example, the simple notion of a <em>Boolean algebra</em> can be instantiated as a power set algebra (think of the subsets of a set and the operations of union, intersection and complementation). But this simple idea of a power set Boolean algebra can then be understood in different domains of application: you can think of the underlying set as a domain of <em>objects</em> and the subsets are extensions of different predicates. Or you can think of them as a set of possible worlds, and the subsets are propositions. And so on. Shifting from one representation to another is often conceptually fruitful when thinking about these different domains, or thinking about the ways that the formal techniques are applied. And you can answer questions about anything that counts as a Boolean algebra in one go. Here’s how the Hungarian recursion theorist Rózsa Péter put it:</p>
<blockquote>
<p>The writing down of a formula is an expression of our joy that we can answer all these questions by means of one argument.</p>
<p>— Rózsa Péter <em><a href="https://www.amazon.com/Playing-Infinity-Mathematical-Explorations-Excursions/dp/0486232654/consequentlyorg">Playing with Infinity</a></em></p>
</blockquote>
<figure>
<img src="http://consequently.org/images/Rozsa-Peter.jpg" alt="Rózsa Péter">
<figcaption>Rózsa Péter.</figcaption>
</figure>
<p>But more than that, once you’ve studied Boolean algebras for a while you learn that not all Boolean algebras are (isomorphic to) power set algebras. Given a domain of objects, you can get a Boolean algebra out of less than the collection of <em>all</em> of the subsets of the underlying set of objects.</p>
<p>Here’s an example: take an infinite set <em>D</em>, and think of the finite subsets of <em>D</em> — those with finitely many members — together with the co-finite subsets of <em>D</em> — those which contain all of the elements of <em>D</em> <em>except</em> for finitely many members. (In the case of the natural numbers, the set {0,1,2} and the set of all numbers <em>other than</em> 1, 2 and 3 would both count, but the set of even numbers doesn’t, because it contains infinitely many members and excludes infinitely many members too.) Then these sets, the finite and co-finite subsets of <em>D</em>, form a Boolean algebra. (The union, intersection or complement of finite and co-finite sets is itself finite or co-finite.)</p>
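<p>The closure claim in that last parenthesis can be made concrete in code. Here is a sketch of my own (the tagged-pair representation is an illustrative choice, not anything from the text): a finite or co-finite subset of an infinite set is stored as a tag plus a finite set, and every Boolean operation returns another such pair.</p>

```python
# A finite subset is ('fin', S); a co-finite subset ('cofin', S) is
# everything in the infinite domain *except* the finite set S.

def complement(x):
    kind, s = x
    return ('cofin', s) if kind == 'fin' else ('fin', s)

def union(x, y):
    (kx, sx), (ky, sy) = x, y
    if kx == 'fin' and ky == 'fin':
        return ('fin', sx | sy)
    if kx == 'fin':                      # fin ∪ cofin excludes only sy − sx
        return ('cofin', sy - sx)
    if ky == 'fin':
        return ('cofin', sx - sy)
    return ('cofin', sx & sy)            # cofin ∪ cofin

def intersection(x, y):                  # via De Morgan duality
    return complement(union(complement(x), complement(y)))

def member(n, x):
    kind, s = x
    return (n in s) if kind == 'fin' else (n not in s)

a = ('fin', {0, 1, 2})                   # {0, 1, 2}
b = ('cofin', {1, 2, 3})                 # everything except 1, 2 and 3
assert union(a, b) == ('cofin', {3})     # everything except 3
assert intersection(a, b) == ('fin', {0})
```

Since each operation returns another 'fin' or 'cofin' pair, the finite and co-finite sets are closed under union, intersection and complement, which is just what the Boolean algebra claim requires.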
<p>Once you see this, you see that there’s nothing in the idea of a Boolean algebra that means that the structure of truth values, or extensions of predicates, or propositions, <em>must</em> look like a power set Boolean algebra. Maybe it does, but maybe it doesn’t.</p>
<p>The move from the particular to the general and back not only allows us to transfer insights from one domain to another, it also means that we can gain insight into unconsidered possibilities, and different ways our concepts can be realised.</p>
<p><em>Multiple realisability</em> is the third of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Abstraction (the second of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-02-abstraction/
Wed, 06 Sep 2017 13:00:03 +1100http://consequently.org/news/2017/twelve-things-02-abstraction/<p>As I mentioned in the previous entry, philosophical logic uses the tools and techniques from formal logic, and formal logic is nothing if it is not <em>abstract</em>. It gets its power — as well as its weaknesses, to be sure — by abstracting away from specifics and moving to generalities. We explain the virtues of a particular argument (in part) by looking at its form, the structure it has in common with other arguments of the same shape. This goes back, at least, to Aristotle, who taught us that it isn’t a coincidence that both syllogisms</p>
<blockquote>
<p>All footballers are bipeds. All bipeds have feet. Therefore all footballers have feet.</p>
<p>All wombats are cute. All cute things are popular. Therefore all wombats are popular.</p>
</blockquote>
<p>have similar virtues. At the very least, they’re both <em>valid</em>. They both have the form:</p>
<blockquote>
<p>All <strong>F</strong>s are <strong>G</strong>s. All <strong>G</strong>s are <strong>H</strong>s. Therefore all <strong>F</strong>s are <strong>H</strong>s.</p>
</blockquote>
<p>and any syllogisms with that form are valid. Attending to the shape of the reasoning, “tuning out” concern about whether the premises are <em>true</em> (are <em>all</em> wombats cute? Are <em>all</em> footballers bipeds? — most likely not) and focussing on the form, we see how the premises and conclusions are connected.</p>
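<p>The validity of that form can even be verified by brute force: interpret <strong>F</strong>, <strong>G</strong> and <strong>H</strong> as subsets of a small domain, and check that whenever both premises hold, so does the conclusion. This little Python sketch (my own illustration, not anything from the post) does exactly that:</p>

```python
from itertools import combinations

domain = [0, 1, 2]

def subsets(xs):
    # every subset of xs, of every size
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# In every interpretation where both premises hold, the conclusion holds too:
# "All Fs are Gs; all Gs are Hs; therefore all Fs are Hs" is valid.
for F in subsets(domain):
    for G in subsets(domain):
        for H in subsets(domain):
            if F <= G and G <= H:   # All Fs are Gs; all Gs are Hs
                assert F <= H       # therefore all Fs are Hs
```

A three-element domain is only a spot check, of course; the form is valid for every domain, since the subset relation is transitive.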
<p>To study <em>form</em> or <em>structure</em> is to learn how to attend to one thing and to ignore others, to look for a new level of generality. I love to take the opportunity to stand back, to look at a problem again from a different angle, to reframe it in a different way, to attend to it again, perhaps to see something new, to notice the parallels between one thing and another.</p>
<p>Thinking of the role of <em>abstraction</em> involved in formal logic brings to the fore the aspect of logic that is a <em>design</em> task. Logic is a kind of conceptual engineering. It is always a choice to focus on some features of a problem and to ignore others. Being formal and abstract, logic allows us to stand back and look for structure, to look for patterns — and the result is the delight in recognising a unifying pattern that helps us see something that we didn’t see before.</p>
<p><em>Abstraction</em> is the second of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
The Dialectic (the first of twelve things I love about philosophical logic)
http://consequently.org/news/2017/twelve-things-01-the-dialectic/
Tue, 05 Sep 2017 13:36:03 +1100http://consequently.org/news/2017/twelve-things-01-the-dialectic/<p>One thing I noticed when making my way from mathematics (my undergraduate degree was a B.Sc. in Mathematics at the <a href="http://www.smp.uq.edu.au/mathematics">University of Queensland</a>) to philosophy was the different approach when doing research in the two disciplines. To put it very coarsely, in mathematics, you prove theorems. In philosophy, you argue about things.</p>
<p>The standards for success are very different in philosophy and in mathematics. Witness <a href="https://arxiv.org/abs/1708.03486">Norbert Blum’s recent retraction</a> of his paper which purported to prove that <em>P</em> ≠ <em>NP</em>. While philosophers change their minds about things, I don’t recall anyone going so far as to retract a paper that argued for a position they now reject. That’s just not how philosophers work, nor should they.</p>
<p>One of the joys of working in philosophical logic — especially for someone with a relatively short attention span, like me — is that I get to play on both sides of this street. I spend some time as a technical mathematical logician, playing the theorem-proving game, with all of the satisfaction of knowing that I’ve really <em>proved</em> something solid in its own way: a mathematical <em>result</em>. On the other hand, there’s more to life than theorems, and there’s more to <em>understanding</em> than the making of proofs. I love that my discipline — <em>philosophical</em> logic — gives equal time to the discursive, interpretive, philosophical side of the enterprise. I can spend time writing papers that attempt to give an account of <em>how</em> something works, arguing with others, and developing views about the <em>grounds</em> or the <em>significance</em> of different concepts or techniques. There are no barriers to taking the synoptic view, where conjectures can be explored and where perspectives can clash and collide, without expecting that any option be closed off to inquiry.</p>
<p>The joy in working in philosophical logic is more, though, than having two sides to the coin, the <em>formal</em>/<em>technical</em> and the <em>discursive</em>/<em>interpretive</em>. The delight I find in the discipline is in the <em>dialectic</em> or the <em>interplay</em> between these two aspects of the craft. This delight comes when some technical result can shed light on a philosophical conundrum, or when a different interpretive perspective on a problem uncovers the way to a new approach to prove a theorem. A recent example dear to my heart on the interplay between the discursive and the formal is how Mark Lance and Heath White’s work on the two forms of supposition in their “<a href="https://www.philosophersimprint.org/007004/">Stereoscopic Vision</a>” motivated and inspired my work on a <a href="http://consequently.org/writing/cfss2dml/">cut-free hypersequent calculus for two-dimensional modal logic</a>, which, in turn, has philosophical significance of its own on how we might acquire modal concepts and coordinate on their use, even when we disagree on what might be necessary or <em>a priori</em> knowable.</p>
<p>I find myself in a field where the best work involves formal results addressing issues that have philosophical significance, where discursive and the technical aspects play important, interlocking roles. If you formally model a theory, you nail your colours to the mast. You have to be specific and precise about what is being proposed. This (when done well) keeps you honest. It’s harder to hide or to fudge when you’re specific and precise about your theory’s commitments. On the other hand, the philosophical imperative — to understand, to probe the foundations, and to take the synoptic view — means that you don’t treat the formal theory as something to be explored for its own sake. Instead, you are always able to take the critical perspective, to ask whether <em>this</em> is the best model for the phenomenon in question, and to push beyond.</p>
<p>It is the <em>dialectic</em> between the formal and the discursive; the <em>dance</em> between the technical and the critical, that makes philosophical logic such a joy.</p>
<p>The <em>Dialectic</em> is the first of <a href="http://consequently.org/news/2017/twelve-things-i-love/">twelve things that I love about philosophical logic</a>.</p>
Twelve things I love about philosophical logic
http://consequently.org/news/2017/twelve-things-i-love/
Mon, 04 Sep 2017 20:21:54 +1100http://consequently.org/news/2017/twelve-things-i-love/<p>Over this last weekend, I spent some time tidying out one of the electronic “junk drawers” of my writing life, a folder full of thousands upon thousands of little scraps of text: minutes of meetings, recipes I’ve saved, little ideas I came across and wanted to keep, lists of places to visit when travelling, and many other kinds of digital flotsam and jetsam I’ve collected over around 20 years of being online, reading and writing.</p>
<p>There was a <em>lot</em> of junk in that big pile of text that I deleted on sight (though there were a few recipes I’m looking forward to trying out in the next little while) but one thing really surprised me. It was a short list, entitled “<em>12 things I love about philosophical logic</em>”. That scrap of writing was about 200 words—the “12 things” are each elaborated with only a sentence or two. I wrote it about five years ago, and I’d totally forgotten about it until coming across it this weekend. Rereading it, the ideas resonated. (My views haven’t shifted <em>that</em> much over five years.) What resonated wasn’t just that I agreed with my earlier self—but that I found the thoughts helpful, and they struck me as the kind of thing that you don’t often hear. Maybe other logicians have thought or said or written things like this, but if they have, I haven’t heard them. It seems to me that we don’t often reflect on the pleasures of our discipline, and we don’t often commit to text much about what it is like to work in our field, or to highlight what it means to us. Reading these words from five years ago clarified some things for me, so it seems to me that it’s at least <em>possible</em> that they might be of some use to others, too. Maybe seeing how things appear from here can help you get some more insight into how things are for <em>you</em>, whether you work in philosophical logic, you work in some other field, or you’re a curious outsider who wants to get some sense of what it is that we philosophical logicians do with our time.</p>
<figure>
<img src="http://consequently.org/images/twelve-things.jpg" alt="Just a part of my junk drawer">
<figcaption>Just a small part of my junk drawer.</figcaption>
</figure>
<p>So, here’s what I’ll do: I’m going to spend some time expanding my “12 things” notes, and I’ll post them at roughly one per day, over the next couple of weeks. Come back <em>tomorrow</em> for the first of the 12 things I love about working in philosophical logic. By the end, the list below will contain links to each entry.</p>
<ol>
<li><a href="http://consequently.org/news/2017/twelve-things-01-the-dialectic/">The Dialectic</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-02-abstraction/">Abstraction</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-03-multiple-realisability/">Multiple Realisability</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-04-interdisciplinarity/">Interdisciplinarity</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-05-recognition/">The Moment of Recognition</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-06-expansion/">Conceptual Expansion</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-07-pragmatics/">Pragmatics</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-08-attention/">Attention</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-09-necessity/">Necessity</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-10-possibility/">Possibility</a></li>
<li><a href="http://consequently.org/news/2017/twelve-things-11-learning-and-teaching/">Learning and Teaching</a></li>
<li>Community</li>
</ol>
<p>Oh, before I forget, I should add a qualification. This list is idiosyncratic and particular in a number of different ways. I don’t expect that what I love is what others who work in philosophical logic love, and neither do I mean to imply that these joys are <em>only</em> to be found when you work in philosophical logic. In giving this list, I don’t mean to universalise to other people’s experience, or to claim any particular distinction for my discipline in comparison to others.</p>
<p>With that said, I’d love to hear back from you, especially if these thoughts spark any reflections of your own.</p>
Writings
http://consequently.org/writing/
Mon, 01 Jan 0001 00:00:00 UTChttp://consequently.org/writing/Two Negations are More than One
http://consequently.org/writing/two-negations/
Fri, 11 Aug 2017 00:00:00 UTChttp://consequently.org/writing/two-negations/<p>In models for paraconsistent logics, the semantic values of sentences and their negations are less tightly connected than in classical logic. In “American Plan” logics for negation, truth and falsity are, to some degree, independent. The truth of \({\mathord\sim}p\) is given by the falsity of \(p\), and the falsity of \({\mathord\sim}p\) is given by the truth of \(p\). Since truth and falsity are only loosely connected, \(p\) and \({\mathord\sim}p\) can both hold, or both fail to hold. In “Australian Plan” logics for negation, negation is treated rather like a modal operator, where the truth of \({\mathord\sim}p\) in a situation amounts to \(p\) failing in <em>certain other situations</em>. Since those situations can be different from this one, \(p\) and \({\mathord\sim}p\) might both hold here, or might both fail here.</p>
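<p>The American Plan treatment of negation for first degree entailment can be rendered in a few lines of code. This is my own illustrative sketch of the four-valued picture described above (the names and representation are mine, not code from the paper): each sentence gets a pair recording whether it is (at least) true and whether it is (at least) false, and negation simply swaps the two coordinates.</p>

```python
# The four FDE values, as (is_true, is_false) pairs: since truth and
# falsity are only loosely connected, a sentence can be both or neither.
values = {'t': (True, False), 'f': (False, True),
          'b': (True, True), 'n': (False, False)}

def neg(v):
    truth, falsity = values[v]
    # ~p is true iff p is false; ~p is false iff p is true
    return next(k for k, tv in values.items() if tv == (falsity, truth))

def holds(v):
    return values[v][0]   # a sentence "holds" when it is at least true

assert holds('b') and holds(neg('b'))          # p and ~p can both hold
assert not holds('n') and not holds(neg('n'))  # and both can fail to hold
```

The two asserted lines are exactly the paraconsistent behaviour the paragraph describes: with the value 'b', a sentence and its negation both hold; with 'n', both fail.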
<p>So much is well known in the semantics for paraconsistent logics, and for first degree entailment and logics like it, it is relatively easy to translate between the American Plan and the Australian Plan. The choice between them seems to be a matter of taste, or of preference for one kind of semantic treatment or another. This paper explores some of the differences between the American Plan and the Australian Plan by exploring the tools they have for modelling a language in which we have <em>two</em> negations.</p>
<p>This paper is dedicated to my friend and mentor, Professor Graham Priest.</p>
Proof Theory and Philosophy
http://consequently.org/writing/ptp/
Sun, 03 Oct 2010 00:00:00 UTChttp://consequently.org/writing/ptp/<p>This is my next book-length writing project. I am writing a book which aims to do these things:</p>
<ol>
<li>Be a useable textbook in philosophical logic, accessible to someone who’s done only an intro course in logic, covering at least some model theory and proof theory of propositional logic, and maybe predicate logic.</li>
<li>Be a user-friendly, pedagogically useful and philosophically motivated presentation of cut-elimination, normalisation and conservative extension, covering both (a) why they’re important to meaning theory and (b) how to actually <em>prove</em> them. (I don’t think there are any books like this available, but I’d be happy to be shown wrong.)</li>
<li>Present the duality between model theory and proof theory in a philosophically illuminating fashion.</li>
<li>Teach formal philosophical logic in a way that is not doctrinaire or logically partisan. That is, I will <em>not</em> argue that classical logic, or intuitionistic logic, or My Favourite Logic is the One True Logic. (Of course, hearing me say this is <a href="http://consequently.org/writing/logical_pluralism/">not a surprise</a>.)</li>
<li>Be <a href="http://consequently.org/news/2004/03/18/publishing_a_book">available for download</a> as well as published by an academic publisher. (At this stage, at least, that is the plan.)</li>
</ol>
<p>The first couple of chapters are now available here: <a href="http://consequently.org/papers/ptp.pdf">pdf with hyperlinks</a>.</p>
Classes
http://consequently.org/class/
Mon, 01 Jan 0001 00:00:00 UTChttp://consequently.org/class/PHIL40013: Uncertainty, Vagueness and Disagreement
http://consequently.org/class/2017/phil40013/
Tue, 25 Jul 2017 00:00:00 UTChttp://consequently.org/class/2017/phil40013/<p><strong><span class="caps">PHIL40013</span>: Uncertainty, Vagueness and Paradox</strong> is a <a href="http://unimelb.edu.au">University of Melbourne</a> honours seminar subject for fourth-year students. Our aim in the Honours program is to introduce students to current work in research in philosophical logic.</p>
<figure>
<img src="http://consequently.org/images/pro-con.jpg" alt="Assertions and denials take a stand…">
<figcaption>Assertions and denials take a stand on something.</figcaption>
</figure>
<p>In 2017, we’re covering the connections between proof theory and philosophy. Here’s the reading list, if you’re interested in following along.</p>
<ol>
<li><em>Introduction and Overview, Background</em></li>
<li><em>Introduction to Inferentialism</em>
<ul>
<li>Robert Brandom, <em>Articulating Reasons: an introduction to inferentialism</em>, Harvard University Press, 2000. Introduction and Chapter 1 “Semantic Inferentialism and Logical Expressivism.”</li>
</ul></li>
<li><em>The Tonk Debate</em>
<ul>
<li>Arthur Prior, “The Runabout Inference-Ticket”, <em>Analysis</em> 21:2 (1960) 38–39.</li>
<li>J. T. Stevenson, “Roundabout the Runabout Inference-Ticket”, <em>Analysis</em> 21:6 (1961) 124–128.</li>
<li>Nuel D. Belnap, “Tonk, Plonk and Plink”, <em>Analysis</em> 22:6 (1962) 130–134.</li>
</ul></li>
<li><em>Natural Deduction and Normalisation</em>
<ul>
<li>Greg Restall, <em>Proof Theory and Philosophy</em> Draft, Chapter 1.</li>
<li>Dag Prawitz, <em>Natural Deduction: A Proof Theoretical Study</em>, Almqvist and Wiksell, 1965. Chapters 1–4.</li>
</ul></li>
<li><em>Harmony and Meaning</em>
<ul>
<li>Dag Prawitz, “On the Idea of a General Proof Theory”, <em>Synthese</em> 27 (1974) 63–77.</li>
<li>Michael Dummett: <em>The Logical Basis of Metaphysics</em>, Harvard University Press, 1991. Chapter 9 “Circularity, Consistency and Harmony”</li>
<li>Gillian Russell, “The Justification of the Basic Laws of Logic,” <em>Journal of Philosophical Logic</em> 44:6 (2015) 793–803.</li>
</ul></li>
<li><em>Sequent Calculus and Cut Elimination</em>
<ul>
<li>Greg Restall, <em>Proof Theory and Philosophy</em> Draft, Chapter 2</li>
<li>Michael Kremer, “Logic and Meaning: The Philosophical Significance of the Sequent Calculus”, <em>Mind</em> 97 (1988), 50–72.</li>
<li>Francesca Poggiolesi, <em>Gentzen Calculi for Modal Propositional Logic</em>, Springer, 2011. Chapter 1 “What Is a Good Sequent Calculus?”</li>
</ul></li>
<li><em>Assertion and Denial</em>
<ul>
<li>P. T. Geach, “Assertion”, <em>The Philosophical Review</em>, 74:4 (1965), 449–465.</li>
<li>Robert Brandom, “Asserting”, <em>Noûs</em>, 17:4 (1983), 637–650.</li>
<li>Huw Price, “Why ‘Not’?”, <em>Mind</em>, 99:394 (1990), 221–238.</li>
<li>Ian Rumfitt, “‘Yes’ and ‘No’”, <em>Mind</em> 109:436 (2000), 781–823.</li>
</ul></li>
<li><em>Multiple Conclusions</em>
<ul>
<li>Greg Restall, “Multiple Conclusions”, pp 189–205 in <em>Logic, Methodology and Philosophy of Science: Proceedings of the Twelfth International Congress</em>, edited by Petr Hájek, Luis Valdés-Villanueva and Dag Westerståhl, KCL Publications, 2005.</li>
<li>Florian Steinberger, “Why Conclusions Should Remain Single,” <em>Journal of Philosophical Logic</em> 40 (2011), 333–355.</li>
</ul></li>
<li><em>Truth Values and Proof Theory</em>
<ul>
<li>Greg Restall, “Truth Values and Proof Theory,” <em>Studia Logica</em>, 92:2 (2009) 241–264.</li>
<li>Dave Ripley, “Bilateralism, Coherence and Warrant,” pp. 307–324 in <em>Act-Based Conceptions of Propositional Content</em>, edited by Friederike Moltmann and Mark Textor, Oxford University Press, 2017.</li>
</ul></li>
<li><em>Beyond Declaratives</em>
<ul>
<li>Nuel Belnap, “Declaratives are Not Enough”, <em>Philosophical Studies</em>, 59 (1990), 1–30.</li>
</ul></li>
</ol>
<p>For further information, contact me. To participate, check <a href="https://handbook.unimelb.edu.au/view/2017/PHIL40013">the handbook</a>.</p>
PHIL20030: Meaning, Possibility and Paradox
http://consequently.org/class/2017/phil20030/
Tue, 25 Jul 2017 00:00:00 UTChttp://consequently.org/class/2017/phil20030/
<p><strong><span class="caps">PHIL20030</span>: Meaning, Possibility and Paradox</strong> is a <a href="http://unimelb.edu.au">University of Melbourne</a> undergraduate subject. The idea that the meaning of a sentence depends on the meanings of its parts is fundamental to the way we understand logic, language and the mind. In this subject, we look at the different ways that this idea has been applied in logic throughout the 20th Century and into the present day.</p>
<p>In the first part of the subject, our focus is on the concepts of necessity and possibility, and the way that ‘possible worlds semantics’ has been used in theories of meaning. We will cover the logic of necessity and possibility (modal logic), times (temporal logic), conditionality and dependence (counterfactuals), and the notions of analyticity and apriority so important to much of philosophy.</p>
<p>In the second part of the subject, we examine closely the assumption that every statement we make is either true or false but not both. We will look at the paradoxes of truth (like the so-called ‘liar paradox’) and vagueness (the ‘sorites paradox’), and we will investigate different attempts at resolving these paradoxes, either by going beyond our traditional views of truth (using ‘many valued logics’) or by defending the traditional perspective.</p>
<p>The subject serves as an introduction to ways that logic is applied in the study of language, epistemology and metaphysics, so it is useful to those who already know some philosophy and would like to see how logic relates to those issues. It is also useful to those who already know some logic and would like to learn new logical techniques and see how these techniques can be applied.</p>
<p>The subject is offered to University of Melbourne undergraduate students (for Arts students as a part of the Philosophy major, for non-Arts students, as a breadth subject). Details for enrolment are <a href="https://handbook.unimelb.edu.au/view/2017/PHIL20030">here</a>.</p>
<figure>
<img src="http://consequently.org/images/peter-rozsa-small.png" alt="Rózsa Péter">
<figcaption>The writing down of a formula is an expression of our joy that we can answer all these questions by means of one argument. — Rózsa Péter, Playing with Infinity</figcaption>
</figure>
<p>I make use of video lectures I have made <a href="http://vimeo.com/album/2470375">freely available on Vimeo</a>. If you’re interested in this sort of thing, I hope they’re useful. Of course, I appreciate any constructive feedback you might have.</p>
<h3 id="outline">Outline</h3>
<p>The course is divided into four major sections and a short prelude. Here is a list of all of the videos, in case you’d like to follow along with the content.</p>
<h4 id="classical-logic">Classical Logic</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/71195118">On Logic and Philosophy</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71196826">Classical Logic—Models</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71200032">Classical Logic—Tree Proofs</a></li>
</ul>
<h4 id="meaning-sense-reference">Meaning, Sense, Reference</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/71206884">Reference and Compositionality</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71226471">Sense and Reference</a></li>
</ul>
<h4 id="basic-modal-logic">Basic Modal Logic</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/71556216">Introducing Possibility and Necessity</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71558401">Models for Basic Modal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71558696">Tree Proofs for Basic Modal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71560394">Soundness and Completeness for Basic Modal Logic</a></li>
</ul>
<h4 id="normal-modal-logics">Normal Modal Logics</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/72135540">What Are Possible Worlds?</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72137443">Conditions on Accessibility Relations</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72137856">Equivalence Relations, Universal Relations and S5</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72139085">Tree Proofs for Normal Modal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72140275">Applying Modal Logics</a></li>
</ul>
<h4 id="double-indexing">Double Indexing</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/72140275">Temporal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72143616">Actuality and the Present</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72266887">Two Dimensional Modal Logic</a></li>
</ul>
<h4 id="conditionality">Conditionality</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/74494229">Strict Conditionals</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74498276"><em>Ceteris Paribus</em> Conditionals</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74504639">Similarity</a></li>
</ul>
<h4 id="three-values">Three Values</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/74628150">More than Two Truth Values</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74636384">K3</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74680756">Ł3</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74680954">LP</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74682689">RM3</a></li>
</ul>
<h4 id="four-values">Four Values</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/74685077">FDE: Relational Evaluations</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74685986">FDE: Tree Proofs</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74695340">FDE: Routley Evaluations</a></li>
</ul>
<h4 id="paradoxes">Paradoxes</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/76045884">Truth and the Liar Paradox</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76049193">Fixed Point Construction</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76055233">Curry’s Paradox</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76057722">The Sorites Paradox</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76061452">Fuzzy Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76066245">Supervaluationism</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76070423">Epistemicism</a></li>
</ul>
<h4 id="what-to-do-with-so-many-logical-systems">What to do with so many logical systems</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/76070953">Logical Monism and Pluralism</a></li>
</ul>
Conditionals in Closed Set Logic
http://consequently.org/news/2017/closed-set-logic/
Sat, 22 Jul 2017 14:23:43 +1100http://consequently.org/news/2017/closed-set-logic/<p>Over the last couple of days on Twitter, I was <a href="https://twitter.com/sigfpe/status/887754687318966272">involved in a thread</a>, kicked off by <a href="https://twitter.com/sigfpe">Dan Piponi</a>, discussing closed set logic—the natural dual of intuitionistic logic in which the law of the excluded middle holds but the law of non-contradiction fails, and which has models in the closed sets of any topological space, as opposed to the open sets, which model intuitionistic logic.</p>
<p>\(\def\ydash{\succ}\)This logic also has a nice sequent calculus in which sequents have one premise (or zero) and multiple conclusions. In the thread I made the claim that this is a natural and beautiful sequent calculus (it is!) but that the structure of the sequents means that the logic doesn’t have a natural conditional. The <em>dual</em> to the conditional (subtraction) can be defined, for which \(A\ydash B\lor C\) if and only if \(A-B\ydash C\). But the traditional conditional rules don’t work so well.</p>
<p>I realised, when I thought about it a bit more, that this is something I’ve simply believed for the last 20 years or so but have never seen written down, so now is as good a time, and here as good a place, as any to explain what I mean.</p>
<p>Consider the conditional rules in the classical sequent calculus. They look something like this (give or take variations in the presentation, all equivalent given the classical structural rules):</p>
<figure>
<img src="http://consequently.org/images/classical-conditional-rules.png" alt="Classical Sequent Rules for the Conditional">
<figcaption>Classical sequent rules for the conditional.</figcaption>
</figure>
<p>If we restrict these rules to multiple conclusion <em>single premise</em> sequents, we get rules which look like these:</p>
<figure>
<img src="http://consequently.org/images/closed-set-conditional-rules.png" alt="closed set logic Sequent Rules for the Conditional">
<figcaption>Closed set logic sequent rules for the conditional.</figcaption>
</figure>
<p>You can see an immediate issue with the [\(\supset\)<em>R</em>*] rule. The concluding sequent is \(\ydash A\supset B,Y\) which tells you when a conditional (with alternate conclusion cases) is derivable from <em>no</em> premises. It does not tell you anything else about when a conditional (with alternate conclusion cases) is derivable from another premise. It does not tell us what to do if we want to derive \(C\ydash A\supset B,Y\) in any case where the \(C\) is doing some logical work. The best guidance we get is to ignore the \(C\) and to hope that we can derive \(\ydash A\supset B,Y\). (In classical logic, that’s fine, because we could stash the \(C\) premise away as an alternate conclusion \(\neg C\) among the \(Y\)s in the right hand side, but in an asymmetric sequent calculus like this, that’s not necessarily within our powers.)</p>
<p>The fact that the rules seem too weak to constrain arbitrary sequents of the form \(C\ydash A\supset B,Y\) gives us a hint that these rules might not be strong enough to actually <em>characterise</em> or <em>uniquely define</em> the connective \(\supset\). And that hint bears out when you attempt to derive uniqueness. Here’s the issue. Imagine that you and I both use rules like these to define a conditional connective. Yours is \(\supset_1\) and mine is \(\supset_2\). Try to derive the sequent \(p\supset_1 q\ydash p\supset_2 q\) and you’ll see that you get stuck:</p>
<figure>
<img src="http://consequently.org/images/attempted-identity-derivation.png" alt="Attempted derivation of an identity sequent">
<figcaption>Attempted derivation of an identity sequent.</figcaption>
</figure>
<p>You get stuck at just the point where we’d like to know when \(p\supset_2 q\) follows from other premises, and our rules give us no guidance at all. So, it looks as if our rules are not uniquely characterising in this sequent calculus.</p>
<p>That’s just a suspicion. It’d be nice to have a demonstration of this fact—an explanation of how it is that these rules could be interpreted in different, incompatible ways.</p>
<p>Here’s <em>one</em> way to show that the single premise / multiple conclusion conditional rules do not define a unique connective in closed set logic. We’ll use very simple algebras that are known to model closed set logic. Finite total orders. To be concrete, we’ll interpret propositions as taking values from some given subset of the interval \([0,1]\), at least including \(0\) and \(1\), so each formula \(A\) will take some value \(a\) in that set of values, and we’ll interpret a sequent \(A\ydash B_1,\ldots,B_n\) as saying that \(a\le \max(b_1,\ldots,b_n)\), which amounts to saying that \(a\le b_i\) for some \(i\). And similarly, \(\ydash B_1,\ldots,B_n\) amounts to \(1\le b_i\) for some \(i\). A sequent holds if the value of the left hand formula (or \(1\), if the formula is absent) is less than or equal to the value of one of the right hand formulas. (If the language has conjunction and disjunction, you can interpret them as \(\min\) and \(\max\) respectively, and the top and bottom values are \(0\) and \(1\).)</p>
<p>(What has this to do with closed set logic? An \(n+1\) valued algebra corresponds to the closed sets in the topological space of an \(n\)-element totally ordered set where the closure of a set is its upwards closure in the ordering. \(0\) is the empty set and \(1\) is the whole space.)</p>
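As a sanity check (again a sketch of my own, not part of the post), these order models can be evaluated directly. Negation here is an assumption of the sketch: the closed-set (co-Heyting) negation, top “minus” \(a\), which in a chain sends \(1\) to \(0\) and everything below \(1\) to \(1\). Excluded middle then holds everywhere, while non-contradiction fails at intermediate values, as closed set logic requires.

```python
# A minimal sketch of the order models described above: each formula takes a
# value in a finite chain inside [0, 1]; a sequent A |- B1,...,Bn holds when
# the value of A (or 1, with no premise) is <= the max of the Bi.
# Conjunction is min, disjunction is max. The negation is the closed-set
# (co-Heyting) one, "top minus a" -- an assumption of this sketch.
from fractions import Fraction

VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]
ONE = Fraction(1)

def holds(premise, conclusions):
    left = ONE if premise is None else premise
    right = max(conclusions, default=Fraction(0))
    return left <= right

def neg(a):
    return Fraction(0) if a == ONE else ONE

# Excluded middle |- A v ~A holds at every value...
lem = all(holds(None, [max(a, neg(a))]) for a in VALUES)
# ...but non-contradiction fails: A & ~A |- (empty) fails at a = 1/2.
half = Fraction(1, 2)
explosion = holds(min(half, neg(half)), [])
```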
<p>Now, look at what the sequent rules for a conditional mean in this setting. Collapsing the finite set \(Y\) in the rules to a single formula \(C\) for simplicity’s sake (without any loss of generality), [\(\supset\)<em>R</em>*] tells us that if \(a\le\max(b,c)\) then \(1\le\max(a\supset b,c)\). That is, if \(a\le b\) or \(a\le c\) then either \(a\supset b=1\) or \(c=1\). This is to hold for all values for \(a\), \(b\) and \(c\). A little bit of algebraic manipulation shows that this is equivalent to saying that when \(a\le b\) or \(a\lt 1\) then \(a\supset b=1\).</p>
<p>And [\(\supset\)<em>L</em>*] tells us that if \(1\le \max(a,c)\) and \(b\le c\) then \(a\supset b\le c\) for all \(a\), \(b\) and \(c\). That is, if either \(1\le a\) or \(1\le c\), and \(b\le c\), then \(a\supset b\le c\). Some more manipulation shows that this holds if and only if \(1\supset b\le b\).</p>
<p>So, [\(\supset\)<em>L</em>*] and [\(\supset\)<em>R</em>*] are satisfied in our order models when we have</p>
<ul>
<li>\(1\supset b\le b\)</li>
<li>If \(a\le b\) or \(a\lt 1\) then \(a\supset b=1\)</li>
</ul>
<p>And this is enough to fix <em>many</em> of the values of \(a\supset b\), but it is nowhere near enough to fix all of them. These conditions tell us that \(a\supset b=1\) whenever \(a\lt 1\), and also when \(a=b=1\). But the values for \(1\supset b\) are less constrained. For example, these rules are satisfied by setting \(1\supset b =b\) for all \(b\). And they’re also satisfied by setting \(1\supset 1 = 1\) and \(1\supset b=0\) when \(b\lt 1\). Provided that there’s at least one extra value between \(0\) and \(1\) in the ordering, that gives us wiggle room.</p>
<figure>
<img src="http://consequently.org/images/two-conditional-tables.png" alt="Two truth tables for conditionals">
<figcaption>Two truth tables for conditionals.</figcaption>
</figure>
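A brute-force check (once more a sketch of my own, over the three-element chain \(\{0,\frac{1}{2},1\}\)) confirms that <em>both</em> candidate tables satisfy the restricted rules, so the rules cannot pin down a single conditional:

```python
# Brute-force check, over the chain {0, 1/2, 1}, that both candidate tables
# for the conditional satisfy the restricted rules [>R*] and [>L*]
# (with the side formulas Y collapsed to a single formula c).
from fractions import Fraction

VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]
ONE = Fraction(1)

def imp1(a, b):
    # First table: a > b = 1 when a < 1; 1 > b = b.
    return b if a == ONE else ONE

def imp2(a, b):
    # Second table: a > b = 1 when a < 1; 1 > 1 = 1, 1 > b = 0 when b < 1.
    if a < ONE:
        return ONE
    return ONE if b == ONE else Fraction(0)

def satisfies_rules(imp):
    for a in VALUES:
        for b in VALUES:
            for c in VALUES:
                # [>R*]: if a <= max(b, c) then 1 <= max(a > b, c).
                if a <= max(b, c) and not ONE <= max(imp(a, b), c):
                    return False
                # [>L*]: if 1 <= max(a, c) and b <= c then a > b <= c.
                if ONE <= max(a, c) and b <= c and not imp(a, b) <= c:
                    return False
    return True

both_ok = satisfies_rules(imp1) and satisfies_rules(imp2)
# The identity sequent p >1 q |- p >2 q fails at p = 1, q = 1/2.
counterexample = imp1(ONE, Fraction(1, 2)) <= imp2(ONE, Fraction(1, 2))
```

Since both tables pass, and \(1\supset_1\frac{1}{2}=\frac{1}{2}\) while \(1\supset_2\frac{1}{2}=0\), the sequent \(p\supset_1 q\ydash p\supset_2 q\) has a countermodel.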
<p>No wonder we couldn’t show that \(p\supset_1 q\) entails \(p\supset_2 q\)! In this case (when \(p\) takes the value \(1\) and \(q\) takes the value \(\frac{1}{2}\)) that inference takes us from \(\frac{1}{2}\) to \(0\). That sequent isn’t valid.</p>
<p><em>That’s</em> what I meant when I said that the conditional rules were not so good in closed set logic. The rules tell us something about conditionals, but they are not specific or strong enough to characterise a single concept.</p>
Typesetting Flow Graphs with tikz
http://consequently.org/news/2017/typesetting-flow-graphs/
Tue, 11 Jul 2017 11:17:18 +1100http://consequently.org/news/2017/typesetting-flow-graphs/<p>In a few <a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">recent</a> <a href="http://consequently.org/writing/cfss2dml/">papers</a> <a href="http://consequently.org/presentation/2017/a-category-of-classical-proofs-tacl/">and</a> <a href="http://consequently.org/presentation/2017/proof-identity-invariants-and-hyperintensionality/">talks</a>, I’ve been using <em>flow graphs</em> to display the flow of information in proofs. These are the kinds of things that are easy to draw, but they’re not so straightforward to typeset.</p>
<p>Here’s an example:</p>
<figure>
<img src="http://consequently.org/images/flowgraph.jpg" alt="a flow graph on a natural deduction proof">
<figcaption>A flow graph on a natural deduction proof.</figcaption>
</figure>
<p>They’re not easy to typeset, because they’re an overlay on top of a proof. The arrows indicate the flow of information inside a proof.</p>
<p>As an aside, I love the flow graph on this natural deduction proof because the “action at a distance” nature of the disjunction elimination step is called out by those two sweeping blue and green arcs—the \(q\) and \(r\) assumptions are discharged by the appeal to the disjunctive conclusion \(q\lor r\). These are the only non-local informational connections in the proof. Each other arc in the flow graph is local, from a premise to a conclusion.</p>
<p>Now, typesetting these things is not straightforward, because the locations of the arrows are defined by typesetting the underlying proof (here, the black text) and the coloured arcs are typeset on top. How do you do that? And how do you do that in an algorithmic and structural way, focussing on the structure and not hand positioning each of the lines?</p>
<p>Thankfully, the tools to typeset flow graphs are readily available, at least if you use <a href="http://tug.org">LaTeX</a>. I’ve written up a little document explaining how to do this, and the document and source are now <a href="https://github.com/consequently/flowgraphs">available on GitHub</a> for you to use as you see fit. If you’ve got any questions, feedback or recommendations for how to extend the technique, please don’t hesitate to get in touch.</p>
<p>I hope it’s helpful.</p>
<ul>
<li><a href="https://github.com/consequently/flowgraphs">https://github.com/consequently/flowgraphs</a></li>
</ul>
A Concrete Category of Classical Proofs
http://consequently.org/presentation/2017/a-category-of-classical-proofs-tacl/
Thu, 22 Jun 2017 00:00:00 UTChttp://consequently.org/presentation/2017/a-category-of-classical-proofs-tacl/<p><em>Abstract</em>: I show that the cut-free proof terms defined in my paper “<a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">Proof Terms for Classical Derivations</a>” form a well-behaved category. I show that the category is not Cartesian—and that we’d be wrong to expect it to be. (It has no products or coproducts, nor any initial or final objects. Nonetheless, it is quite well behaved.) I show that the term category is star autonomous (so it fits well within the family of categories for multiplicative linear logic), with internal monoids and comonoids taking care of weakening and contraction. The category is enriched in the category of semilattices, as proofs are closed under the blend rule (also called <em>mix</em> in the literature).</p>
<p>This is an invited address, for <a href="http://www.cs.cas.cz/tacl2017/">TACL 2017</a>, in Prague.</p>
<ul>
<li>The <a href="http://consequently.org/slides/a-category-of-classical-proofs-tacl.pdf">slides are available here</a>.</li>
</ul>
A Category of Classical Proofs
http://consequently.org/presentation/2017/a-category-of-classical-proofs-logicmelb/
Thu, 18 May 2017 00:00:00 UTChttp://consequently.org/presentation/2017/a-category-of-classical-proofs-logicmelb/<p><em>Abstract</em>: I show that the cut-free proof terms defined in my paper “<a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">Proof Terms for Classical Derivations</a>” form a well-behaved category. The talk is intended to be accessible enough for those who don’t know any category theory to follow along. I show that the category is not Cartesian – and that we’d be wrong to expect it to be. It has no products or coproducts, nor any initial or final objects. Nonetheless, it is quite well behaved.</p>
<p>I show that the term category is <em>star autonomous</em> (so it fits well within the family of categories for multiplicative linear logic), with internal <em>monoids</em> and <em>comonoids</em> taking care of weakening and contraction. The category is enriched in the category of semilattices, as proofs are closed under the <em>blend</em> rule (also called “mix” in the literature).</p>
<p>This is a talk presented at the <a href="http://blogs.unimelb.edu.au/logic/logic-seminar/">Melbourne Logic Seminar</a>.</p>
<ul>
<li>The <a href="http://consequently.org/slides/a-category-of-classical-proofs-logicmelb.pdf">slides are available here</a>.</li>
</ul>
Proof Identity, Invariants and Hyperintensionality
http://consequently.org/presentation/2017/proof-identity-invariants-and-hyperintensionality/
Thu, 23 Feb 2017 00:00:00 UTChttp://consequently.org/presentation/2017/proof-identity-invariants-and-hyperintensionality/<p><em>Abstract</em>: This talk is a comparison of how different approaches to hyperintensionality, aboutness and subject matter treat (classically) logically equivalent statements. I compare and contrast two different notions of subject matter that might be thought to be representational or truth first – <em><a href="https://www.amazon.com/Aboutness-Carl-G-Hempel-Lecture/dp/0691144958/consequentlyorg">Aboutness</a></em> (Princeton University Press, 2014), and truthmakers conceived of as situations, as discussed in my “<a href="http://consequently.org/writing/ten/">Truthmakers, Entailment and Necessity</a>.” I contrast this with the kind of inferentialist account of hyperintensionality arising out of the <em>proof invariants</em> I have explored <a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">in recent work</a>.</p>
<p>This is a talk presented at the <a href="http://projects.illc.uva.nl/conceivability/Events/event/38/Hyperintensionality-proof-theory-and-semantics">Hyperintensionality Afternoon</a>, held by Francesco Berto’s project on the <a href="http://projects.illc.uva.nl/conceivability/">Logic of Conceivability</a>.</p>
<ul>
<li>The <a href="http://consequently.org/slides/proof-identity-invariants-and-hyperintensionality.pdf">slides are available here</a>.</li>
</ul>
Proof Terms for Classical Derivations
http://consequently.org/presentation/2017/proof-terms-invariants-talk-amsterdam/
Thu, 23 Feb 2017 00:00:00 UTChttp://consequently.org/presentation/2017/proof-terms-invariants-talk-amsterdam/<p><em>Abstract</em>: I give an account of proof terms for derivations in a sequent calculus for classical propositional logic. The term for a derivation \(\delta\) of a sequent \(\Sigma \succ\Delta\) encodes how the premises \(\Sigma\) and conclusions \(\Delta\) are related in \(\delta\). This encoding is many-to-one in the sense that different derivations can have the same proof term, since different derivations may be different ways of representing the same underlying connection between premises and conclusions. However, not all proof terms for a sequent \(\Sigma\succ\Delta\) are the same. There may be different ways to connect those premises and conclusions.</p>
<p>Proof terms can be simplified in a process corresponding to the elimination of cut inferences in sequent derivations. However, unlike cut elimination in the sequent calculus, each proof term has a unique normal form (from which all cuts have been eliminated) and it is straightforward to show that term reduction is strongly normalising—every reduction process terminates in that unique normal form. Furthermore, proof terms are invariants for sequent derivations in a strong sense—two derivations \(\delta_1\) and \(\delta_2\) have the same proof term if and only if some permutation of derivation steps sends \(\delta_1\) to \(\delta_2\) (given a relatively natural class of permutations of derivations in the sequent calculus). Since not every derivation of a sequent can be permuted into every other derivation of that sequent, proof terms provide a non-trivial account of the identity of proofs, independent of the syntactic representation of those proofs.</p>
<p>This is a talk presented at the <a href="https://www.illc.uva.nl/lgc/seminar/2017/02/lira-session-greg-restall/">LIRa Seminar</a>, at the University of Amsterdam.</p>
<ul>
<li>The <a href="http://consequently.org/slides/proof-terms-invariants-talk-amsterdam.pdf">slides are available here</a>.</li>
</ul>
Logical Pluralism: Meaning, Rules and Counterexamples
http://consequently.org/presentation/2017/logical-pluralism-meaning-counterexamples/
Wed, 22 Feb 2017 00:00:00 UTChttp://consequently.org/presentation/2017/logical-pluralism-meaning-counterexamples/<p><em>Abstract</em>: I attempt to give a <em>pluralist</em> and <em>syntax-independent</em> account of classical and constructive <em>proof</em>, grounded in univocal rules for evaluating assertions and denials for judgments featuring the logical connectives, interpretable as governing warrants <em>for</em> and <em>against</em> claims, and which results in an interpretation of classical and constructive counterexamples to invalid arguments.</p>
<p>This is a talk presented at the Pluralisms Workshop, hosted at the University of Bonn, March 2–4, 2017.</p>
<ul>
<li>The <a href="http://consequently.org/slides/logical-pluralism-meaning-counterexamples.pdf">slides are available here</a>.</li>
</ul>
Fixed Point Models for Theories of Properties and Classes
http://consequently.org/writing/fixed-point-models/
Mon, 02 May 2016 00:00:00 UTChttp://consequently.org/writing/fixed-point-models/<p>There is a vibrant (but minority) community among philosophical logicians seeking to resolve the paradoxes of classes, properties and truth by way of adopting some non-classical logic in which trivialising paradoxical arguments are not valid. There is also a long tradition in theoretical computer science, going back to Dana Scott’s fixed point model construction for the untyped lambda-calculus, of models allowing for fixed points. In this paper, I will bring these traditions closer together, to show how these model constructions can shed light on what we could hope for in a non-trivial model of a theory for classes, properties or truth featuring fixed points.</p>
UNIB10002: Logic, Language and Information
http://consequently.org/class/2017/unib10002/
Thu, 23 Feb 2017 00:00:00 UTChttp://consequently.org/class/2017/unib10002/<p><strong><span class="caps">UNIB10002</span>: Logic, Language and Information</strong> is a <a href="http://unimelb.edu.au">University of Melbourne</a> undergraduate breadth subject, introducing logic and its applications to students from a broad range of disciplines in the Arts, Sciences and Engineering. I coordinate this subject with my colleague Dr. Shawn Standefer, with help from Prof. Lesley Stirling (Linguistics), Dr. Peter Schachte (Computer Science) and Dr. Daniel Murfet (Mathematics).</p>
<p>The subject is taught to University of Melbourne undergraduate students. Details for enrolment are <a href="https://handbook.unimelb.edu.au/view/2017/UNIB10002">here</a>. We teach this in a ‘flipped classroom’ model, using resources from our Coursera subjects <a href="http://consequently.org/class/2015/logic1_coursera">Logic 1</a> and <a href="http://consequently.org/class/2015/logic2_coursera">Logic 2</a>.</p>
PHIL30043: The Power and Limits of Logic
http://consequently.org/class/2017/phil30043/
Thu, 23 Feb 2017 00:00:00 UTChttp://consequently.org/class/2017/phil30043/
<p><strong><span class="caps">PHIL30043</span>: The Power and Limits of Logic</strong> is a <a href="https://handbook.unimelb.edu.au/view/2017/PHIL30043">University of Melbourne undergraduate subject</a>. It covers the metatheory of classical first order predicate logic, beginning at the <em>Soundness</em> and <em>Completeness</em> Theorems (proved not once but <em>twice</em>, first for a tableaux proof system for predicate logic, then a Hilbert proof system), through the <em>Deduction Theorem</em>, <em>Compactness</em>, <em>Cantor’s Theorem</em>, the <em>Downward Löwenheim–Skolem Theorem</em>, <em>Recursive Functions</em>, <em>Register Machines</em>, <em>Representability</em> and ending up at <em>Gödel’s Incompleteness Theorems</em> and <em>Löb’s Theorem</em>.</p>
<figure>
<img src="http://consequently.org/images/godel.jpg" alt="Kurt Gödel, seated">
<figcaption>Kurt Gödel, seated</figcaption>
</figure>
<p>The subject is taught to University of Melbourne undergraduate students (for Arts students as a part of the Philosophy major, for non-Arts students, as a breadth subject). Details for enrolment are <a href="https://handbook.unimelb.edu.au/view/2016/PHIL30043">here</a>. I make use of video lectures I have made <a href="http://vimeo.com/album/2262409">freely available on Vimeo</a>.</p>
<h3 id="outline">Outline</h3>
<p>The course is divided into four major sections and a short prelude. Here is a list of all of the videos, in case you’d like to follow along with the content.</p>
<h4 id="prelude">Prelude</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/59401942">Logical Equivalence</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59403292">Disjunctive Normal Form</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59403535">Why DNF Works</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59463569">Prenex Normal Form</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59466141">Models for Predicate Logic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59880539">Trees for Predicate Logic</a></li>
</ul>
<h4 id="completeness">Completeness</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/59883806">Introducing Soundness and Completeness</a></li>
<li><a href="http://vimeo.com/album/2262409/video/60249309">Soundness for Tree Proofs</a></li>
<li><a href="http://vimeo.com/album/2262409/video/60250515">Completeness for Tree Proofs</a></li>
<li><a href="http://vimeo.com/album/2262409/video/61677028">Hilbert Proofs for Propositional Logic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/61685762">Conditional Proof</a></li>
<li><a href="http://vimeo.com/album/2262409/video/62221512">Hilbert Proofs for Predicate Logic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/103720089">Theories</a></li>
<li><a href="http://vimeo.com/album/2262409/video/103757399">Soundness and Completeness for Hilbert Proofs for Predicate Logic</a></li>
</ul>
<h4 id="compactness">Compactness</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/63454250">Counting Sets</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63454732">Diagonalisation</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63454732">Compactness</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63455121">Non-Standard Models</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63462354">Inexpressibility of Finitude</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63462519">Downward Löwenheim–Skolem Theorem</a></li>
</ul>
<h4 id="computability">Computability</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/64162062">Functions</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64167354">Register Machines</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64207986">Recursive Functions</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64435763">Register Machine computable functions are Recursive</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64604717">The Uncomputable</a></li>
</ul>
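<p>For a concrete feel for the register machine model covered in this section, here is a small simulator. This is my own sketch of one standard instruction set (increment, decrement-or-branch, halt), not necessarily the exact formulation used in the lectures; the <code>add</code> program is a hypothetical example.</p>

```python
# A minimal register machine simulator: a sketch of one standard
# presentation (increment / decrement-or-branch / halt), not
# necessarily the exact formulation used in the lectures.

def run(program, registers):
    """Run a register machine program.

    program: list of instructions, each one of
      ("inc", r)        increment register r, go to next instruction
      ("decb", r, j)    if register r > 0, decrement it and continue;
                        otherwise jump to instruction j
      ("halt",)
    registers: dict mapping register names to natural numbers.
    """
    regs = dict(registers)
    pc = 0
    while program[pc][0] != "halt":
        instr = program[pc]
        if instr[0] == "inc":
            regs[instr[1]] = regs.get(instr[1], 0) + 1
            pc += 1
        else:  # "decb"
            _, r, target = instr
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                pc += 1
            else:
                pc = target
    return regs

# Addition: empty register "b" into register "a", one step at a time.
add = [
    ("decb", "b", 3),   # 0: if b > 0, decrement it and fall through
    ("inc", "a"),       # 1: increment a
    ("decb", "z", 0),   # 2: z is never incremented, so this always jumps to 0
    ("halt",),          # 3
]

print(run(add, {"a": 2, "b": 3}))  # {'a': 5, 'b': 0}
```

The point pressed in the later lectures is that machines of this very simple shape compute exactly the recursive functions.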
<h4 id="undecidability-and-incompleteness">Undecidability and Incompleteness</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/65382456">Deductively Defined Theories</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65392670">The Finite Model Property</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65393543">Completeness</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65440901">Introducing Robinson’s Arithmetic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65442289">Induction and Peano Arithmetic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65443650">Representing Functions and Sets</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65483655">Gödel Numbering and Diagonalisation</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65497886">Q (and any consistent extension of Q) is undecidable, and incomplete if it’s deductively defined</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65498016">First Order Predicate Logic is Undecidable</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65501745">True Arithmetic is not Deductively Defined</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65505372">If Con(PA) then PA doesn’t prove Con(PA)</a></li>
</ul>
First Degree Entailment, Symmetry and Paradox
http://consequently.org/writing/fde-symmetry-paradox/
Fri, 14 Oct 2016 00:00:00 UTChttp://consequently.org/writing/fde-symmetry-paradox/<p>Here is a puzzle, which I learned from Terence Parsons in his paper “True Contradictions”. First Degree Entailment (FDE) is a logic which allows for truth value gaps as well as truth value gluts. If you are agnostic between assigning paradoxical sentences gaps and gluts (and there seems to be no very good reason to prefer gaps over gluts or gluts over gaps if you are happy with FDE), then this looks no different, in effect, from assigning them a gap value. After all, on both views you end up with a theory that doesn’t commit you to the paradoxical sentence or its negation. How is the FDE theory any different from the theory with gaps alone?</p>
<p>In this paper, I will present a clear answer to this puzzle—an answer that explains how being agnostic between gaps and gluts is a genuinely different position from admitting gaps alone, by using the formal notion of a bi-theory, and showing that while such positions might agree on what is to be accepted, they differ on what is to be rejected.</p>
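<p>To make the four values vivid, here is a quick sketch (my own illustration, not code from the paper) of the usual Dunn-style semantics for FDE, on which the value of a sentence is a subset of {true, false}:</p>

```python
# FDE values as subsets of {"T", "F"}: a sketch of the usual
# Dunn-style semantics (my illustration, not code from the paper).
T_ONLY = frozenset({"T"})          # true only
F_ONLY = frozenset({"F"})          # false only
GLUT   = frozenset({"T", "F"})     # both: a truth-value glut
GAP    = frozenset()               # neither: a truth-value gap

def neg(v):
    """Negation swaps truth and falsity; gaps and gluts are fixed points."""
    out = set()
    if "F" in v: out.add("T")
    if "T" in v: out.add("F")
    return frozenset(out)

def conj(v, w):
    """A conjunction is true iff both conjuncts are; false iff either is."""
    out = set()
    if "T" in v and "T" in w: out.add("T")
    if "F" in v or "F" in w: out.add("F")
    return frozenset(out)

def accepted(v):   # designated: the sentence is (at least) true
    return "T" in v

def rejected(v):   # crudely: the sentence fails to be true
    return "T" not in v

for label, v in [("gap", GAP), ("glut", GLUT)]:
    print(label, "p accepted:", accepted(v),
          "~p accepted:", accepted(neg(v)), "p rejected:", rejected(v))
```

On this toy picture the gap and glut assignments agree in not settling \(p\) one way rather than the other, but they come apart just where the paper locates the difference: modelling rejection (crudely) as failure to be true, the gap-theorist rejects \(p\) while the glut-theorist accepts both \(p\) and its negation.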
With Gratitude to Raymond Smullyan
http://consequently.org/news/2017/with-gratitude-to-smullyan/
Mon, 13 Feb 2017 22:12:50 AEDThttp://consequently.org/news/2017/with-gratitude-to-smullyan/<p>While I was busy writing my most recent paper, “<a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">Proof Terms for Classical Derivations</a>”, I heard that <a href="https://www.nytimes.com/2017/02/11/us/raymond-smullyan-dead-puzzle-creator.html?smid=tw-share">Raymond Smullyan had died at the age of 97</a>. I <a href="https://t.co/g5e54e0eo6">posted a tweet</a> with a photo of a page from the draft of the paper I was writing at the time, expressing loss at hearing of his death and gratitude for his life.</p>
<p>There are many reasons to love Professor Smullyan. I learned combinatory logic from his delightful puzzle book <em><a href="https://www.amazon.com/Mock-Mockingbird-Raymond-Smullyan/dp/0192801422/consequentlyorg">To Mock a Mockingbird</a></em>, and he was famous for many more puzzle books like that. He was not only bright and sharp, he was also <a href="https://www.amazon.com/Tao-Silent-Raymond-M-Smullyan/dp/0060674695/consequentlyorg">warmly</a> <a href="https://www.amazon.com/Who-Knows-Study-Religious-Consciousness/dp/0253215749/">humane</a>. However, the focus of my gratitude was something else. In my tweet, I hinted at one reason why I’m especially grateful for Smullyan’s genius—his deep understanding of proof theory. I am convinced that his analysis of inference rules in the tableaux system for classical logic rewards repeated reflection. (See his <em><a href="https://www.amazon.com/First-Order-Logic-Dover-Books-Mathematics/dp/0486683702/consequentlyorg">First-Order Logic</a></em>, Chapter 2, Section 1 for details.) I’ll try to explain why it’s important and insightful here.</p>
<p><blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">In memory of Raymond Smullyan (1919-2017), with appreciation, fondness, and a sense of loss. <a href="https://t.co/g5e54e0eo6">pic.twitter.com/g5e54e0eo6</a></p>— Greg Restall (@consequently) <a href="https://twitter.com/consequently/status/829517048346705921">February 9, 2017</a></blockquote> <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>Step back a moment and think about <em>proof theory</em>, that branch of logic which concentrates—unlike model theory—on the <em>positive</em> definition of the core logical notions of validity, inconsistency, etc. An argument is valid if and only if <em>there is some</em> proof from the premises to the conclusion. A set of sentences is inconsistent if and only if <em>there is some</em> refutation of (i.e., proof of a contradiction from) that set of sentences. By contrast, model-theoretic approaches define those notions negatively. An argument is valid if and only if <em>there is no</em> model satisfying the premises but failing to satisfy the conclusion; a set of sentences is inconsistent if <em>there is no</em> model satisfying all of them. For proof theory to be precise, we need to know what counts as a proof. However this is done in different accounts of proof (whether <a href="https://www.amazon.com/Natural-Deduction-Proof-Theoretical-Study-Mathematics/dp/0486446557/consequentlyorg">natural deduction</a>, Gentzen’s <a href="https://www.amazon.com/Theory-Cambridge-Theoretical-Computer-Science/dp/0521779111/consequentlyorg">sequent</a> <a href="https://www.amazon.com/Structural-Proof-Theory-Professor-Negri/dp/0521068428/consequentlyorg">calculus</a>, or <a href="https://www.amazon.com/First-Order-Logic-Dover-Books-Mathematics/dp/0486683702/consequentlyorg">tableaux</a>), there are different rules for each different logical connective or quantifier. In well-behaved proof systems, there tend to be two rules for each connective, explaining what you can deduce <em>from</em> (for example) a conjunction, and how you could make a deduction <em>to</em> a conjunction. The same goes for a conditional, a disjunction, a negation, a universally quantified statement, and so on.</p>
<p>That makes for a <em>lot</em> of different rules.</p>
<p>A <em>proof</em>, then, is some structured collection of statements, linked together in ways specified by the rules. You demonstrate things <em>about</em> proofs typically by showing that the feature you want to prove holds for <em>basic</em> proofs (the smallest possible cases), and then you show that if the property holds for a proof, it also holds for a proof you can make out of that one by extending it by a new inference step. If you have \(9\) different rules, then there are \(9\) different cases to check. Worse than that, if you were <em>mad enough</em> to try to <a href="http://consequently.org/writing/proof-terms-for-classical-derivations">prove something about what happens when you rearrange proofs</a> by swapping inference steps around, then welcome to the world of combinatorial explosion of cases. If you have \(9\) different kinds of rules, then there are \(9 \times 9 = 81\) different cases you have to consider. There’s something inherently unsatisfying about having to consider \(81\) different cases in a proof. You have the nagging feeling that you’re not looking at this at the right level of complexity. It is no wonder that the insightful and influential proof theorist <a href="http://iml.univ-mrs.fr/~girard/Accueil.html">Jean-Yves Girard</a> complained:</p>
<blockquote>
<p>One can see … technical limitations in current proof-theory: The lack in <em>modularity</em>: in general, neighbouring problems can be attacked by neighbouring methods; but it is only exceptionally that one of the problems will be a corollary of the other … Most of the time, a completely new proof will be necessary (but without any new idea). This renders work in the domain quite long and tedious. For instance, if we prove a cut-elimination theorem for a certain system of rules, and then consider a new system including just a new pair of rules, then it is necessary to make a complete new proof from the beginning. Of course 90% of the two proofs will be identical, but it is rather shocking not to have a reasonable «modular» approach to such a question: a main theorem, to which one could add various «modules» corresponding to various directions. Maybe this is inherent in the subject; one may hope that this only reflects the rather low level of our conceptualization!</p>
<p>— <a href="https://www.amazon.com/Proof-Theory-Logical-Complexity-Studies/dp/0444987150/consequentlyorg">Proof Theory and Logical Complexity</a>, pages 15 and 16.</p>
</blockquote>
<p>In the case of permutations of rules, the usual proof of a theorem like this would have \(n\times n\) cases where you have a proof system with \(n\) different inference rules. And if you decided to try to extend your result to a proof system with another \(m\) rules, you not only need to prove the fact all over again for your new rules, you also need to one-by-one add the \(n\times m\) cases of interaction between the old rules and your new ones. Ugh.</p>
<p>That’s where Smullyan’s insight comes in. He divided the rules of his tableaux system for classical propositional logic into two kinds. The \(\alpha\) (linear) rules are single-premise single-conclusion rules, while the \(\beta\) (branching) rules infer a conclusion from <em>two</em> premises. It turns out that you can prove very many things about rules operating at this level of generality. Many features of rules are shared by <em>all</em> \(\alpha\) rules or <em>all</em> \(\beta\) rules. And in <a href="http://consequently.org/writing/proof-terms-for-classical-derivations">my paper</a> I was pleased to see that the \(81\) different cases of permutations I had to consider could be simplified into \(3\) different cases: swapping an \(\alpha\) step around an \(\alpha\) step; a \(\beta\) around a \(\beta\); and an \(\alpha\) around a \(\beta\) (and back). Instead of writing a paper where I considered \(n\) different cases out of \(81\), and leaving the rest to the reader, using Smullyan’s insight I could show that any rules of the required two general shapes can be permuted around using the general schemas I formulate. Every case is covered. And what’s more, if you extend the proof system with <em>other</em> rules, provided that they are \(\alpha\) or \(\beta\) rules, the results still hold. It’s a much better way to do proof theory. It’s a <em>modular</em> proof of a theorem, in just the way that Girard hoped for.</p>
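<p>The bookkeeping gain can be made vivid with a toy calculation. This is my own sketch, not notation from Smullyan or from my paper; the rule names and their classification are a hypothetical nine-rule example:</p>

```python
from itertools import product

# A toy illustration (my sketch, not notation from the paper): once each
# rule is classified as alpha (linear) or beta (branching), permutation
# cases are keyed by *kinds* of rules, not by the rules themselves.

rules = {
    "conj-left": "alpha", "conj-right": "beta",
    "disj-left": "beta",  "disj-right": "alpha",
    "cond-left": "beta",  "cond-right": "alpha",
    "neg-left": "alpha",  "neg-right": "alpha",
    "cut": "beta",        # a ninth rule, matching the count of 9 above
}

# Naively, permuting any rule past any other gives one case per ordered pair:
naive_cases = len(list(product(rules, rules)))

# Keyed by kind, an (alpha, beta) swap and a (beta, alpha) swap are one
# schema read in the two directions, so only three schemas remain:
schemas = {frozenset(pair) for pair in product(rules.values(), rules.values())}

print(naive_cases, len(schemas))  # 81 3
```

Adding \(m\) new rules leaves the count at three, so long as each new rule is itself an \(\alpha\) or a \(\beta\) rule.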
<p>Thanks, Professor Smullyan!</p>
A New Paper
http://consequently.org/news/2017/a-new-paper/
Mon, 13 Feb 2017 01:32:58 AEDThttp://consequently.org/news/2017/a-new-paper/<p>It’s a new year, and it’s time for a new paper, so here is “<a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">Proof Terms for Classical Derivations</a>.” I’ve been working on these ideas for about a year, from some <a href="http://consequently.org/presentation/2016/terms-for-classical-sequents-logicmelb/">rough</a> <a href="http://consequently.org/presentation/2016/terms-for-classical-sequents-gothenburg/">talks</a> <a href="http://consequently.org/presentation/2016/terms-for-classical-sequents-aal-2016/">over</a> <a href="http://consequently.org/presentation/2016/what-proofs-are-about/">most</a> <a href="http://consequently.org/presentation/2016/proof-terms-invariants/">of</a> 2016, through many conversations with my colleague <a href="http://standefer.weebly.com">Shawn</a> as I attempted to iron out the details, to many more hours in front of whiteboards, and now I’ve finally got something I’m happy to show in public.</p>
<p>The paper is still rough, but the ideas are all there, and I think the theorems are all correct. The paper is under 50 pages—but only just! It proposes a new account of proof terms for classical propositional logic. These proof terms give a new account of what it is for one sequent derivation to represent the “same underlying proof” as another. Two derivations represent the same proof if and only if they have the same proof term. In the paper I show that two derivations have the same proof term if and only if one can be permuted into the other, using a natural class of transformations of derivations. Finally, I show that cut elimination for proof terms is confluent and strongly normalising, giving a new account of what it is to <em>evaluate</em> a classical proof, in a way that does not collapse into triviality.</p>
<p>Here’s an example from the paper, of three derivations, with the same concluding proof term:</p>
<figure>
<img src="http://consequently.org/images/three-derivations.jpg" alt="three derivations with the same proof term">
<figcaption>Three derivations with the same proof term</figcaption>
</figure>
<p>If this looks interesting to you, please <a href="http://consequently.org/writing/proof-terms-for-classical-derivations/">take a look</a>. I’d appreciate your feedback. Thanks!</p>
Proof Terms for Classical Derivations
http://consequently.org/writing/proof-terms-for-classical-derivations/
Sun, 12 Feb 2017 00:00:00 UTChttp://consequently.org/writing/proof-terms-for-classical-derivations/<p>I give an account of proof terms for derivations in a sequent calculus for classical propositional logic. The term for a derivation \(\delta\) of a sequent \(\Sigma \succ\Delta\) encodes how the premises \(\Sigma\) and conclusions \(\Delta\) are related in \(\delta\). This encoding is many–to–one in the sense that different derivations can have the same proof term, since different derivations may be different ways of representing the same underlying connection between premises and conclusions. However, not all proof terms for a sequent \(\Sigma\succ\Delta\) are the same. There may be different ways to connect those premises and conclusions.</p>
<p>Proof terms can be simplified in a process corresponding to the elimination of cut inferences in sequent derivations. However, unlike cut elimination in the sequent calculus, each proof term has a unique normal form (from which all cuts have been eliminated) and it is straightforward to show that term reduction is strongly normalising—every reduction process terminates in that unique normal form. Furthermore, proof terms are invariants for sequent derivations in a strong sense—two derivations \(\delta_1\) and \(\delta_2\) have the same proof term if and only if some permutation of derivation steps sends \(\delta_1\) to \(\delta_2\) (given a relatively natural class of permutations of derivations in the sequent calculus). Since not every derivation of a sequent can be permuted into every other derivation of that sequent, proof terms provide a non-trivial account of the identity of proofs, independent of the syntactic representation of those proofs.</p>
Proof Terms as Invariants
http://consequently.org/presentation/2016/proof-terms-invariants/
Thu, 08 Dec 2016 00:00:00 UTChttp://consequently.org/presentation/2016/proof-terms-invariants/<p>This is a talk on proof theory for <a href="http://philevents.org/event/show/27034">Melbourne Logic Day</a>.</p>
<p>Abstract: In this talk, I will explain how proof terms for derivations in classical propositional logic are invariants for derivations under a natural class of permutations of rules. The result is two independent characterisations of one underlying notion of proof identity.</p>
<ul>
<li>The <a href="http://consequently.org/slides/proof-terms-invariants-logicday-2016.pdf">slides</a> are available.</li>
</ul>
A Puzzle for Brandom's Account of Singular Terms
http://consequently.org/news/2016/a-puzzle-for-bob/
Wed, 30 Nov 2016 11:22:42 AEDThttp://consequently.org/news/2016/a-puzzle-for-bob/<p>I’ve been interested in <a href="http://www.pitt.edu/~rbrandom/">Robert Brandom</a>’s inferentialism since I picked up a copy of <em><a href="https://www.amazon.com/Making-Explicit-Representing-Discursive-Commitment/dp/0674543300/consequentlyorg">Making it Explicit</a></em> back in 1996. One interesting component of Brandom’s inferentialism is his account of what it is to be a singular term. There are a number of ways to understand inferentialism, but the important point here is the centrality of <em>material inference</em> to semantics. An inference like “Melbourne is south of Sydney, therefore Sydney is north of Melbourne” is a materially good inference. Material inferences, for Brandom, are not to be understood as grounded in a more primitive notion of logical consequence—we shouldn’t explain the inference in terms of the validity of the form “\(a\) is south of \(b\), for all \(x\) and \(y\) if \(x\) is south of \(y\) then \(y\) is north of \(x\), therefore, \(b\) is north of \(a\)” and the fact that the extra premise is common knowledge or a part of the norms governing the concepts of north and south. No, according to the inferentialist, we are to explain those facts in terms of materially good inferences, and not <em>vice versa</em>.</p>
<p>Well, one of the distinctive features of Brandom’s inferentialism is that he takes there to be an inferentialist account of what it is for a term to be a <em>singular term</em>—a name or other device that picks out an <em>object</em>, rather than a <em>predicate</em> that describes something, or some other kind of connective or modifier.</p>
<p>Here’s a one sentence slogan summarising the account of what it is to be a singular term:</p>
<blockquote>
<p>A grammatical item is a <em>singular term</em> if and only if the <em>substitution inferences</em> in which that item is <em>materially involved</em> are <em>symmetric</em>.</p>
</blockquote>
<p>(See Brandom’s <em><a href="https://www.amazon.com/Articulating-Reasons-Inferentialism-Robert-Brandom/dp/0674006925/consequentlyorg">Articulating Reasons</a></em>, Chapter 4, especially Section II for details and exposition.)</p>
<p>There are at least three complex concepts in this slogan that require explanation:</p>
<ol>
<li><strong>Substitution inferences</strong>: A substitution inference involving a term \(t\) is an inference from a sentence using \(t\) to a sentence found by replacing the occurrences of \(t\) in the sentence with some other term of the same grammatical type. For example, the inference from “Greg is a philosophical logician” to “Greg is a philosopher” is a substitution inference involving “philosophical logician”—the term “philosopher” is substituted in place of “philosophical logician.” The inference “Greg is a philosophical logician” to “The author of this note is a philosophical logician” is also a substitution inference.</li>
<li><strong>Material involvement</strong>: A term is <em>materially involved</em> in an inference if it cannot be replaced without altering the status of the inference.</li>
<li><strong>Symmetric</strong>: An inference from <em>A</em> to <em>B</em> is (materially) symmetric if and only if whenever that inference is materially good, so is the converse, from <em>B</em> to <em>A</em>.</li>
</ol>
<p>There’s something insightful about this. The inference from “Greg is a philosophical logician” to “The author of this note is a philosophical logician” is materially good (at least, in some contexts), and if it is good, so is its converse. Why? Because I (Greg) am the author of this note. But the inference from “Greg is a philosophical logician” to “Greg is a philosopher” is good in a way that its converse need not be. There are clearly asymmetric material inferences resulting from the substitution of weaker predicates for stronger predicates. There don’t seem to be any “weaker” or “stronger” singular terms. What would such things be? It really looks like there is something important going on in the difference between substitution of singular terms and the substitution of predicates (or predicate modifiers, or other grammatical units) in these inferences.</p>
<p>However, I am struck by the following puzzle. Consider an inference like this:</p>
<blockquote>
<p>23 is a small number, therefore 22 is a small number.</p>
</blockquote>
<p>I take this to be a materially good inference. Whatever standard of smallness you invoke, if 23 counts as a small number, so does 22. Why? Because 22 is smaller than 23.</p>
<p>In fact, each inference of the form:</p>
<blockquote>
<p>\(m\) is a small number, therefore \(n\) is a small number.</p>
</blockquote>
<p>looks materially good to me, for any numerals \(m\) and \(n\) where \(n\) names a smaller number than \(m\) does. As before, \(n\) is <em>indeed</em> smaller than \(m\), and in those cases, the inference is good.</p>
<p>However, the converse inferences seem nowhere near as good. Some of the converse inferences might be good (I have some <a href="http://davewripley.rocks">friends</a> who take it that every inference of the form <em>\(m\) is small, so \(m+1\) is small</em> is good), but you shouldn’t think that <em>all</em> of them are good. If you can find a number \(n\) that is small and a larger number \(m\) that is <em>not</em> small, then the converse inference</p>
<blockquote>
<p>\(n\) is a small number, therefore \(m\) is a small number.</p>
</blockquote>
<p>is not only materially bad—it has a true premise and a false conclusion. It’s as bad as an argument can get.</p>
<p>This looks to me to be a clear counterexample to Brandom’s account of singular terms. Here’s why.</p>
<ol>
<li>Numerals really do look like they are singular terms. (In the formalised language of mathematics, we treat numerals as singular terms. It’s a natural thing to do.) While numbers aren’t the same sorts of objects as objects we can see or touch, measure or weigh, the terms certainly seem to act like singular terms.</li>
<li>The inferences from \(n\) <em>is small</em> to \(m\) <em>is small</em> (where \(m\) is smaller than \(n\)) really do seem to be materially good. If we’re going to rule these out, we need some principled reason for doing so.</li>
<li>The inference from \(n\) <em>is small</em> to \(m\) <em>is small</em> looks for all the world like a substitution inference where \(n\) is replaced by \(m\). It would be a strange grammatical analysis to take it to not be a substitution inference.</li>
<li>The term \(n\) appears materially in the inference from <em>\(n\) is small</em> to <em>\(m\) is small</em>. (If this inference is valid, replace \(n\) by something smaller than \(m\) to make the resulting inference invalid. Conversely, if the inference is invalid, replace \(n\) by some number larger than \(m\) to find a valid inference.) I can’t see any way to understand material involvement if this is not a case of it.</li>
<li>These inferences, though materially valid, are not all symmetric, unless either <em>no</em> number is small or <em>every</em> number is. But that’s to make nonsense of our concept of “small.”</li>
</ol>
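<p>The asymmetry in the numbered points above can be checked mechanically. In this sketch (my own formalisation, not Brandom’s), a “standard of smallness” is a threshold \(k\), and a substitution inference counts as good when it preserves truth under every standard:</p>

```python
# A toy check (my formalisation, not Brandom's or the post's): treat a
# "standard of smallness" as a threshold k, so "n is a small number"
# is true under standard k exactly when n <= k. A substitution
# inference is modelled as materially good here when it preserves
# truth under *every* standard in a (finite) range.

def good(premise, conclusion, standards=range(100)):
    """Is 'premise is small, therefore conclusion is small'
    truth-preserving under every standard in the given range?"""
    return all(not (premise <= k) or (conclusion <= k) for k in standards)

assert good(23, 22)       # 23 is small, therefore 22 is small: good
assert not good(22, 23)   # the converse fails (take the standard k = 22)

# Quantifying over all pairs: the inference is good exactly when the
# conclusion names a number no larger than the premise does, so the
# substitution inferences between distinct numerals are all asymmetric.
assert all(good(m, n) == (n <= m) for m in range(50) for n in range(50))
```

So on this model, the substitution inference from one numeral to a distinct numeral is never symmetric, which is just the local asymmetry the argument needs.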
<p>Despite appearances, this has nothing to do with the sorites paradox, or with the context sensitivity of “small.” We could have run the same argument with the inference</p>
<blockquote>
<p>23 is smaller than \(N\), therefore 22 is smaller than \(N\).</p>
</blockquote>
<p>where \(N\) is some fixed (possibly known, possibly unknown) number, and the result would have been the same. All inferences</p>
<blockquote>
<p>\(n\) is smaller than \(N\), therefore \(m\) is smaller than \(N\).</p>
</blockquote>
<p>are materially good, in those cases where \(m\) is smaller than \(n\). And obviously, some of the converse inferences are bad.</p>
<p>(We could do the same with more prosaic examples, too. The inference from “Wellington is south of Melbourne” to “Wellington is south of Sydney” seems materially good, while the converse inference seems much less compelling.)</p>
<p>I <em>think</em> this means that Brandom’s account can’t work as it stands, unless I’ve misunderstood it. Even though there aren’t <em>general</em> inferential asymmetries between singular terms, there are <em>local</em> asymmetries, relative to particular substitution inferences. Any account of the distinctive behaviour of singular terms will need to draw the distinction somewhere other than the symmetry or asymmetry of <em>all</em> substitution inferences.</p>
<p>What do <em>you</em> think?</p>
<p>(Thanks to <a href="http://standefer.weebly.com">Shawn Standefer</a> and Kai Tanter for conversations that prompted these thoughts.)</p>
Existence, Definedness and the Semantics of Possibility and Necessity
http://consequently.org/presentation/2016/existence-definedness-awpl-tplc/
Sun, 02 Oct 2016 00:00:00 UTChttp://consequently.org/presentation/2016/existence-definedness-awpl-tplc/<p>I’m giving a talk entitled “Existence, Definedness and the Semantics of Possibility and Necessity” at a <a href="http://www.philo.ntu.edu.tw/lmmgroup/?mode=events_detail&n=1">Workshop on the Philosophy of Timothy Williamson at the Asian Workshop in Philosophical Logic and the Taiwan Philosophical Logic Colloquium</a> at the National Taiwan University.</p>
<p>Abstract: In this talk, I will address just some of Professor Williamson’s treatment of necessitism in his <em>Modal Logic as Metaphysics</em>. I will give an account of what space might remain for a principled and logically disciplined contingentism. I agree with Williamson that those interested in the metaphysics of modality would do well to take quantified modal logic—and its semantics—seriously in order to be clear, systematic and precise concerning the commitments we undertake in adopting an account of modality and ontology. Where we differ is in how we present the semantics of that modal logic. I will illustrate how <em>proof theory</em> may play a distinctive role in elaborating a quantified modal logic, and in the development of theories of meaning, and in the metaphysics of modality.</p>
<ul>
<li>The <a href="http://consequently.org/slides/existence-definedness-awpl-tplc-2016.pdf">slides are available here</a>.</li>
<li>The talk is based on a larger paper, <a href="http://consequently.org/writing/existence-definedness/">Existence and Definedness</a>.</li>
</ul>
Existence and Definedness: the semantics of possibility and necessity
http://consequently.org/writing/existence-definedness/
Sat, 02 Oct 2010 00:00:00 UTChttp://consequently.org/writing/existence-definedness/<p>In this paper, I will address just some of Professor Williamson’s treatment of necessitism in his <em>Modal Logic as Metaphysics</em>. I will give an account of what space might remain for a principled and logically disciplined contingentism. I agree with Williamson that those interested in the metaphysics of modality would do well to take quantified modal logic—and its semantics—seriously in order to be clear, systematic and precise concerning the commitments we undertake in adopting an account of modality and ontology. Where we differ is in how we present the semantics of that modal logic. I will illustrate how <em>proof theory</em> may play a distinctive role in elaborating a quantified modal logic, and in the development of theories of meaning, and in the metaphysics of modality.</p>
<p>The paper was first written for presentation in a <a href="http://www.philo.ntu.edu.tw/lmmgroup/?mode=events_detail&n=1">workshop on Tim Williamson’s work, at the Asian Workshop in Philosophical Logic and the Taiwan Philosophical Logic Colloquium</a> at the National Taiwan University. The slides from that talk are <a href="http://consequently.org/presentation/2016/existence-definedness-awpl-tplc">available here</a>.</p>
Proof Terms are fun
http://consequently.org/news/2016/proof-terms-are-fun/
Fri, 02 Sep 2016 17:35:21 +1100http://consequently.org/news/2016/proof-terms-are-fun/<p>Today, between <a href="http://consequently.org/class/2016/PHIL20030/">marking assignments</a> and <a href="https://twitter.com/logicmelb/status/771115105282961410">working through a paper on proof theory for counterfactuals</a>, I’ve been playing around with <a href="http://consequently.org/presentation/2016/terms-for-classical-sequents-aal-2016/">proof terms</a>. They’re a bucketload of fun. The derivation below generates a proof term for the sequent \(\forall xyz(Rxy\land Ryz\supset Rxz),\forall xy(Rxy\supset Ryx),\forall x\exists y Rxy \succ \forall x Rxx\). The playing around is experimenting with different ways to encode the <em>quantifier</em> steps in proof terms. I think I’m getting somewhere with this. (But boy, typesetting these things is <em>not</em> easy.)</p>
<p><blockquote class="instagram-media" data-instgrm-captioned data-instgrm-version="7" style=" background:#FFF; border:0; border-radius:3px; box-shadow:0 0 1px 0 rgba(0,0,0,0.5),0 1px 10px 0 rgba(0,0,0,0.15); margin: 1px; max-width:658px; padding:0; width:99.375%; width:-webkit-calc(100% - 2px); width:calc(100% - 2px);"><div style="padding:8px;"> <div style=" background:#F8F8F8; line-height:0; margin-top:40px; padding:49.9074074074% 0; text-align:center; width:100%;"> <div style=" background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAsCAMAAAApWqozAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAAMUExURczMzPf399fX1+bm5mzY9AMAAADiSURBVDjLvZXbEsMgCES5/P8/t9FuRVCRmU73JWlzosgSIIZURCjo/ad+EQJJB4Hv8BFt+IDpQoCx1wjOSBFhh2XssxEIYn3ulI/6MNReE07UIWJEv8UEOWDS88LY97kqyTliJKKtuYBbruAyVh5wOHiXmpi5we58Ek028czwyuQdLKPG1Bkb4NnM+VeAnfHqn1k4+GPT6uGQcvu2h2OVuIf/gWUFyy8OWEpdyZSa3aVCqpVoVvzZZ2VTnn2wU8qzVjDDetO90GSy9mVLqtgYSy231MxrY6I2gGqjrTY0L8fxCxfCBbhWrsYYAAAAAElFTkSuQmCC); display:block; height:44px; margin:0 auto -44px; position:relative; top:-22px; width:44px;"></div></div> <p style=" margin:8px 0 0 0; padding:0 4px;"> <a href="https://www.instagram.com/p/BJ1QhRCD7ex/" style=" color:#000; font-family:Arial,sans-serif; font-size:14px; font-style:normal; font-weight:normal; line-height:17px; text-decoration:none; word-wrap:break-word;" target="_blank">A whiteboard-to-LaTeX scanner would be really handy right about now. Anybody have one?</a></p> <p style=" color:#c9c8cd; font-family:Arial,sans-serif; font-size:14px; line-height:17px; margin-bottom:0; margin-top:8px; overflow:hidden; padding:8px 0 7px; text-align:center; text-overflow:ellipsis; white-space:nowrap;">A photo posted by Greg Restall (@consequently) on <time style=" font-family:Arial,sans-serif; font-size:14px; line-height:17px;" datetime="2016-09-01T23:42:54+00:00">Sep 1, 2016 at 4:42pm PDT</time></p></div></blockquote> <script async defer src="//platform.instagram.com/en_US/embeds.js"></script></p>
Proofs and what they’re good for
http://consequently.org/presentation/2016/proofs-and-what-theyre-good-for-aap-2016/
Thu, 18 Aug 2016 00:00:00 UTChttp://consequently.org/presentation/2016/proofs-and-what-theyre-good-for-aap-2016/<p>I’m giving a talk entitled “Proofs and what they’re good for” at the <a href="http://philevents.org/event/show/24658">University of Melbourne Philosophy Seminar</a> on Thursday, August 25, 2016.</p>
<p><em>Abstract</em>: I present a new account of the nature of proof, with the aim of explaining how proof could actually play the role in reasoning that it does, and answering some long-standing puzzles about the nature of proof, including (1) how it is that a proof transmits warrant, (2) Lewis Carroll’s dilemma concerning Achilles and the Tortoise and the coherence of questioning basic proof rules like modus ponens, and (3) how we can avoid logical omniscience without committing ourselves to inconsistency.</p>
<ul>
<li>The <a href="http://consequently.org/slides/proofs-and-what-theyre-good-for-slides-melb.pdf">slides</a> and <a href="http://consequently.org/handouts/proofs-and-what-theyre-good-for-handout-melb.pdf">handout</a> are available.</li>
</ul>
What Proofs and Truthmakers are About
http://consequently.org/presentation/2016/what-proofs-are-about/
Tue, 05 Jul 2016 00:00:00 UTChttp://consequently.org/presentation/2016/what-proofs-are-about/<p>I was originally scheduled to give a talk entitled “What Proofs are About” at the <a href="http://philevents.org/event/show/24086"><em>About Aboutness</em> Workshop</a> at the University of Melbourne on Saturday, July 16, 2016, but my plane back to Melbourne was delayed and I didn’t get to present the paper then.</p>
<p>So, I’m presenting it at the <a href="http://blogs.unimelb.edu.au/logic/logic-seminar/">Melbourne Logic Seminar</a> instead.</p>
<p><em>Abstract</em>: This talk is a comparison of how three different approaches to subject matter treat some pairs of statements that say <em>different things</em> but are (classically) logically equivalent. The pairs are</p>
<ol>
<li>\(p\lor\neg p\) and \(\top\)</li>
<li>\(p\lor(p\land q)\) and \(p\)</li>
<li>\((p\lor\neg p)\lor(q\lor\neg q)\) and \((p\lor\neg p)\land(q\lor\neg q)\).</li>
</ol>
<p>I compare and contrast the notion of subject matter introduced in Stephen Yablo’s <em><a href="https://www.amazon.com/Aboutness-Carl-G-Hempel-Lecture/dp/0691144958/consequentlyorg">Aboutness</a></em> (Princeton University Press, 2014), truthmakers conceived of as situations, as discussed in my “<a href="http://consequently.org/writing/ten/">Truthmakers, Entailment and Necessity</a>,” and the <em>proof invariants</em> I have explored <a href="http://consequently.org/presentation/2016/terms-for-classical-sequents-aal-2016/">in recent work</a>.</p>
<ul>
<li>The <a href="http://consequently.org/slides/what-proofs-are-about.pdf">slides are available here</a>.</li>
</ul>
First Degree Entailment, Symmetry and Paradox
http://consequently.org/news/2016/fde-symmetry-and-paradox/
Wed, 27 Jul 2016 00:36:40 +1100http://consequently.org/news/2016/fde-symmetry-and-paradox/<p>Talking to <a href="http://entailments.net">Jc Beall</a> during his recent visit to Australia, I got thinking about <em>first degree entailment</em> again.</p>
<p>Here is a puzzle, which I learned from Terence Parsons in his “<a href="http://www.jstor.org/stable/40231701">True Contradictions</a>”. <em>First Degree Entailment</em> (<span class="caps">fde</span>) is a logic which allows for truth value <em>gaps</em> as well as truth value <em>gluts</em>. If you are agnostic between assigning paradoxical sentences gaps and gluts (and there seems to be no very good reason to prefer gaps over gluts or gluts over gaps if you’re happy with <span class="caps">fde</span>), then this looks no different, in effect, from assigning them a <em>gap</em> value. After all, on both views you end up with a theory that doesn’t commit you to the paradoxical sentence or its negation. How is the <span class="caps">fde</span> theory any different from the theory with gaps alone?</p>
<p>I think I have a clear answer to this puzzle—an answer that explains how being agnostic between gaps and gluts is a genuinely different position than admitting gaps alone. But to explain the answer and show how it works, I need to spell things out in more detail. If you want to see how this answer goes, read on.</p>
<p>First degree entailment (<span class="caps">fde</span>) is a logic well suited to fixed point solutions to the paradoxes. Perhaps it is <em>too</em> well suited, because it allows paradoxical sentences to be evaluated in two distinct ways: Paradoxical sentences can be assigned the value \(n\) (neither true nor false: \(\lbrace\rbrace\)) or \(b\) (both true and false—or \(\lbrace 0,1\rbrace\)) equally well. Are two possible values better than one? And more importantly, is agnosticism between <em>which</em> value to assign a paradoxical sentence like the liar—a stance Terence Parsons calls “agnostaletheism”—any different from assigning it the truth value \(n\) instead of \(b\)? After all, on either stance, neither the liar sentence nor its negation are to be accepted. In this note, I explore the symmetry that is available in <span class="caps">fde</span>, and I show how agnostaletheism may be clearly distinguished from the view according to which paradoxes are simply neither true nor false.</p>
<h3 id="fde-and-relational-evaluations">FDE and Relational Evaluations</h3>
<p><em>First Degree Entailment</em> (<span class="caps">fde</span>) is a simple and elegant logic, well suited to many different applications. It can be defined and understood in a number of different ways, but for our purposes it is convenient to introduce it as the generalisation of classical two-valued logic according to which evaluations are no longer functions assigning each sentence of a language a truth value from \(\lbrace 0,1\rbrace\), but relations to those truth values. Relaxing the constraint that evaluations be Boolean functions means that sentences can be <em>neither</em> true nor false (the evaluation fails to relate the sentence to either \(0\) or \(1\)) or <em>both</em> true and false (the evaluation relates the sentence to both truth values). This generalisation allows us to interpret the suite of connectives and quantifiers of predicate logic in a straightforward manner, generalising the traditional evaluation conditions due to Boole and Tarski as follows.</p>
<p>\(\def\semv#1{{[\![}#1{]\!]}}\)
Given a non-empty domain \(D\) of objects, an <span class="caps">fde</span>-model for a language consists of a multi-sorted relation \(\rho\) defined as follows: For each \(n\)-place predicate \(F\), \(\rho_F\) relates \(n\)-tuples of objects from \(D\) to the truth values \(0,1\). For each constant \(c\) in the language, \(\rho_c\) selects a unique object from \(D\). An assignment \(\alpha\) of values to the variables is a function from those variables to the domain \(D\). Given an assignment \(\alpha\) and the interpretation \(\rho\) we define the semantic value \(\semv{t}_{\rho,\alpha}\) of a term \(t\) to be given by \(\rho_t\) if \(t\) is a name and \(\alpha(t)\) if \(t\) is a variable. Then, relative to each assignment \(\alpha\) we define the relation \(\rho_\alpha\) which matches formulas in the language to truth values as follows:</p>
<ul>
<li>\((Ft_1\cdots t_n)\rho_\alpha i\) iff \(\langle \semv{t_1}_{\rho,\alpha},\ldots,\semv{t_n}_{\rho,\alpha}\rangle\rho_F i\)</li>
<li>\((A\land B)\rho_\alpha 1\) iff \(A\rho_\alpha 1\) and \(B\rho_\alpha 1\)</li>
<li>\((A\land B)\rho_\alpha 0\) iff \(A\rho_\alpha 0\) or \(B\rho_\alpha 0\)</li>
<li>\((A\lor B)\rho_\alpha 1\) iff \(A\rho_\alpha 1\) or \(B\rho_\alpha 1\)</li>
<li>\((A\lor B)\rho_\alpha 0\) iff \(A\rho_\alpha 0\) and \(B\rho_\alpha 0\)</li>
<li>\(\neg A\rho_\alpha 1\) iff \(A\rho_\alpha 0\)</li>
<li>\(\neg A\rho_\alpha 0\) iff \(A\rho_\alpha 1\)</li>
<li>\((\forall x)A\rho_\alpha 1\) iff \(A\rho_{\alpha[x:=d]} 1\) for every \(d\) in \(D\).</li>
<li>\((\forall x)A\rho_\alpha 0\) iff \(A\rho_{\alpha[x:=d]} 0\) for some \(d\) in \(D\).</li>
<li>\((\exists x)A\rho_\alpha 1\) iff \(A\rho_{\alpha[x:=d]} 1\) for some \(d\) in \(D\).</li>
<li>\((\exists x)A\rho_\alpha 0\) iff \(A\rho_{\alpha[x:=d]} 0\) for every \(d\) in \(D\).</li>
</ul>
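<p>These clauses are easy to prototype. The following is a minimal Python sketch of the propositional fragment only (the tuple encoding of formulas and the function name are my own, purely for illustration; the quantifier clauses are omitted). A formula’s value is the set of classical truth values the evaluation relates it to, so a gap is the empty set and a glut is the full set.</p>

```python
# A minimal sketch (encoding is illustrative, not from the post): the
# propositional fragment of the relational FDE semantics. A formula's value
# is the set of classical values the evaluation relates it to, a subset of
# {0, 1}: frozenset() is a gap (n), frozenset({0, 1}) is a glut (b).
# Formulas are nested tuples, e.g. ('and', ('atom', 'p'), ('not', ('atom', 'q'))).

def ev(formula, rho):
    """Compute the set of truth values `rho` relates `formula` to."""
    op = formula[0]
    if op == 'atom':
        return rho[formula[1]]
    if op == 'not':
        a = ev(formula[1], rho)
        # negation swaps the clauses: true iff A is false, false iff A is true
        return frozenset(({1} if 0 in a else set()) | ({0} if 1 in a else set()))
    a, b = ev(formula[1], rho), ev(formula[2], rho)
    if op == 'and':
        true = 1 in a and 1 in b    # true iff both conjuncts are true
        false = 0 in a or 0 in b    # false iff either conjunct is false
    else:  # 'or'
        true = 1 in a or 1 in b
        false = 0 in a and 0 in b
    return frozenset(({1} if true else set()) | ({0} if false else set()))

# p is a glut, q is a gap: p ∨ q comes out true only, p ∧ ¬p stays a glut.
rho = {'p': frozenset({0, 1}), 'q': frozenset()}
print(ev(('or', ('atom', 'p'), ('atom', 'q')), rho))             # frozenset({1})
print(ev(('and', ('atom', 'p'), ('not', ('atom', 'p'))), rho))   # frozenset({0, 1})
```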
<p>The only deviation from classical first order predicate logic is that we allow for truth value gaps (\(\rho\) may fail to relate a given formula to a truth value) or gluts (\(\rho\) may relate a given formula to both truth values). Indeed, these features are, in a sense, <em>modular</em>. It is straightforward to show that if a given interpretation \(\rho\) is a partial function on the basic vocabulary of a language (if it never over-assigns values to the extension of any predicate in that language), then it remains so over every sentence in that language. Those sentences can be assigned gaps, but no gluts. Similarly, if an interpretation is decisive over the basic vocabulary of some language (it never under-assigns values to the extensions of any predicate in that language), then it remains so over every sentence of that language. These sentences can be assigned gluts, but no gaps. If an evaluation is <em>sharp</em> (if it allows for neither gaps nor gluts in the interpretation of any predicate), then it remains so over the whole language.</p>
<p>Relational evaluations are a natural model for <span class="caps">fde</span>. They show it to be an elementary generalisation of classical logic, allowing for gaps between truth values and over-assignment of those values. The interpretation of the connectives and the quantifiers remains as classical as in two-valued logic, except for the generalisation to allow for gaps and gluts between the two semantic values.</p>
<h3 id="fde-and-four-values">FDE and four values</h3>
<p>We can also see <span class="caps">fde</span> in another light, not as a logic allowing for gaps and gluts between two truth values, but as a logic allowing for <em>four</em> semantic values. For clarity, we will write these four values: \(t\), \(b\), \(n\) and \(f\). We can translate between the two-valued and four-valued languages as follows. Given a relational valuation \(\rho\) we define a functional valuation \(v\) which assigns</p>
<ul>
<li>\(v(A)=t\) iff \(A\rho 1\) but not \(A{\rho} 0\)</li>
<li>\(v(A)=b\) iff \(A\rho 1\) and \(A\rho 0\)</li>
<li>\(v(A)=f\) iff \(A\rho 0\) but not \(A{\rho} 1\)</li>
<li>\(v(A)=n\) iff neither \(A{\rho} 1\) nor \(A{\rho} 0\)</li>
</ul>
<p>It follows then, that</p>
<ul>
<li>\(A\rho 1\) iff \(v(A)=t\) or \(v(A)=b\), and</li>
<li>\(A\rho 0\) iff \(v(A)=f\) or \(v(A)=b\).</li>
</ul>
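<p>The translation between the two presentations is mechanical. A small sketch (the names here are mine, for illustration):</p>

```python
# Sketch (names are illustrative): translating between relational values --
# subsets of {0, 1} -- and the four values t, b, n, f, per the clauses above.

LABEL = {
    frozenset({1}): 't',       # true and not false
    frozenset({0, 1}): 'b',    # both true and false
    frozenset(): 'n',          # neither true nor false
    frozenset({0}): 'f',       # false and not true
}

def related_to_1(value):
    # A rho 1 iff v(A) = t or v(A) = b
    return LABEL[frozenset(value)] in ('t', 'b')

def related_to_0(value):
    # A rho 0 iff v(A) = f or v(A) = b
    return LABEL[frozenset(value)] in ('f', 'b')

print(LABEL[frozenset({0, 1})], related_to_1({0, 1}), related_to_0({0, 1}))  # b True True
```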
<p>Evaluation relations that are partial functions can be understood as functional evaluations taking semantic values from \(t,n,f\) — and the evaluation clauses in this case give us the familiar logic <span class="caps">k3</span>: Kleene’s strong three valued logic. Evaluation relations that are decisive, allowing for no gaps, can be understood as taking semantic values from \(t,b,f\) — and the evaluation clauses in this case give us the familiar logic <span class="caps">lp</span>: Priest’s logic of paradox. In what follows, I will move between functional and relational vocabulary as seems appropriate.</p>
<p>\(\def\ydash{\succ}\)</p>
<h3 id="fde-and-sequents">FDE and Sequents</h3>
<p>There are many different ways we can use <span class="caps">fde</span> evaluations to analyse truth and consequence in the language of first order logic. One important notion goes like this: An interpretation \(\rho\) is said to be a <em>counterexample</em> to the sequent \(X\ydash Y\) if and only if \(\rho\) relates each member of \(X\) to \(1\) while it relates no member of \(Y\) to \(1\). In other words, an interpretation provides a counterexample to a sequent if it shows some way that the sequent fails to preserve truth. Given some set \(\mathcal M\) of evaluations, a sequent is said to be \(\mathcal M\)-valid if it has no counterexamples in the set \(\mathcal M\). We reserve the term <span class="caps">fde</span>-valid for those sequents which have no counterexamples at all. A sequent is said to be <span class="caps">k3</span>-valid if it has no counterexamples among partial function evaluations, and a sequent is said to be <span class="caps">lp</span>-valid if it has no counterexamples among decisive valuations.</p>
<p>All this is very well known in the literature on non-classical logics—see, for example (Priest <a href="https://www.amazon.com/Introduction-Non-Classical-Logic-Introductions-Philosophy/dp/0521670268/consequentlyorg">2008, Chapter 8</a>) for details. The <span class="caps">fde</span> validities include all of distributive lattice logic with a de Morgan negation. Sequents such as these
\[
\neg (A\land B)\ydash \neg A\lor \neg B\quad
\neg (A\lor B)\ydash \neg A\land \neg B
\]
\[
\neg A\lor \neg B\ydash \neg (A\land B)\quad
\neg A\land \neg B\ydash \neg (A\lor B)
\]
\[
A\ydash\neg\neg A \quad \neg\neg A\ydash A
\]
are <span class="caps">fde</span> valid. The next sequents are not valid in <span class="caps">fde</span>, but they are valid in <span class="caps">k3</span>:
\[
{A,\neg A\ydash~}\quad
A\lor B,\neg A\ydash B
\]
In both cases, an <span class="caps">fde</span> interpretation which relates \(A\) to both \(0\) and \(1\) (but which fails to relate \(B\) to \(1\)) serves as a counterexample.
Similarly, the following sequents are <em>not</em> valid in <span class="caps">fde</span>, but they are valid in <span class="caps">lp</span>:
\[
{\ydash~A,\neg A}\quad
B\ydash A\land B,\neg A
\]
In both cases, an <span class="caps">fde</span> interpretation which relates \(A\) to neither \(0\) nor \(1\) (but which relates \(B\) to \(1\)) serves as a counterexample.</p>
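<p>For propositional sequents, these validity claims can be checked by brute force over the four values, since <span class="caps">k3</span> and <span class="caps">lp</span> valuations are just the glut-free and gap-free cases. A sketch (the formula encoding and helper names are mine, for illustration):</p>

```python
# Sketch: brute-force counterexample search for propositional sequents over
# FDE, k3 and lp valuations. Encodings and names are illustrative only.
from itertools import product

VALUES = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]  # n, f, t, b
K3 = [v for v in VALUES if v != frozenset({0, 1})]  # partial functions: no gluts
LP = [v for v in VALUES if v != frozenset()]        # decisive: no gaps

def ev(f, rho):
    op = f[0]
    if op == 'atom':
        return rho[f[1]]
    if op == 'not':
        a = ev(f[1], rho)
        return frozenset(({1} if 0 in a else set()) | ({0} if 1 in a else set()))
    a, b = ev(f[1], rho), ev(f[2], rho)
    true = (1 in a and 1 in b) if op == 'and' else (1 in a or 1 in b)
    false = (0 in a or 0 in b) if op == 'and' else (0 in a and 0 in b)
    return frozenset(({1} if true else set()) | ({0} if false else set()))

def valid(premises, conclusions, atoms, values):
    """Valid over `values` iff no valuation makes every premise true
    while making no conclusion true."""
    for combo in product(values, repeat=len(atoms)):
        rho = dict(zip(atoms, combo))
        if all(1 in ev(p, rho) for p in premises) and \
           not any(1 in ev(c, rho) for c in conclusions):
            return False
    return True

p = ('atom', 'p')
print(valid([p, ('not', p)], [], ['p'], K3))      # True:  A, ¬A ⊢ is k3-valid
print(valid([p, ('not', p)], [], ['p'], VALUES))  # False: the glut b is a counterexample
print(valid([], [p, ('not', p)], ['p'], LP))      # True:  ⊢ A, ¬A is lp-valid
print(valid([], [p, ('not', p)], ['p'], VALUES))  # False: the gap n is a counterexample
```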
<h3 id="fde-theories-and-bitheories">FDE, Theories and Bitheories</h3>
<p>From sequents we move to theories. The usual definition has it that a <em>theory</em> is a set of sentences closed under a logical consequence relation. So, given some collection \(\mathcal M\) of interpretations, \(T\) is an \(\mathcal M\)-theory if and only if whenever the sequent \(T\ydash A\) (where \(A\) is a single formula) is \(\mathcal M\)-valid, then \(A\) is a member of \(T\). \(\mathcal M\)-theories contain their own \(\mathcal M\)-consequences. We can think of theories as representing what is held to be true according to a certain stance—a consequence of what is held true is also (implicitly) held true. Elsewhere (Restall 2013) I have argued that in logics like <span class="caps">fde</span> we have good reason to consider not only what is held true, but what is held <em>un</em>true. Sequents give us a straightforward vocabulary for describing this. We say that the disjoint pair \(\langle T,U\rangle\) is an \(\mathcal M\)-<em>bitheory</em> if and only if whenever the sequent \(T\ydash A,U\) (where \(A\) is a single formula) is \(\mathcal M\)-valid, then \(A\) is a member of \(T\), and whenever \(T,A\ydash U\) is \(\mathcal M\)-valid, then \(A\) is a member of \(U\). Now, \(\langle T,U\rangle\) is a pair consisting of what is (according to that bitheory) held true (to be related to \(1\)) on the one hand and what is held untrue (to be unrelated to \(1\)), on the other. Suppose \(\mathcal M’\subseteq\mathcal M\) is another set of interpretations. If we define \(T_{\mathcal M’}\) to be the set of all sentences true (related to \(1\)) in all \(\mathcal M’\)-interpretations and likewise \(U_{\mathcal M’}\) to be the set of all sentences untrue (not related to \(1\)) in those interpretations, then \(\langle T_{\mathcal M’},U_{\mathcal M’}\rangle\) is an \(\mathcal M\)-bitheory. 
Indeed, if \(\mathcal M’\) is a singleton set, consisting of one interpretation, then the bitheory \(\langle T_{\mathcal M’},U_{\mathcal M’}\rangle\) is a partition of the language, deciding every formula to be either true or untrue. If the set \(\mathcal M’\) is larger, containing interpretations which give a sentence \(A\) different verdicts, then the corresponding bitheory will no longer be a partition. If one interpretation judges \(A\) to be true and another judges it untrue, then \(A\) will feature in neither the left set nor the right set.</p>
<h3 id="fde-and-truth">FDE and Truth</h3>
<p>The puzzle under consideration in this note arises from the behaviour of paradoxical sentences in <span class="caps">fde</span>. The details of the paradoxes are not important to us, so let’s consider a concrete case: the paradoxes of truth. We will consider a transparent account of truth, so let us focus on first order languages in which we have a one-place predicate \(T\) for truth. Since the truth predicate is a <em>predicate</em>, it will apply to objects in the domain. To allow for fixed points (sentences which ascribe truth or falsity to sentences in the language, including themselves) we extend the language with quotation names for sentences in that very same language. So, for each sentence \(A\) we have a name \(\ulcorner A\urcorner\). Fixed point constructions for truth in the style of Kripke, Brady, Woodruff and Gilmore generate <span class="caps">fde</span>-interpretations for a language in which the sentence \(A\) and the sentence \(T\ulcorner A\urcorner\) are assigned the same semantic values. We will call such interpretations <span class="caps">fde</span>\(^T\) interpretations. The construction method for <span class="caps">fde</span>\(^T\)-interpretations assigns the extension of \(T\) in stages, keeping the rest of the evaluation as given, including the denotation for constants. The details of the proof are not important to us, but one essential idea is useful: the notion of <em>preservation</em> between evaluations. For two evaluations \(\rho\) and \(\rho’\), we have \(\rho\sqsubseteq\rho’\) if and only if whenever \(\rho\) relates an atomic formula to a given truth value \(0\) or \(1\), so does \(\rho’\). A straightforward induction on the complexity of formulas shows that this extends to all of the formulas in the language: for any formula \(A\), if \(A\rho 0\) then \(A\rho’ 0\) too, and if \(A\rho 1\) then \(A\rho’ 1\) too. 
The evaluations \(\rho\) and \(\rho’\) may still differ, because \(\rho\) might leave a <em>gap</em> where \(\rho’\) fills in a value, \(0\) or \(1\), or where \(\rho\) assigned only one value, \(\rho’\) might assign <em>both</em>.</p>
<p>The only requirement on quotation names for this fixed point construction to succeed is that quotation names for different sentences are different. This means that the construction will work <em>whatever</em> we take the denotation of other constants to be. So, let’s consider a language with a countable supply of constants \(\lambda,\lambda_1,\lambda_2,\ldots\) whose denotation can be freely set however we please.</p>
<p>So <span class="caps">fde\(^T\)</span> is the set of relational <span class="caps">fde</span> evaluations for this language in which \(T\) is a fixed point—that is, for any sentence \(A\), that sentence receives the same evaluation as \(T\ulcorner A\urcorner\). <span class="caps">fde\(^T\)</span> can also be considered as a theory (or bitheory), if we wish to consider what holds (and fails to hold) in all such evaluations. We can do the same for <span class="caps">k3\(^T\)</span> and <span class="caps">lp\(^T\)</span>, when we restrict our attention to evaluations in which there are no truth value gluts or gaps respectively. Kripke’s original construction shows us how to make <span class="caps">k3\(^T\)</span> evaluations, and the construction generalises to <span class="caps">lp\(^T\)</span> and <span class="caps">fde\(^T\)</span> straightforwardly.</p>
<p>Now, to consider the behaviour of the paradoxical sentences, let’s fix the referent of the term \(\lambda\) to be the same as the referent of the quotation name \(\ulcorner\neg T\lambda\urcorner\), containing the term \(\lambda\) itself. It follows then that \(T\lambda\) has the same value as \(T\ulcorner\neg T\lambda\urcorner\), which has the same value as \(\neg T\lambda\). \(\lambda\) denotes a liar sentence, which says of itself that it’s not true. That is, the sentence \(\neg T\lambda\) (and its mate, \(T\lambda\)) must be assigned the value \(b\) or \(n\), since it is a fixed point for negation. The fixed point construction allows us to generate interpretations for the truth predicate in which sentences like \(\neg T\lambda\) have the value \(n\), and interpretations where those sentences have the value \(b\)—in fact, one can make the fixed point construction purely in <span class="caps">k3</span> or in <span class="caps">lp</span>—and there are also mixed models in which some paradoxical sentences have the value \(n\) and others the value \(b\).</p>
<p>So, if we take <span class="caps">fde\(^T\)</span> to be an adequate <em>logic</em> of truth, then it seems as if we should be agnostic about whether a liar sentence like \(\neg T\lambda\) has value \(n\) or \(b\), unless we can find some consideration which breaks the tie between them. This position was named “agnostaletheism” by Terence Parsons (<a href="http://www.jstor.org/stable/40231701">1990</a>).</p>
<p>Perhaps there <em>is</em> a tie-breaking consideration. If we were to be agnostic between assigning \(\neg T\lambda\) the value \(b\) and the value \(n\), this looks a lot like assigning the value \(n\). After all, according to both theories, we don’t assert \(T\lambda\) and we don’t assert its negation. This is the puzzling question: Is there an instability in <span class="caps">fde\(^T\)</span>? Does <span class="caps">fde\(^T\)</span> collapse into <span class="caps">k3\(^T\)</span>?</p>
<h3 id="symmetry-in-fde-theories">Symmetry in FDE Theories</h3>
<p>The profound symmetry between gaps and gluts in first degree entailment is manifest in the behaviour of the Routley star—a function on evaluations—introduced by Richard and Valerie Routley in the 1970s (Routley and Routley 1972). Given an evaluation \(\rho\), we can define its <em>dual</em> evaluation \(\rho^\ast\) as follows:</p>
<p>For each \(n\)-place predicate \(F\), we set:</p>
<ul>
<li>\(\langle d_1,\ldots,d_n\rangle\rho^\ast_F 1\) holds iff \(\langle d_1,\ldots,d_n\rangle\rho_F 0\) doesn’t hold.</li>
<li>\(\langle d_1,\ldots,d_n\rangle\rho^\ast_F 0\) holds iff \(\langle d_1,\ldots,d_n\rangle\rho_F 1\) doesn’t hold.</li>
</ul>
<p>In other words, an atomic formula is <em>true</em> according to \(\rho^\ast\) if and only if it is not false according to \(\rho\), and it is <em>false</em> according to \(\rho^\ast\) if and only if it is not true according to \(\rho\). This means that atomic formulas which are \(t\) by \(\rho\)’s lights are also \(t\) by \(\rho^\ast\)’s, and similarly for \(f\). But a formula that is \(n\) according to \(\rho\) is \(b\) to \(\rho^\ast\), and a formula that is \(b\) according to \(\rho\) is \(n\) to \(\rho^\ast\). The dual evaluation turns gaps into gluts, and gluts into gaps, for atomic formulas.</p>
<p>This fact generalises to all of the formulas in the language of <span class="caps">fde</span>.</p>
<p><span class="caps">fact</span>: For any formula \(A\) in the language of <span class="caps">fde</span></p>
<ul>
<li>\(A\rho^\ast 1\) holds iff \(A\rho 0\) doesn’t hold.</li>
<li>\(A\rho^\ast 0\) holds iff \(A\rho 1\) doesn’t hold.</li>
</ul>
<p>This fact is established by a simple induction on the complexity of the formula \(A\). The crucial feature of the connectives that makes this proof work is the balance between the positive and negative conditions in an evaluation \(\rho\). For example, with conjunction we have</p>
<ul>
<li>\((A\land B)\rho_\alpha 1\) iff \(A\rho_\alpha 1\) and \(B\rho_\alpha 1\)</li>
<li>\((A\land B)\rho_\alpha 0\) iff \(A\rho_\alpha 0\) or \(B\rho_\alpha 0\)</li>
</ul>
<p>So we can proceed as follows (assuming that the fact holds for the simpler formulas \(A\) and \(B\)): \((A\land B)\rho^\ast_\alpha 1\) iff \(A\rho^\ast_\alpha 1\) and \(B\rho^\ast_\alpha 1\) iff \(A\rho_\alpha 0\) doesn’t hold and \(B\rho_\alpha 0\) doesn’t hold, iff neither \(A\rho_\alpha 0\) nor \(B\rho_\alpha 0\) hold, iff \((A\land B)\rho_\alpha 0\) doesn’t hold. We have appealed to the parallel between these two clauses:</p>
<ul>
<li>\((A\land B)\rho_\alpha 1\) holds iff \(A\rho_\alpha 1\) and \(B\rho_\alpha 1\) hold.</li>
<li>\((A\land B)\rho_\alpha 0\) doesn’t hold iff \(A\rho_\alpha 0\) and \(B\rho_\alpha 0\) don’t hold.</li>
</ul>
<p>In the same way, for example, with the existential quantifier:</p>
<ul>
<li>\((\exists x)A\rho_\alpha 1\) holds iff \(A\rho_{\alpha[x:=d]} 1\) holds for some \(d\) in \(D\).</li>
<li>\((\exists x)A\rho_\alpha 0\) doesn’t hold iff \(A\rho_{\alpha[x:=d]} 0\) doesn’t hold for some \(d\) in \(D\).</li>
</ul>
<p>and the same form of argument applies. What holds for the existential quantifier and conjunction holds for the other connectives and quantifier of first degree entailment.</p>
<p><em>Excursus</em>: This argument would fail if we had connectives or quantifiers in our language whose truth and falsity conditions are less well matched. For example, we could have a connective which is conjunctive with regard to truth and disjunctive with regard to falsity:</p>
<ul>
<li>\((A\times B)\rho_\alpha 1\) iff \(A\rho_\alpha 1\) and \(B\rho_\alpha 1\)</li>
<li>\((A\times B)\rho_\alpha 0\) iff \(A\rho_\alpha 0\) and \(B\rho_\alpha 0\)</li>
</ul>
<p>Given an evaluation \(\rho\) which relates the atomic formulas \(p\) to \(1\) only and \(q\) to \(0\) only, \(\rho^\ast\) does the same. According to both \(\rho\) and \(\rho^\ast\), \(p\times q\) is related to neither \(1\) nor \(0\), breaking the symmetry between gaps and gluts. <em>End of Excursus</em></p>
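<p>Both the duality fact and the excursus counterexample can be checked mechanically for the propositional fragment. Here is a sketch (the encodings and names are mine, for illustration), including the mismatched \(\times\) connective:</p>

```python
# Sketch: the Routley star on atomic valuations, a brute-force check of the
# duality fact above for ¬, ∧ and ∨, and the excursus connective × that
# breaks it. Encodings and names are illustrative only.
from itertools import product

VALUES = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def star(rho):
    # true at rho* iff not false at rho; false at rho* iff not true at rho
    return {atom: frozenset(({1} if 0 not in v else set()) |
                            ({0} if 1 not in v else set()))
            for atom, v in rho.items()}

def ev(f, rho):
    op = f[0]
    if op == 'atom':
        return rho[f[1]]
    if op == 'not':
        a = ev(f[1], rho)
        return frozenset(({1} if 0 in a else set()) | ({0} if 1 in a else set()))
    a, b = ev(f[1], rho), ev(f[2], rho)
    if op == 'and':
        true, false = 1 in a and 1 in b, 0 in a or 0 in b
    elif op == 'or':
        true, false = 1 in a or 1 in b, 0 in a and 0 in b
    else:  # 'x': conjunctive truth AND conjunctive falsity (the excursus)
        true, false = 1 in a and 1 in b, 0 in a and 0 in b
    return frozenset(({1} if true else set()) | ({0} if false else set()))

p, q = ('atom', 'p'), ('atom', 'q')
formula = ('or', ('not', p), ('and', p, q))   # built from ¬, ∧, ∨ only
for vp, vq in product(VALUES, repeat=2):
    rho = {'p': vp, 'q': vq}
    rs = star(rho)
    # FACT: A rho* 1 iff A rho 0 doesn't hold; A rho* 0 iff A rho 1 doesn't hold
    assert (1 in ev(formula, rs)) == (0 not in ev(formula, rho))
    assert (0 in ev(formula, rs)) == (1 not in ev(formula, rho))

# The excursus counterexample: p is t, q is f, and p × q is a gap according
# to rho AND according to rho* -- the star fails to swap the gap for a glut.
rho = {'p': frozenset({1}), 'q': frozenset({0})}
print(ev(('x', p, q), rho), ev(('x', p, q), star(rho)))  # frozenset() frozenset()
```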
<p>The Routley star sends relational evaluations to relational evaluations. It does not send theories to theories. It is natural to define the star of a set of sentences as follows: For any set \(S\) of formulas, \(A\in S^\ast\) if and only if \(\neg A\not\in S\). However, the dual \(T^\ast\) of a theory \(T\) is not always itself a theory. Take, for example, the <span class="caps">fde</span>-theory \((\neg p\lor\neg q){\mathord\downarrow}\) consisting of all <span class="caps">fde</span>-consequences of \(\neg p\lor\neg q\) (it is the theory consisting of every sentence made true by every evaluation \(\rho\) where either \(p\rho 0\) or \(q\rho 0\)). In particular, we have \(\neg p\lor\neg q\in (\neg p\lor\neg q){\mathord\downarrow}\) but \(\neg p\not\in (\neg p\lor\neg q){\mathord\downarrow}\) and \(\neg q\not\in (\neg p\lor\neg q){\mathord\downarrow}\). Now consider the dual set \((\neg p\lor\neg q){\mathord\downarrow}^\ast\). This is not a theory, because \(p\in{(\neg p\lor\neg q){\mathord\downarrow}^\ast}\) (since \(\neg p\not\in(\neg p\lor\neg q){\mathord\downarrow}\)) and \(q\in{(\neg p\lor\neg q){\mathord\downarrow}^\ast}\) (since \(\neg q\not\in(\neg p\lor\neg q){\mathord\downarrow}\)) but the conjunction is not in the set: \(p\land q\not\in{(\neg p\lor\neg q){\mathord\downarrow}^\ast}\) (since \(\neg p\lor\neg q\in(\neg p\lor\neg q){\mathord\downarrow}\) ensures that \(\neg(p\land q)\in(\neg p\lor\neg q){\mathord\downarrow}\) too).</p>
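<p>This example can be verified by brute force, since membership in \((\neg p\lor\neg q){\mathord\downarrow}\) is just <span class="caps">fde</span>-validity of a sequent with premise \(\neg p\lor\neg q\), and membership in the dual set is defined via negation. A sketch (the encodings and names are mine, for illustration):</p>

```python
# Sketch: verifying the example above. A is in the theory (¬p ∨ ¬q)↓ iff the
# sequent ¬p ∨ ¬q ⊢ A is FDE-valid, and A is in its Routley dual S* iff ¬A is
# not in the theory. Encodings and names are illustrative only.
from itertools import product

VALUES = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def ev(f, rho):
    op = f[0]
    if op == 'atom':
        return rho[f[1]]
    if op == 'not':
        a = ev(f[1], rho)
        return frozenset(({1} if 0 in a else set()) | ({0} if 1 in a else set()))
    a, b = ev(f[1], rho), ev(f[2], rho)
    true = (1 in a and 1 in b) if op == 'and' else (1 in a or 1 in b)
    false = (0 in a or 0 in b) if op == 'and' else (0 in a and 0 in b)
    return frozenset(({1} if true else set()) | ({0} if false else set()))

p, q = ('atom', 'p'), ('atom', 'q')
premise = ('or', ('not', p), ('not', q))

def in_theory(A):
    """A is an FDE-consequence of ¬p ∨ ¬q: true wherever the premise is."""
    for vp, vq in product(VALUES, repeat=2):
        rho = {'p': vp, 'q': vq}
        if 1 in ev(premise, rho) and 1 not in ev(A, rho):
            return False
    return True

def in_dual(A):
    """A is in S* iff ¬A is not in S."""
    return not in_theory(('not', A))

print(in_dual(p), in_dual(q))   # True True
print(in_dual(('and', p, q)))   # False: so the dual set is not a theory
```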
<p>However, it is straightforward to show the following fact, relating the Routley star and <em>bi</em>-theories.</p>
<p><span class="caps">fact</span>: For any \(\mathcal M\)-bitheory \(\langle T,U\rangle\), the pair \(\langle \overline{U^\ast},\overline{T^\ast}\rangle\) is an \(\mathcal M^\ast\)-bitheory, where \(\overline{U^\ast}\) and \(\overline{T^\ast}\) are the sets of sentences <em>not</em> in \(U^\ast\) and \(T^\ast\) respectively.</p>
<p>Here is why: The interpretation \(\rho\) is a counterexample to \(T \ydash U\) iff \(\rho^\ast\) is a counterexample to \(\neg U \ydash \neg T\). It follows that \(\overline{U^\ast}\ydash A,\overline{T^\ast}\) fails at \(\rho^\ast\) iff \(\neg\overline{T^\ast},\neg A\ydash\neg\overline{U^\ast}\) fails at \(\rho\), but that means \(T,\neg A\ydash U\) fails at \(\rho\). So, \(\overline{U^\ast}\ydash A,\overline{T^\ast}\) holds in \(\mathcal M^\ast\) iff \(T,\neg A\ydash U\) holds in \(\mathcal M\). So, if \(\overline{U^\ast}\ydash A,\overline{T^\ast}\) is \(\mathcal M^\ast\)-valid, then \(T,\neg A\ydash U\) is \(\mathcal M\)-valid, and since \(\langle T,U\rangle\) is an \(\mathcal M\)-bitheory, we have \(\neg A\in U\), which means \(A\in \overline{U^\ast}\) as desired. The case from \(\overline{U^\ast},A\ydash \overline{T^\ast}\) to \(A\in\overline{T^\ast}\) is dual.</p>
<p>Armed with these facts concerning the Routley star, we can attend to the behaviour of our theories (and bitheories) with gaps and gluts.</p>
<h3 id="two-kinds-of-incompleteness">Two Kinds of Incompleteness</h3>
<p>Theories in <span class="caps">fde</span> can be incomplete in two distinct ways. Consider the <span class="caps">fde</span>-theory consisting of every sentence true in those evaluations which relate \(p\) to \(1\) and relate \(q\) to neither \(0\) nor \(1\), and which relate \(r\) to either \(1\) or \(0\). This set of sentences contains \(p\) and it doesn’t contain \(\neg p\). It holds \(p\) to be <em>true</em>. However, it is incomplete concerning \(q\) and \(r\)—the theory doesn’t contain \(q\) or \(\neg q\), and it also doesn’t contain \(r\) or \(\neg r\). However, the theory has <em>settled</em> \(q\) to be neither true nor false. (In all of the evaluations, \(q\) receives the value \(n\).) On the other hand, the value of \(r\) is <em>unsettled</em>. In some evaluations, \(r\) is true, in others it is false. In this way, <span class="caps">fde</span> allows for two different kinds of incompleteness.</p>
<p>Now consider theories like <span class="caps">k3\(^T\)</span> and <span class="caps">fde\(^T\)</span>. Recall, <span class="caps">fde\(^T\)</span> is given by all <span class="caps">fde</span> evaluations for which \(T\ulcorner A\urcorner\) and \(A\) receive the same value, and <span class="caps">k3\(^T\)</span> is given by all <span class="caps">k3</span> valuations with the same property. If we focus on the <em>theories</em> determined by each class of valuations, we see that a liar sentence like \(T\lambda\) is <em>incomplete</em> in both theories. In <span class="caps">k3\(^T\)</span>, it is because in any such valuation, \(T\lambda\) receives the value \(n\)—it is never true. In <span class="caps">fde\(^T\)</span> it is because in any such valuation, \(T\lambda\) either receives the value \(n\) or the value \(b\). In some valuations it is true (those where it is \(b\)) and in others, it fails to be true. Again, the theory is incomplete concerning \(T\lambda\).</p>
<p>Is there any way to distinguish these theories or distinguish this incompleteness?</p>
<p>In one sense, the answer will be <em>no</em>. The following fact contains the core of the reason why:</p>
<p><span class="caps">fact</span>: For any <span class="caps">k3</span> evaluation \(\rho\), the theory determined by \(\rho\) and the <span class="caps">fde</span> theory determined by the two evaluations \(\rho\) and \(\rho^\ast\) are identical.</p>
<p>It is easy to see that \(\rho\sqsubseteq \rho^\ast\) in the case where \(\rho\) is a <span class="caps">k3</span> evaluation. It follows that the truths according to \(\rho\) are exactly the truths according to both \(\rho\) and \(\rho^\ast\).</p>
<p>This fact <em>generalises</em>. Consider an evaluation \(\rho\), which may involve both gaps and gluts. We can define the evaluation \(\rho^{n}\), which assigns \(n\) to any atomic formula assigned either \(n\) or \(b\) by \(\rho\), and which leaves \(t\) and \(f\) fixed. It is straightforward to see that \(\rho^{n}\sqsubseteq \rho\). We can also define the evaluation \(\rho^{b}\), which assigns \(b\) to any atomic formula assigned either \(n\) or \(b\) by \(\rho\), and which leaves \(t\) and \(f\) fixed. In this case, we have \(\rho\sqsubseteq \rho^b\). So, in general any <span class="caps">fde</span> evaluation \(\rho\) is sandwiched between a <span class="caps">k3</span> evaluation and an <span class="caps">lp</span> evaluation like so: \(\rho^n\sqsubseteq\rho\sqsubseteq \rho^b\).</p>
<p>The generalisation of our previous fact can now be stated:</p>
<p><span class="caps">fact</span>: For any <span class="caps">fde</span> evaluation \(\rho\), the <span class="caps">k3</span> theory determined by \(\rho^{n}\) and the <span class="caps">fde</span> theory determined by the two evaluations \(\rho\) and \(\rho^{n}\) are identical.</p>
<p>The proof is as before: Now \(\rho^n\sqsubseteq \rho\), so it follows that the truths according to \(\rho^n\) are exactly the truths according to both \(\rho^n\) and \(\rho\).</p>
<p>Now, the operation of sending all gaps and gluts either to <em>gaps</em> or to <em>gluts</em> does not disturb the logic of truth.</p>
<p><span class="caps">fact</span>: If \(\rho\) is an <span class="caps">fde\(^T\)</span> evaluation, then so are \(\rho^n\) and \(\rho^b\).</p>
<p>The only way that \(\rho^n\) could fail to be an <span class="caps">fde\(^T\)</span> evaluation is if for some formula \(A\), the values in \(\rho^n\) of \(A\) and \(T\ulcorner A\urcorner\) differ. But if the values of two formulas differ in \(\rho^n\), they also differ in \(\rho\). (The same holds for \(\rho^b\), too.)</p>
<p>Now we can state our general fact, concerning truth theories in <span class="caps">fde</span> and <span class="caps">k3</span>. The basic idea is that the theories are identical, since theories that take the paradoxical sentences to be \(n\) and those that are agnostic between \(n\) and \(b\) take the same claims to be <em>true</em>. This is fair enough as far as it goes, but stated in this bald way, it does not go very far at all. The theories <span class="caps">fde\(^T\)</span> and <span class="caps">k3\(^T\)</span> <em>obviously</em> have the same theorems—they both have <em>no</em> theorems. The <em>silent</em> evaluation which sends absolutely every formula to \(n\) is a <span class="caps">k3\(^T\)</span> (and hence, <span class="caps">fde\(^T\)</span>) evaluation, and this shows that both <span class="caps">k3\(^T\)</span> and <span class="caps">fde\(^T\)</span> have no theorems at all. So, merely showing that <span class="caps">k3\(^T\)</span> and <span class="caps">fde\(^T\)</span> share theorems does not say very much. We can do much better than this.</p>
<p>Suppose we have a set \(\mathcal M\) of evaluations, such that whenever \(\rho\in\mathcal M\) we also have \(\rho^n\in \mathcal M\). Let \(\mathcal M^n\) be the set of <span class="caps">k3</span> evaluations in \(\mathcal M\)—so \({\mathcal M^n}\) is \(\lbrace\rho^n:\rho\in\mathcal M\rbrace\). We have the following result:</p>
<p><span class="caps">fact</span>: The theory \(T_\mathcal{M}\) of sentences true in all evaluations in \(\mathcal{M}\) is identical to the theory \(T_{\mathcal{M}^n}\) of sentences true in all evaluations in \(\mathcal{M}^n\).</p>
<p>Clearly \(T_\mathcal{M}\subseteq T_{\mathcal M^n}\). To show the converse, suppose the formula \(A\) is not in \(T_\mathcal{M}\). So, it fails to be true on some evaluation \(\rho\in\mathcal{M}\). It also fails in \(\rho^n\), which is in \(\mathcal{M^n}\).</p>
<p>So, for example, suppose we have some <span class="caps">k3</span> valuation \(\rho\) for a language without the truth predicate, and we consider the set \(\mathcal M\) of all <span class="caps">fde\(^T\)</span> evaluations extending \(\rho\) with a truth predicate. Here, grounded \(T\)-sentences will receive values as determined by the underlying valuation \(\rho\), while other sentences will vary among all four values, \(t\), \(f\), \(n\) and \(b\), constrained only by the requirement that \(A\) and \(T\ulcorner A\urcorner\) agree in value. The set \(\mathcal M^n\) is the subset of such evaluations in which the \(T\)-sentences receive the values \(t\), \(f\) or \(n\), not \(b\). Our fact tells us the theories of \(\mathcal M\) and \(\mathcal M^n\) are indistinguishable. At the level of theories, we cannot distinguish between paradoxical sentences determinately receiving a gap value, and agnosticism between gaps and gluts.</p>
<p>Thankfully, we don’t need to remain at the level of <em>theories</em>. The sets \(\mathcal M\) and \(\mathcal M^n\) determine the same set of theorems, but they determine different sets of cotheorems. While they rule <em>in</em> the same sentences, they rule out different sentences. The liar sentence \(\neg T\lambda\) is <em>true</em> in some valuations in \(\mathcal M\) (those that assign it the value \(b\)) while it is true in no valuations in \(\mathcal M^n\). In all valuations in \(\mathcal M^n\) a liar sentence must receive the value \(n\), so it is true in no valuation at all. The <em>untruths</em> of \(\mathcal M\) differ from the untruths of \(\mathcal M^n\).</p>
<p>If we attend to bitheories, the symmetry between gaps and gluts is completely restored. For our facts concerning gaps, we have matching facts concerning gluts.</p>
<p><span class="caps">fact</span>: For any <span class="caps">fde</span> evaluation \(\rho\), the <span class="caps">lp</span> cotheory determined by \(\rho^{b}\) (the formulas \(U_{\rho^b}\) untrue in \(\rho^b\)) and the <span class="caps">fde</span> cotheory determined by the two evaluations \(\rho\) and \(\rho^{b}\) (the formulas \(U_{\lbrace\rho,\rho^b\rbrace}\)) are identical.</p>
<p><span class="caps">fact</span>: If \(\mathcal M\) is a set of valuations where for every \(\rho\) in \(\mathcal M\) the valuation \(\rho^b\) is also in \(\mathcal M\), then the cotheory \(U_\mathcal{M}\) of sentences untrue in all evaluations in \(\mathcal{M}\) is identical to the cotheory \(U_{\mathcal{M}^b}\) of sentences untrue in all evaluations in \(\mathcal{M}^b\).</p>
<p>Symmetry is regained, and we can distinguish between agnostaletheism concerning paradoxical sentences and those views which assign them a gap, or assign them a glut. Glut views are distinguished from agnostaletheism as <em>theories</em>—they hold different things to be true, while gap views are distinguished from agnostaletheism as <em>cotheories</em>—they hold different things to be untrue.</p>
<p>That is all well and good when it comes to formally distinguishing these three views of paradoxical sentences. However, the puzzle wasn’t just a puzzle about the formal development of these views. It is also a puzzle concerning what it is to <em>hold</em> those views, and this issue remains, even if we reject the model theory and the technical devices of theories, cotheories and bitheories.</p>
<h3 id="assertion-and-denial-in-fde-k3-and-lp">Assertion and Denial in FDE, K3 and LP</h3>
<p>To answer the puzzle in those terms, we should say something about the speech acts of assertion and denial, or the cognitive states of accepting and rejecting. These are the practical analogues of the theoretical and abstract notions of theory and cotheory. To connect talk of accepting and rejecting (or assertion and denial) with logical notions, we need some kind of bridge principle. A principle I have endorsed elsewhere (Restall <a href="http://consequently.org/writing/multipleconclusions/">2005</a>, <a href="http://consequently.org/writing/adnct">2013</a>, <a href="http://consequently.org/writing/assertiondenialparadox/">2015</a>) goes like this:</p>
<p><span class="caps">bridge principle 1</span>: If the sequent \(X\ydash Y\) is valid, then don’t accept (or assert) every member of \(X\) and reject (or deny) every member of \(Y\).</p>
<p>To constrain what you accept and reject in line with such a bridge principle is to maintain a kind of coherence in your cognitive state. Since \(A\lor B\ydash A,B\) is valid, you would not accept the disjunction \(A\lor B\) and reject both disjuncts \(A\) and \(B\). If (as <span class="caps">lp</span> would have it) \(\ydash C\lor \neg C\) is valid, you would not reject that instance of the law of the excluded middle. If (as <span class="caps">k3</span> would have it) \(D\land\neg D\ydash\) is valid, you would not accept that contradiction.</p>
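<p>The three example (in)validities just mentioned can be verified mechanically, by checking every evaluation over the relevant values. Here is a brute-force Python sketch of my own (the encoding and the name <code>valid</code> are assumptions, not from the text): a sequent \(X\ydash Y\) is valid over a class of evaluations when no evaluation in the class makes every member of \(X\) true and every member of \(Y\) untrue.</p>

```python
from itertools import product

# Relational presentation: values are subsets of {"T", "F"}; "true" means T is in the value.
T, F = "T", "F"
t, f, n, b = frozenset({T}), frozenset({F}), frozenset(), frozenset({T, F})

def val(fla, rho):
    """Evaluate atoms (strings) and tuples ('not', A), ('and', A, B), ('or', A, B)."""
    if isinstance(fla, str):
        return rho[fla]
    if fla[0] == "not":
        a = val(fla[1], rho)
        return frozenset(({T} if F in a else set()) | ({F} if T in a else set()))
    a, c = val(fla[1], rho), val(fla[2], rho)
    if fla[0] == "and":
        return frozenset(({T} if T in a and T in c else set()) |
                         ({F} if F in a or F in c else set()))
    return frozenset(({T} if T in a or T in c else set()) |
                     ({F} if F in a and F in c else set()))

def valid(X, Y, values, atoms):
    """X |- Y: no evaluation makes every member of X true and every member of Y untrue."""
    for vs in product(values, repeat=len(atoms)):
        rho = dict(zip(atoms, vs))
        if all(T in val(A, rho) for A in X) and all(T not in val(A, rho) for A in Y):
            return False
    return True

K3, LP, FDE = [t, f, n], [t, f, b], [t, f, n, b]
assert valid([("or", "a", "b")], ["a", "b"], FDE, ["a", "b"])  # A v B |- A, B
assert valid([], [("or", "c", ("not", "c"))], LP, ["c"])       # LP: |- C v ~C
assert not valid([], [("or", "c", ("not", "c"))], K3, ["c"])   # ... but not in K3
assert valid([("and", "d", ("not", "d"))], [], K3, ["d"])      # K3: D & ~D |-
assert not valid([("and", "d", ("not", "d"))], [], LP, ["d"])  # ... but not in LP
```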
<p>With this bridge principle at hand, we can distinguish the agnostaletheist (who uses a range of <span class="caps">fde\(^T\)</span> valuations to define validity), the <span class="caps">k3</span>-theorist (who restricts her attention to <span class="caps">k3\(^T\)</span> valuations) and the <span class="caps">lp</span>-theorist (who restricts his attention to <span class="caps">lp\(^T\)</span> valuations). The <span class="caps">k3</span> theorist will not accept any contradiction. Contradictions are never true in any evaluation of theirs. The <span class="caps">lp</span> theorist will never reject any excluded middle. Excluded middle disjunctions are never untrue in their evaluations. The <span class="caps">fde</span> theorist, on the other hand, can reject excluded middles and accept contradictions. That concerns <em>validity</em> and the first bridge principle, which amounts to a kind of coherence (or consistency) principle.</p>
<p>To accept a contingent <em>theory</em>, or better, the <em>bitheory</em> \(\langle T,U\rangle\) is to constrain your acceptings and rejectings further.</p>
<p><span class="caps">bridge principle 2</span>: To accept a bitheory \(\langle T,U\rangle\) is to accept each member of \(T\) and to reject each member of \(U\).</p>
<p>This constraint is compatible with the first bridge principle if the pair \(\langle T,U\rangle\) is indeed a bitheory. In that case, the sequent \(T\ydash U\) is not valid (if it were, then each formula \(A\) would be in \(T\), since \(T\ydash A,U\) is valid, and in \(U\), since \(T,A\ydash U\) is valid, but that is impossible, since \(T\) and \(U\) are, by definition, disjoint), so there is no issue with accepting all of \(T\) and rejecting all of \(U\).</p>
<p>Consider our three different views of the truth predicate as determining bitheories: (1) <span class="caps">fde\(^T\)</span>, allowing both gaps and gluts; (2) <span class="caps">k3\(^T\)</span>, allowing only gaps; and (3) <span class="caps">lp\(^T\)</span>, allowing only gluts. Adopting <em>bridge principle 2</em> for each bitheory in turn, we can see the difference in our acceptings and rejectings. If we accept <span class="caps">k3\(^T\)</span>, we reject all contradictions, even those involving the liar sentence: \(T\lambda\land\neg T\lambda\). If we accept <span class="caps">lp\(^T\)</span>, we accept all excluded middles, including the excluded middle involving the liar: \(T\lambda\lor\neg T\lambda\). But the agnostalethic position, accepting <span class="caps">fde\(^T\)</span>, commits us to neither: we are free to accept the contradiction \(T\lambda\land\neg T\lambda\) or to reject the disjunction \(T\lambda\lor\neg T\lambda\).</p>
<p>So, an agnostaletheist and a gap theorist indeed agree on what to accept, but they disagree on what is to be rejected. In a similar way, an agnostaletheist and a glut theorist agree on what to reject, but they disagree on what to accept. Keeping the symmetry between accepting and rejecting in view, we have parity between gaps and gluts, and the agnostalethic position can be distinguished from its two neighbours.</p>
<h3 id="references">References</h3>
<p>Terence Parsons (1990), “<a href="http://www.jstor.org/stable/40231701">True Contradictions</a>”
<em>Canadian Journal of Philosophy</em> 20:3, 335-353.</p>
<p>Graham Priest (2008), <a href="https://www.amazon.com/Introduction-Non-Classical-Logic-Introductions-Philosophy/dp/0521670268/consequentlyorg"><em>An Introduction to Non-Classical Logic</em>: <em>From If to Is</em></a>, Second Edition, Cambridge University Press.</p>
<p>Greg Restall (2005) “<a href="http://consequently.org/writing/multipleconclusions/">Multiple Conclusions</a>,” pages 189–205 in <em>Logic, Methodology and Philosophy of Science</em>: <em>Proceedings of the Twelfth International Congress</em>, edited by Petr Hajek, Luis Valdes-Villanueva and Dag Westerstahl, King’s College Publications.</p>
<p>Greg Restall (2013) “<a href="http://consequently.org/writing/adnct">Assertion, Denial and Non-Classical Theories</a>,” pp. 81–99 in <em>Paraconsistency</em>: <em>Logic and Applications</em>, edited by Koji Tanaka, Francesco Berto, Edwin Mares and Francesco Paoli.</p>
<p>Greg Restall (2015) “<a href="http://consequently.org/writing/assertiondenialparadox/">Assertion, Denial, Accepting, Rejecting, Symmetry and Paradox</a>,” pages 310–321 in <em>Foundations of Logical Consequence</em>, edited by Colin R. Caret and Ole T. Hjortland, Oxford University Press.</p>
<p>Richard Routley and Val Routley (1972) “<a href="http://www.jstor.org/stable/2214309">The Semantics of First Degree Entailment</a>,” <em>Noûs</em> 6:4, 335–359.</p>PHIL30043: The Power and Limits of Logic
http://consequently.org/class/2016/phil30043/
Fri, 06 Nov 2015 00:00:00 UTChttp://consequently.org/class/2016/phil30043/
<p><strong><span class="caps">PHIL30043</span>: The Power and Limits of Logic</strong> is a <a href="https://handbook.unimelb.edu.au/view/2016/PHIL30043">University of Melbourne undergraduate subject</a>. It covers the metatheory of classical first order predicate logic, beginning at the <em>Soundness</em> and <em>Completeness</em> Theorems (proved not once but <em>twice</em>, first for a tableaux proof system for predicate logic, then a Hilbert proof system), through the <em>Deduction Theorem</em>, <em>Compactness</em>, <em>Cantor’s Theorem</em>, the <em>Downward Löwenheim–Skolem Theorem</em>, <em>Recursive Functions</em>, <em>Register Machines</em>, <em>Representability</em> and ending up at <em>Gödel’s Incompleteness Theorems</em> and <em>Löb’s Theorem</em>.</p>
<figure>
<img src="http://consequently.org/images/godel.jpg" alt="Kurt Godel, seated">
<figcaption>Kurt Gödel, seated</figcaption>
</figure>
<p>The subject is taught to University of Melbourne undergraduate students (for Arts students as a part of the Philosophy major, for non-Arts students, as a breadth subject). Details for enrolment are <a href="https://handbook.unimelb.edu.au/view/2016/PHIL30043">here</a>. I make use of video lectures I have made <a href="http://vimeo.com/album/2262409">freely available on Vimeo</a>.</p>
<h3 id="outline">Outline</h3>
<p>The course is divided into four major sections and a short prelude. Here is a list of all of the videos, in case you’d like to follow along with the content.</p>
<h4 id="prelude">Prelude</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/59401942">Logical Equivalence</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59403292">Disjunctive Normal Form</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59403535">Why DNF Works</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59463569">Prenex Normal Form</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59466141">Models for Predicate Logic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/59880539">Trees for Predicate Logic</a></li>
</ul>
<h4 id="completeness">Completeness</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/59883806">Introducing Soundness and Completeness</a></li>
<li><a href="http://vimeo.com/album/2262409/video/60249309">Soundness for Tree Proofs</a></li>
<li><a href="http://vimeo.com/album/2262409/video/60250515">Completeness for Tree Proofs</a></li>
<li><a href="http://vimeo.com/album/2262409/video/61677028">Hilbert Proofs for Propositional Logic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/61685762">Conditional Proof</a></li>
<li><a href="http://vimeo.com/album/2262409/video/62221512">Hilbert Proofs for Predicate Logic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/103720089">Theories</a></li>
<li><a href="http://vimeo.com/album/2262409/video/103757399">Soundness and Completeness for Hilbert Proofs for Predicate Logic</a></li>
</ul>
<h4 id="compactness">Compactness</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/63454250">Counting Sets</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63454732">Diagonalisation</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63454732">Compactness</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63455121">Non-Standard Models</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63462354">Inexpressibility of Finitude</a></li>
<li><a href="http://vimeo.com/album/2262409/video/63462519">Downward Löwenheim–Skolem Theorem</a></li>
</ul>
<h4 id="computability">Computability</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/64162062">Functions</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64167354">Register Machines</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64207986">Recursive Functions</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64435763">Register Machine computable functions are Recursive</a></li>
<li><a href="http://vimeo.com/album/2262409/video/64604717">The Uncomputable</a></li>
</ul>
<h4 id="undecidability-and-incompleteness">Undecidability and Incompleteness</h4>
<ul>
<li><a href="http://vimeo.com/album/2262409/video/65382456">Deductively Defined Theories</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65392670">The Finite Model Property</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65393543">Completeness</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65440901">Introducing Robinson’s Arithmetic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65442289">Induction and Peano Arithmetic</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65443650">Representing Functions and Sets</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65483655">Gödel Numbering and Diagonalisation</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65497886">Q (and any consistent extension of Q) is undecidable, and incomplete if it’s deductively defined</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65498016">First Order Predicate Logic is Undecidable</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65501745">True Arithmetic is not Deductively Defined</a></li>
<li><a href="http://vimeo.com/album/2262409/video/65505372">If Con(PA) then PA doesn’t prove Con(PA)</a></li>
</ul>
<h3 id="phil20030">PHIL20030: Meaning, Possibility and Paradox</h3>
<p><strong><span class="caps">PHIL20030</span>: Meaning, Possibility and Paradox</strong> is a <a href="http://unimelb.edu.au">University of Melbourne</a> undergraduate subject. The idea that the meaning of a sentence depends on the meanings of its parts is fundamental to the way we understand logic, language and the mind. In this subject, we look at the different ways that this idea has been applied in logic throughout the 20th Century and into the present day.</p>
<p>In the first part of the subject, our focus is on the concepts of necessity and possibility, and the way that ‘possible worlds semantics’ has been used in theories of meaning. We will focus on the logic of necessity and possibility (modal logic), times (temporal logic), conditionality and dependence (counterfactuals), and the notions of analyticity and apriority so important to much of philosophy.</p>
<p>In the second part of the subject, we examine closely the assumption that every statement we make is either true or false but not both. We will examine the paradoxes of truth (like the so-called ‘liar paradox’) and vagueness (the ‘sorites paradox’), and we will investigate different attempts at resolving these paradoxes, either by going beyond our traditional views of truth (using ‘many valued logics’) or by defending the traditional perspective.</p>
<p>The subject serves as an introduction to ways that logic is applied in the study of language, epistemology and metaphysics, so it is useful to those who already know some philosophy and would like to see how logic relates to those issues. It is also useful to those who already know some logic and would like to learn new logical techniques and see how these techniques can be applied.</p>
<p>The subject is offered to University of Melbourne undergraduate students (for Arts students as a part of the Philosophy major, for non-Arts students, as a breadth subject). Details for enrolment are <a href="https://handbook.unimelb.edu.au/view/2016/PHIL20030">here</a>.</p>
<figure>
<img src="http://consequently.org/images/peter-rozsa-small.png" alt="Rosza Peter">
<figcaption>The writing down of a formula is an expression of our joy that we can answer all these questions by means of one argument. — Rózsa Péter, Playing with Infinity</figcaption>
</figure>
<p>I make use of video lectures I have made <a href="http://vimeo.com/album/2470375">freely available on Vimeo</a>. If you’re interested in this sort of thing, I hope they’re useful. Of course, I appreciate any constructive feedback you might have.</p>
<h3 id="outline">Outline</h3>
<p>The course is divided into four major sections and a short prelude. Here is a list of all of the videos, in case you’d like to follow along with the content.</p>
<h4 id="classical-logic">Classical Logic</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/71195118">On Logic and Philosophy</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71196826">Classical Logic—Models</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71200032">Classical Logic—Tree Proofs</a></li>
</ul>
<h4 id="meaning-sense-reference">Meaning, Sense, Reference</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/71206884">Reference and Compositionality</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71226471">Sense and Reference</a></li>
</ul>
<h4 id="basic-modal-logic">Basic Modal Logic</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/71556216">Introducing Possibility and Necessity</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71558401">Models for Basic Modal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71558696">Tree Proofs for Basic Modal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/71560394">Soundness and Completeness for Basic Modal Logic</a></li>
</ul>
<h4 id="normal-modal-logics">Normal Modal Logics</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/72135540">What Are Possible Worlds?</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72137443">Conditions on Accessibility Relations</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72137856">Equivalence Relations, Universal Relations and S5</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72139085">Tree Proofs for Normal Modal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72140275">Applying Modal Logics</a></li>
</ul>
<h4 id="double-indexing">Double Indexing</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/72140275">Temporal Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72143616">Actuality and the Present</a></li>
<li><a href="https://vimeo.com/album/2470375/video/72266887">Two Dimensional Modal Logic</a></li>
</ul>
<h4 id="conditionality">Conditionality</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/74494229">Strict Conditionals</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74498276"><em>Ceteris Paribus</em> Conditionals</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74504639">Similarity</a></li>
</ul>
<h4 id="three-values">Three Values</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/74628150">More than Two Truth Values</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74636384">K3</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74680756">Ł3</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74680954">LP</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74682689">RM3</a></li>
</ul>
<h4 id="four-values">Four Values</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/74685077">FDE: Relational Evaluations</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74685986">FDE: Tree Proofs</a></li>
<li><a href="https://vimeo.com/album/2470375/video/74695340">FDE: Routley Evaluations</a></li>
</ul>
<h4 id="paradoxes">Paradoxes</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/76045884">Truth and the Liar Paradox</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76049193">Fixed Point Construction</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76055233">Curry’s Paradox</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76057722">The Sorites Paradox</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76061452">Fuzzy Logic</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76066245">Supervaluationism</a></li>
<li><a href="https://vimeo.com/album/2470375/video/76070423">Epistemicism</a></li>
</ul>
<h4 id="what-to-do-with-so-many-logical-systems">What to do with so many logical systems</h4>
<ul>
<li><a href="https://vimeo.com/album/2470375/video/76070953">Logical Monism and Pluralism</a></li>
</ul>
<h3 id="ptpla-nasslli">Proof Theory: Logical and Philosophical Aspects</h3>
<p>This is an intensive class on logical and philosophical issues in proof theory, taught by <a href="http://www.standefer.net">Shawn Standefer</a> and me at <a href="http://ruccs.rutgers.edu/nasslli2016"><span class="caps">nasslli</span> 2016</a>. We cover cut elimination, some substructural logics, and hypersequents, with a bit of inferentialism and bilateralism mixed in. Here are the slides for each day of the course.</p>
<ul>
<li><strong>Day 1</strong>: <em><a href="http://consequently.org/slides/nassli2016-pt-lpa-1-foundations.pdf">Foundations</a></em>. An introduction to the sequent calculus for intuitionistic and classical logic, cut elimination and some of its consequences.</li>
<li><strong>Day 2</strong>: <em><a href="http://consequently.org/slides/nassli2016-pt-lpa-2-substructural-logics.pdf">Substructural Logics</a></em>. Structural rules in sequent systems; the case of distribution; different substructural logics and their applications; revisiting cut elimination in substructural logics.</li>
<li><strong>Day 3</strong>: <em><a href="http://consequently.org/slides/nassli2016-pt-lpa-3-beyond-sequents.pdf">Beyond Sequents</a></em>. Sequent systems for basic modal logics; three ways to move beyond traditional sequent systems—display logic, labelled sequents and tree hypersequents.</li>
<li><strong>Day 4</strong>: <em><a href="http://consequently.org/slides/nassli2016-pt-lpa-4-hypersequents-for-modal-logics.pdf">Hypersequents for S5, Actuality and 2D Modal Logics</a></em>. From tree hypersequents to simple hypersequents for S5; extending simple hypersequents to model actuality and two-dimensional modal logic.</li>
<li><strong>Day 5</strong>: <em><a href="http://consequently.org/slides/nassli2016-pt-lpa-5-semantics.pdf">Semantics</a></em>. Normative pragmatics; the scope of rules and definitions; between proofs and models; moving beyond propositional logics.</li>
</ul>
<h3 id="readings-and-references">Readings and References</h3>
<h4 id="foundations">Foundations</h4>
<ul>
<li>Gerhard Gentzen, “<a href="http://link.springer.com/article/10.1007%2FBF01201353">Untersuchungen über das logische Schließen—I</a>”, <em>Mathematische Zeitschrift</em>, 39(1):176–210, 1935.</li>
<li>Gerhard Gentzen, <em><a href="https://www.amazon.com/Collected-Papers-Study-Foundation-Mathematics/dp/072042254X/consequentlyorg">The Collected Papers of Gerhard Gentzen</a></em>, Translated and Edited by M. E. Szabo, North Holland, 1969.</li>
<li>Albert Grigorevich Dragalin, <a href="https://www.amazon.com/Mathematical-Intuitionism-Introduction-Translations-Monographs/dp/0821845209/consequentlyorg"><em>Mathematical Intuitionism</em>: <em>Introduction to Proof Theory</em></a>, American Mathematical Society, Translations of Mathematical Monographs, 1987.</li>
<li>Roy Dyckhoff, “<a href="http://www.jstor.org/stable/2275431">Contraction-Free Sequent Calculi for Intuitionistic Logic</a>”, <em>Journal of Symbolic Logic</em>, 57:795–807, 1992.</li>
<li>Sara Negri and Jan von Plato, <em><a href="https://www.amazon.com/Structural-Proof-Theory-Professor-Negri/dp/0521793076/consequentlyorg">Structural Proof Theory</a></em>, Cambridge University Press, 2002.</li>
<li>Katalin Bimbó, <a href="https://www.amazon.com/Proof-Theory-Formalisms-Mathematics-Applications/dp/1466564660/consequentlyorg"><em>Proof Theory</em>: <em>Sequent Calculi and Related Formalisms</em></a>, CRC Press, Boca Raton, FL, 2015</li>
<li>Peter Milne, “<a href="http://www.jstor.org/stable/27903796">Harmony, Purity, Simplicity and a ‘Seemingly Magical Fact’</a>”, <em>The Monist</em>, 85(4):498–534, 2002</li>
</ul>
<h4 id="substructural-logics">Substructural Logics</h4>
<ul>
<li>Greg Restall, <em><a href="http://consequently.org/writing/isl/">An Introduction to Substructural Logics</a></em>, Routledge 2000.</li>
<li>Francesco Paoli, <a href="https://www.amazon.com/Substructural-Logics-Primer-F-Paoli/dp/9048160146"><em>Substructural Logics</em>: <em>A Primer</em></a>, Springer 2002.</li>
<li>Greg Restall,
“<a href="http://consequently.org/writing/HPPLrssl/">Relevant and Substructural Logics</a>”, pp. 289–396 in <em>Logic and the Modalities in the Twentieth Century</em>, Dov Gabbay and John Woods (editors), Elsevier 2006.</li>
<li>Alan Ross Anderson and Nuel D. Belnap,
<em>Entailment</em>: <em>The Logic of Relevance and Necessity</em>, Volume 1, Princeton University Press, 1975</li>
<li>Alan Ross Anderson, Nuel D. Belnap and J. Michael Dunn,
<em>Entailment</em>: <em>The Logic of Relevance and Necessity</em>, Volume 2, Princeton University Press, 1992</li>
<li>Edwin D. Mares, <a href="https://www.amazon.com/Relevant-Logic-Interpretation-Edwin-Mares/dp/0521039258/consequentlyorg"><em>Relevant Logic</em>: <em>A Philosophical Interpretation</em></a>, Cambridge University Press, 2004</li>
<li>J. Michael Dunn and Greg Restall,
“<a href="http://consequently.org/writing/rle/">Relevance Logic</a>,” pp. 1–136 in <em>The Handbook of Philosophical Logic</em>, vol. 6, edition 2, Dov Gabbay and Franz Guenther (editors)</li>
<li>Jean-Yves Girard, “<a href="http://iml.univ-mrs.fr/~girard/linear.pdf">Linear Logic</a>,” <em>Theoretical Computer Science</em>, 50:1–101, 1987</li>
<li>Jean-Yves Girard, Yves Lafont and Paul Taylor, <em><a href="http://www.paultaylor.eu/stable/Proofs+Types.html">Proofs and Types</a></em>, Cambridge University Press, 1989</li>
<li>Joachim Lambek, “<a href="http://www.jstor.org/stable/2310058">The Mathematics of Sentence Structure</a>,” <em>American Mathematical Monthly</em>, 65(3):154–170, 1958</li>
<li>Glyn Morrill, <a href="https://www.amazon.com/Type-Logical-Grammar-Categorial-Logic/dp/0792332261/consequentlyorg"><em>Type Logical Grammar</em>: <em>Categorial Logic of Signs</em></a>, Kluwer, 1994</li>
<li>Johan van Benthem, <em><a href="https://www.amazon.com/Language-Action-130-Foundations-Mathematics/dp/0444890009/consequentlyorg">Language in Action</a></em>, North-Holland and MIT Press, 1995.</li>
<li>Richard Moot and Christian Retoré, <em><a href="https://www.amazon.com/Logic-Categorial-Grammars-deductive-semantics/dp/3642315542/consequentlyorg">The Logic of Categorial Grammars</a></em>, Springer 2012.</li>
<li>Chris Barker, “<a href="http://semprag.org/article/view/sp.3.10">Free Choice Permission as Resource-Sensitive Reasoning</a>,” <em>Semantics and Pragmatics</em>, 3:10, 2010, 1-38.</li>
</ul>
<h4 id="beyond-sequents">Beyond Sequents</h4>
<ul>
<li>Nuel D. Belnap, “<a href="http://www.pitt.edu/~belnap/87displaylogic.pdf">Display Logic</a>,” <em>Journal of Philosophical Logic</em>, 11:375–417, 1982.</li>
<li>Heinrich Wansing, <em><a href="https://www.amazon.com/Displaying-Modal-Logic-Trends/dp/9048150795/consequentlyorg">Displaying Modal Logic</a></em>, Kluwer Academic Publishers, 1998.</li>
<li>Sara Negri, “<a href="http://www.jstor.org/stable/30226848">Proof Analysis in Modal Logic</a>,” <em>Journal of Philosophical Logic</em>, 34:507–544, 2005.</li>
<li>Arnon Avron, “<a href="http://link.springer.com/article/10.1007/BF01531058">Hypersequents, logical consequence and intermediate logics for concurrency</a>,” <em>Annals of Mathematics and Artificial Intelligence</em>, 4:225–248, 1991.</li>
<li>Arnon Avron, “<a href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=23698822FD590FB4FAAF5D33AA1BC365?doi=10.1.1.39.9225&rep=rep1&type=pdf">The method of hypersequents in the proof theory of propositional non-classical logics</a>,” pp. 1–32 in <em>Logic</em>: <em>From Foundations to Applications</em>, edited by W. Hodges et al., Oxford Science Publications, 1996.</li>
<li>Francesca Poggiolesi, <em><a href="http://www.amazon.com/Gentzen-Calculi-Modal-Propositional-Author/dp/B010BCHPNQ/consequentlyorg">Gentzen Calculi for Modal Propositional Logic</a></em>, Springer, 2011.</li>
<li>Francesca Poggiolesi and Greg Restall, “<a href="http://consequently.org/writing/interp-apply-ptml">Interpreting and Applying Proof Theory for Modal Logic</a>,” <em>New Waves in Philosophical Logic</em>, ed. Greg Restall and Gillian Russell, Palgrave MacMillan, 2012.</li>
</ul>
<h4 id="hypersequents-for-s5-actuality-and-2d-modal-logics">Hypersequents for S5, Actuality and 2D Modal Logics</h4>
<ul>
<li>Francesca Poggiolesi, “<a href="http://dx.doi.org/10.1017/S1755020308080040">A Cut-Free Simple Sequent Calculus for Modal Logic S5</a>,” <em>Review of Symbolic Logic</em> 1:1, 3–15, 2008.</li>
<li>Greg Restall, “<a href="http://consequently.org/writing/s5nets">Proofnets for S5: sequents and circuits for modal logic</a>,”
<em>Logic Colloquium 2005</em>, ed. C. Dimitracopoulos, L. Newelski, D. Normann and J. R. Steel, Cambridge University Press, 2008.</li>
<li>Kaja Bednarska and Andrzej Indrzejczak, “<a href="http://dx.doi.org/10.12775/LLP.2015.018">Hypersequent Calculi for S5: the methods of cut elimination</a>,” <em>Logic and Logical Philosophy</em>, 24, 277–311, 2015.</li>
<li>Greg Restall, “<a href="http://consequently.org/writing/cfss2dml">A Cut-Free Sequent System for Two Dimensional Modal Logic, and why it matters</a>,” <em>Annals of Pure and Applied Logic</em>, 163:11, 1611–1623, 2012.</li>
<li>Martin Davies, “<a href="http://www.jstor.org/stable/4321462">Reference, Contingency, and the Two-Dimensional Framework</a>,” <em>Philosophical Studies</em>, 118(1):83–131, 2004.</li>
<li>Martin Davies and Lloyd Humberstone, “<a href="http://www.jstor.org/stable/4319391">Two Notions of Necessity</a>,” <em>Philosophical Studies</em>, 38(1):1–30, 1980.</li>
<li>Lloyd Humberstone, “<a href="http://www.jstor.org/stable/4321460">Two-Dimensional Adventures</a>,” <em>Philosophical Studies</em>, 118(1):257–277, 2004.</li>
</ul>
<h4 id="semantics">Semantics</h4>
<ul>
<li>Arthur Prior, “<a href="http://cas.uchicago.edu/workshops/wittgenstein/files/2009/04/prior.pdf">The Runabout Inference Ticket</a>,” <em>Analysis</em>, 21(2): 38–39, 1960.</li>
<li>Nuel Belnap, “<a href="http://www.jstor.org/stable/3326862">Tonk, Plonk and Plink</a>,” <em>Analysis</em>, 22(6): 130–134, 1962.</li>
<li>Nuel Belnap, “<a href="https://www.researchgate.net/profile/Nuel_Belnap/publication/225896511_Declaratives_are_not_enough/links/53f645c90cf2fceacc7128ff.pdf">Declaratives Are Not Enough</a>,” <em>Philosophical Studies</em>, 59(1): 1–30, 1990.</li>
<li>Jaroslav Peregrin, “<a href="https://jarda.peregrin.cz/mybibl/PDFTxt/526.pdf">An Inferentialist Approach to Semantics: Time for a New Kind of Structuralism?</a>,” <em>Philosophy Compass</em>, 3(6): 1208–1223, 2008.</li>
<li>Greg Restall, “<a href="http://consequently.org/writing/multipleconclusions/">Multiple Conclusions</a>” pp. 189–205 in <em>Logic, Methodology and Philosophy of Science</em>: <em>Proceedings of the Twelfth International Congress</em>, edited by P. Hájek, L. Valdés-Villanueva and D. Westerståhl, KCL Publications, 2005.</li>
<li>Greg Restall, “<a href="http://consequently.org/writing/tvpt">Truth Values and Proof Theory</a>” <em>Studia Logica</em>, 92(2):241–264, 2009.</li>
<li>Raymond Smullyan, <em>First-Order Logic</em>. Springer-Verlag, 1968.</li>
<li>Gaisi Takeuti, <em>Proof Theory</em>, 2nd edition. Elsevier, 1987.</li>
<li>Michael Kremer, “<a href="http://www.jstor.org/stable/30226394">Kripke and the Logic of Truth</a>,” <em>Journal of Philosophical Logic</em>, 17:225–278, 1988.</li>
<li>Sara Negri and Jan von Plato, <em><a href="https://www.amazon.com/Structural-Proof-Theory-Professor-Negri/dp/0521793076/consequentlyorg">Structural Proof Theory</a></em>, Cambridge University Press, 2002.</li>
<li>Greg Restall, “<a href="http://consequently.org/writing/adnct/">Assertion, Denial and Non-Classical Theories</a>,” in <em><a href="http://link.springer.com/book/10.1007/978-94-007-4438-7">Paraconsistency: Logic and Applications</a></em>, edited by Koji Tanaka, Francesco Berto, Edwin Mares and Francesco Paoli, pp. 81–99, 2013.</li>
<li>Anne Troelstra and Helmut Schwichtenberg, <em><a href="https://www.amazon.com/Anne-S-Troelstra-Paperback-Revised/dp/B01FOD9MFG/consequentlyorg">Basic Proof Theory</a></em>, 2nd ed. Cambridge University Press, 2000.</li>
</ul>
<h3 id="links">Links</h3>
<ul>
<li><a href="http://ruccs.rutgers.edu/nasslli2016"><span class="caps">nasslli</span> 2016</a>, Rutgers, July 2016.</li>
<li><a href="http://consequently.org/handouts/PTLPA-NASSLLI-2016-proposal.pdf">Class Proposal</a>, containing our draft class outline.</li>
</ul>
Proofs and what they’re good for
http://consequently.org/presentation/2016/proofs-and-what-theyre-good-for-melb/
Wed, 20 Apr 2016 00:00:00 UTC
<p>I’m giving a talk entitled “Proofs and what they’re good for” at the <a href="http://aap.org.au/conference">2016 Australasian Association for Philosophy Conference</a> on Monday, July 3, 2016.</p>
<p>Abstract: I present a new account of the nature of proof, with the aim of explaining how proof could actually play the role in reasoning that it does, and of answering some long-standing puzzles about the nature of proof, including (1) how it is that a proof transmits warrant, (2) Lewis Carroll’s dilemma concerning Achilles and the Tortoise and the coherence of questioning basic proof rules like modus ponens, and (3) how we can avoid logical omniscience without committing ourselves to inconsistency.</p>
<ul>
<li>The <a href="http://consequently.org/slides/proofs-and-what-theyre-good-for-slides-AAP.pdf">slides</a> and <a href="http://consequently.org/handouts/proofs-and-what-theyre-good-for-handout-AAP.pdf">handout</a> are available.</li>
</ul>
Terms for Classical Sequents: Proof Invariants and Strong Normalisation
http://consequently.org/presentation/2016/terms-for-classical-sequents-aal-2016/
Wed, 20 Apr 2016 00:00:00 UTC
<p>I’m giving a talk entitled “Terms for Classical Sequents: Proof Invariants and Strong Normalisation” at the <a href="https://blogs.unimelb.edu.au/logic/aal-2016/">2016 Australasian Association for Logic Conference</a>.</p>
<p>Abstract: A proof for a sequent \(\Sigma\vdash\Delta\) shows you how to get from the premises \(\Sigma\) to the conclusion \(\Delta\). It seems very plausible that some valid sequents have <em>different</em> proofs. It also seems plausible that some different derivations for the one sequent don’t represent different proofs, but are merely different ways to present the <em>same</em> proof. These two plausible ideas are hard to make precise, especially in the case of classical logic.</p>
<p>In this paper, I give a new account of a kind of invariant for derivations in the classical sequent calculus, and show how it can formalise a notion of proof identity with pleasing behaviour. In particular, it has a confluent, strongly normalising cut elimination procedure.</p>
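<p>The distinction the abstract turns on, between a sequent and the possibly many derivations that establish it, is easy to make concrete. The following is a purely illustrative toy encoding (an assumption of this note, not the proof-term system of the talk): two derivations that apply weakening in different orders are syntactically distinct objects, yet conclude with the very same sequent.</p>
<pre><code class="language-python"># Hypothetical toy encoding, not Restall's calculus: a derivation is a
# tuple (rule, end_sequent, *premises); a sequent is a pair of frozensets
# (antecedents, succedents), so the order of formulas is irrelevant.

def sequent(ants, sucs):
    """Build a sequent  ants |- sucs  as a pair of frozensets."""
    return (frozenset(ants), frozenset(sucs))

def end_sequent(d):
    """The sequent a derivation concludes with."""
    return d[1]

ax = ("Ax", sequent({"p"}, {"p"}))

# Two derivations of  p, q, r |- p  that weaken in different orders:
d1 = ("WeakL", sequent({"p", "q", "r"}, {"p"}),
      ("WeakL", sequent({"p", "q"}, {"p"}), ax))
d2 = ("WeakL", sequent({"p", "q", "r"}, {"p"}),
      ("WeakL", sequent({"p", "r"}, {"p"}), ax))

# Same end-sequent, syntactically distinct derivations.
assert end_sequent(d1) == end_sequent(d2)
assert d1 != d2
</code></pre>
<p>Whether d1 and d2 should count as the <em>same proof</em> is exactly the question a notion of proof identity has to settle; the invariants of the talk aim to answer it in a way that survives cut elimination.</p>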
<ul>
<li>The slides for the talk are <a href="http://consequently.org/slides/proof-terms-aal-2016.pdf">available here</a>.</li>
</ul>