Apriority for Empiricists: Making Sense of Truth by Convention

by Brett Topey
A.B., Princeton University, 2007

Dissertation Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Philosophy at Brown University

Providence, Rhode Island
May 2018

Copyright © 2018 by Brett Topey

This dissertation by Brett Topey is accepted in its present form by the Department of Philosophy as satisfying the dissertation requirement for the degree of Doctor of Philosophy.

Date    Christopher Hill, Advisor

Recommended to the Graduate Council

Date    David Christensen, Reader
Date    Richard Kimberly Heck, Reader
Date    Paul Horwich, Reader
Date    Joshua Schechter, Reader

Approved by the Graduate Council

Date    Andrew G. Campbell, Dean of the Graduate School

Curriculum Vitae

Brett Topey received his elementary and middle school education in Louisiana, in St. Charles Parish Public Schools. He attended Jesuit High School of New Orleans, graduating in 2003. He then attended Princeton University, where he received an A.B. with Honors in Philosophy in 2007. His undergraduate thesis, "On the New Riddle of Induction", was awarded the Department of Philosophy's Dickinson Prize for theses in logic or theory of knowledge. He also received a Certificate of Proficiency from Princeton's Program in Visual Arts, and his creative thesis, the short film "The Exceptional Case of Victor Moreau", was awarded the Francis LeMoyne Page Visual Arts Award. In 2010 he enrolled in the Ph.D. program in the Department of Philosophy at Brown University. His publications include "Coin Flips, Credences and the Reflection Principle" (2012) in Analysis, "Quinean Holism, Analyticity, and Diachronic Rational Norms" (forthcoming) in Synthese, and "Linguistic Convention and Worldly Fact: Prospects for a Naturalist Theory of the A Priori" (forthcoming) in Philosophical Studies.
He taught at Bridgewater State University in 2016 and 2017, and he is currently a University Research Fellow in the Department of Philosophy at Lehigh University.

Preface and Acknowledgments

Writing a dissertation can be a lonely enterprise, but I certainly didn't write this one alone. There are quite a few people without whom it either wouldn't exist at all or would be much worse than it is, and I'd like to express my gratitude.

Thanks, first, to my advisor, Chris Hill, who has somehow managed to know exactly what kind of support I needed at all times, and to the other members of my dissertation committee, Josh Schechter, Riki Heck, David Christensen, and Paul Horwich. I couldn't have asked for a better committee—all of them, in addition to possessing a level of philosophical skill and knowledge that is, quite frankly, intimidating, have been remarkably responsive, encouraging, and generous with their time, and the dissertation is far better for their input. I'd also like to thank the faculty members who weren't on my committee but whose help has nevertheless resulted in an improved dissertation, especially Jamie Dreier, Nina Emery, and Doug Kutach.

Thanks, as well, to my undergraduate teachers, especially Karen Bennett, who introduced me to the issues my continuing interest in which eventually led to the line of inquiry I'm pursuing here, and Adam Elga, whose words of encouragement back then had a more significant effect than he knows. And thanks to the philosophy faculty at Lehigh University, especially Gordon Bearn, Ricki Bliss, and Patrick Connolly—they've made me feel at home, and as a result, finalizing the dissertation hasn't been as stressful as it might have been.

I'm also grateful to a number of my fellow graduate students, both for the assistance they've provided via philosophical discussion and for their company—writing a dissertation, as I've suggested, can be incredibly isolating, but they've made it feel less so.
Phil Galligan and Miquel Miralbés del Pino have been particularly helpful—exchanging ideas with them over the past several years has been both edifying and fun. Thanks also to Alex King, Nic Bommarito, Sean Aas, Dana Howard, Steven Yamamoto, and Derek Bowman, who were a few years ahead of me in the Ph.D. program and who showed me how to be a graduate student, and to Tom Fisher, Toby Fuchs, Iain Laidley, Zach Barnett, Leo Yan, Han Li, Mary Renaud, Geoff Grossman, Yongming Han, Rachel Leadon, Richard Stillman, Louis Gularte, Emily Hodges, and Kirun Sankaran.

I also want to thank my family, Brannon, Mom, Dad, Steven, and Kathy, for their love and support. The knowledge that they'll be in my corner no matter what happens means more to me than I can say. And thanks, finally, to Devon, for everything: for listening, for reading, for questioning, for believing, for sharing the highs, for making the lows feel less low, for reminding me that there's more to life than academic philosophy, for being who she is.

Contents

Curriculum Vitae
Preface and Acknowledgments
1 Introduction: Why Conventionalism?
   1 The question
   2 Possible approaches
   3 What needs doing
2 Linguistic Convention and Worldly Fact: Prospects for Truth by Convention
   1 Conventionalism and its discontents
   2 What conventionalism is and what it needs to be
   3 Reconstructing the objection from worldly fact
   4 Avenues of resistance
   5 Two objections: Idealism and contingency
Appendix to Chapter 2: The Logic of the In-Virtue-Of Relations
   a1 The relations in question
   a2 Formal characterizations of the in-virtue-of relations
   a3 Proving our claims
3 How to Be a Conventionalist: Inference, Truth, and Error
   1 Finding a metasemantics
   2 Conventionalists' inferential role semantics
   3 Conventionalists' deflationism
   4 Inferentialists' conventionalism
4 Whence Admissibility Constraints? Tolerance and Apriority
   1 Conventionalism without apriority? A threat to the project
   2 A note on truth preservation
   3 Rules and roles: Followability conflicts
   4 Proof-theoretic constraints on justification
   5 Implications for our vindicatory project
   6 Concluding remarks
References

If intuition has to conform to the constitution of the objects, then I do not see how we can know anything of them a priori; but if the object (as an object of the senses) conforms to the constitution of our faculty of intuition, then I can very well represent this possibility to myself.
—Immanuel Kant

How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?
—Sherlock Holmes

Chapter 1

Introduction: Why Conventionalism?
Now, brought to this conclusion in so unequivocal a manner as we are, it is not our part, as reasoners, to reject it on account of apparent impossibilities. It is only left for us to prove that these apparent "impossibilities" are, in reality, not such.
—C. Auguste Dupin

1 The question

The phenomenon with which I'll be concerned here is the apparent apriority of various of our beliefs about the world. We take ourselves, for instance, to be reasonable in being certain (or very nearly so) that all vixens are foxes, that 1 + 1 = 2, and that it either is or is not the case that all ravens are black, despite there being no obvious empirical basis for so high a degree of confidence about any of these matters. How (if at all) are beliefs such as these to be vindicated? This, broadly, is the question I aim to answer.

This question has been left less than determinate in two respects. First, I haven't fully specified its scope—I haven't yet made clear just which of our apparently a priori beliefs are up for discussion. The three examples above, though, are emblematic: one is a proposition of basic logic, and the others are a simple mathematical proposition and a proposition that's analytic in Frege's sense (i.e., in the sense that it's transformable into a logical truth by the substitution of synonyms for synonyms). These sorts of beliefs—logical, mathematical, Frege-analytic—will be central in my discussion, though my hope is that my conclusions will be generalizable to other cases of apparent apriority.

The reason for my focus on such beliefs is simply that they're a natural fit for the approach whose tenability I aim to establish: that of vindicating our beliefs by appeal to facts about linguistic meaning, of providing something like an analytic theory of the a priori.
The core cases here are Frege-analytic propositions—it's overwhelmingly tempting to suppose (for instance) that our recognition of the synonymy of vixen and female fox has a significant role to play in vindicating the belief that all vixens are foxes.1 But the synonymy of these two expressions can't be the whole story—our recognition of that synonymy can fully vindicate the belief that all vixens are foxes only if the belief that all female foxes are foxes is independently in good standing. And this latter is, of course, a logical belief. Any adequate treatment of Frege-analytic propositions, then, will also include some discussion of logical propositions.2 As for mathematical propositions: I include them simply because they, unlike logical and Frege-analytic propositions, are existence-entailing and so provide a useful demonstration of how far my favored approach can take us—after all, the apparent apriority of existence-entailing beliefs is sometimes thought to be especially mysterious.

The second dimension of indeterminacy in my central question concerns just what a full vindication of the beliefs in question would involve. It's evident, of course, that at least part of what needs to be shown is that these beliefs are in good epistemic standing: the phenomenon to be explained is, again, the apparent apriority of these beliefs. But there are a few ways of being in good epistemic standing—the beliefs in question may be blameless (i.e., we may be entitled to them, in the sense that we're not being irresponsible in having them), or they may be justified, or they may be warranted (i.e., they may have whatever property makes the difference between knowledge and mere true belief). So: which of these statuses would these beliefs need to be shown to have in order for us to count them as fully vindicated?

1. Note the typesetting here—throughout this text I'll use small caps as a way of indicating that what's being denoted is a linguistic expression.
2. Another way of making this point is just to note that logical truths are themselves Frege-analytic: a logical truth can be transformed into a logical truth (namely, itself) by a null substitution. Any successful vindication of our Frege-analytic beliefs, then, will certainly vindicate our logical beliefs.

In my view, it's extremely plausible that the beliefs in question are in good epistemic standing in all of the above senses. These beliefs, after all, appear to be among our most secure—if any of our beliefs are justified (or warranted or blameless), our basic logical, mathematical, and Frege-analytic beliefs are. And if that's right, then a full vindication of these beliefs would involve showing that they have all of the aforementioned properties.

But even this, I take it, wouldn't be enough. Good epistemic standing isn't all we're interested in—the truth of the relevant beliefs is also crucial. A satisfying vindication of these beliefs, then, would involve saying something about how we manage to get at the truth about the matters in question.

This last point needs to be clarified. Showing that the relevant beliefs are in good epistemic standing would, in a way, be enough to show that we can get at the truth—for a thinker's belief to be in good epistemic standing, after all, is just for the thinker to be reasonable (in some sense) in taking that belief to be true. But this, it seems to me, isn't good enough, for the following reason: on certain accounts of what it takes for a belief to be justified (or warranted or blameless), there's no explanatory connection between the truth of a belief and its good epistemic standing.3 And if any such account is correct, it may turn out that, though we're able, by showing the relevant beliefs to be in good epistemic standing, to demonstrate that we can get at the truth about the matters in question, we remain completely unable to explain how we manage to do so.
That is, we may end up insisting that we can reliably form true logical, mathematical, and Frege-analytic beliefs while acknowledging in the same breath that no explanation is available of our reliability, in which case we'll be under some pressure to admit that it's just a coincidence that our methods of belief formation happen to generate true beliefs in the relevant domains. To my mind, any purported vindication of these beliefs would, if it left us in such a position, ring hollow—it would be incomplete at best. So we need to ensure we aren't left in such a position.4 This is why I say that a satisfying vindication would involve giving some explanation of our reliability about the matters in question.

3. I have in mind here the broadly pragmatic approach shared by Paul Boghossian (2003a, 2003b), Crispin Wright (2004), and David Enoch and Joshua Schechter (2008), though there certainly are other accounts of good epistemic standing that fit this description.
4. Notice: I'm not claiming that we need to be able to account for our reliability in order for our beliefs to be in good epistemic standing. Though I think this claim is true, I'm not relying on it here. My point is only that, if we can't explain how we manage to be reliable, we're left in an uncomfortable position. And this seems obviously true, regardless of whether skepticism follows. As Schechter (2010: 448) puts it, "Even if there is no room to doubt our reliability, that we are reliable is a striking fact", one that "calls out for explanation"—any reason to think this fact can't be explained "generates a tension in our overall view of the world".

The project to be completed, then, is this: it must be shown that our logical, mathematical, and Frege-analytic beliefs are in good epistemic standing—i.e., that they're blameless, justified, and warranted—and, in addition, an explanation must be given of just how we manage to reliably form true beliefs in these domains. Only when both tasks have been carried out will these apparently a priori beliefs be fully vindicated.5

5. One further clarification is in order. We as theorists are trying to explain why it is that the beliefs in question are in good epistemic standing, but a successful explanation need not be one that's available to all thinkers whose beliefs are in good standing. After all, it may very well be that, on the correct account of the conditions under which a belief is in good standing, a thinker's belief can be in good standing even if she can't demonstrate that it is.

Now, as I've suggested, my aim here is to show that we can do all this vindicatory work just by appealing to facts about language: it's my view that, in order to demonstrate the good epistemic standing of our belief that (for instance) all vixens are foxes and to explain how we manage to reliably form true beliefs of this sort, all that's required is an appeal to our competence in using certain linguistic expressions—in this case, the sentence "All vixens are foxes". Indeed, I aim to establish the tenability of a particularly radical version of this view, one according to which the relevant beliefs have their truth values entirely in virtue of our conventions of linguistic use. (That is, they're true by convention.) To indulge in a (perhaps unnecessarily) provocative description: the idea, roughly, is to vindicate the beliefs in question by showing that their truth is something we decide rather than something we must discover.

This sort of conventionalist strategy was, in the early-twentieth-century heyday of the logical empiricists, the dominant approach to vindicating our apparently a priori beliefs, but it has over the past 80 years been the target of a series of influential attacks and, as a result, is nowadays widely considered to be hopeless. My task here, then, is primarily defensive—what's needed is a conventionalist theory that remains tenable even in the face of these attacks, and so I plan to show how to construct such a theory.

Of course, even if a theory of the right kind can be constructed, there remains the question of whether conventionalism is worth saving in the first place. After all, the conventionalist strategy, viable or not, is undeniably radical—if some less radical approach can be made to work, we may have reason to avoid conventionalism. But since my task is, again, a defensive one, I'll mostly be leaving this question to the side. That is, in the main body of the text I'll for the most part just assume that the conventionalist approach, if it can be shown to be viable, is worth pursuing.

Here in the introduction, though, I'd like to take the opportunity to offer some support for that assumption. In particular, I'm going to argue for the following claim: conventionalism is worth pursuing for the simple reason that there exists no other promising strategy for vindicating our apparently a priori beliefs.

2 Possible approaches

Once again, we believe, pretheoretically and with certainty (or at least near-certainty), that all vixens are foxes, that 1 + 1 = 2, and that it either is or is not the case that all ravens are black, and we take these beliefs about the world to be in good epistemic standing despite the fact that they don't seem to enjoy empirical support in proportion to our confidence in them. That is, we take these beliefs (and others like them) to be a priori in a fairly robust sense: not just permissible as a starting point but also immune—or at the very least highly resistant—to empirical defeat. And though my preferred account of the epistemic standing of such beliefs goes via the claim that their truth is a matter of linguistic convention, competing accounts certainly exist.
What I want to do here, then, is to canvass the available theories of the epistemology of apparently a priori beliefs, explaining briefly why no such theory—with the exception of conventionalism—is a promising foundation for the kind of vindicatory project just described.

Note that I'll be restricting my examination, for the most part, to theories of justification—it will generally be obvious how my discussion of these theories can be adapted to apply to analogous theories of blamelessness and warrant. So: what sorts of accounts are available of the justification of (a given class of) apparently a priori beliefs about the world? Any such account either will provide some story of how we manage to be in contact with the relevant parts of the world or will explain why justification doesn't require that we be in any such contact. And it will be useful to categorize theories according to what they say here. The available theories, then, can be sorted into the following broad classes:

• error theories, on which our apparently a priori beliefs turn out not to be justified a priori, since any contact we have with the relevant parts of the world is in fact empirical;
• rational insight theories, on which we possess some mechanism for reaching out, in some nonempirical way, into the relevant parts of the world;
• easy justification theories, on which contact isn't required for justification; and
• mind-dependence theories, on which our contact with the relevant parts of the world is explained by the fact that what those parts of the world are like (or, at least, what counts as a correct description of those parts of the world) somehow depends on our mental states.

This list exhausts the logical space—all theories fall into one of these categories. (Conventionalism, of course, is a member of the last category.) I discuss them in turn.
Error theories

First up are error theories, according to which the apparently a priori beliefs we're theorizing about are, in fact, not justified a priori. This leaves open two possibilities: either the beliefs in question aren't justified at all, or their justification is empirical.

Little needs to be said about the first possibility. A theory according to which the relevant beliefs aren't in good epistemic standing at all is a skeptical theory, and on such a theory, it's obviously going to be impossible to vindicate those beliefs. No skeptical theory, then, can ground our vindicatory project—if any such theory is correct, the project is doomed to failure.

If, on the other hand, the beliefs we're theorizing about are justified on empirical grounds, there's some hope for our vindicatory project. In that case, after all, we do have justification for these beliefs—it's just that we're wrong about the basis of that justification. So this second possibility is a bit more interesting.

There's a vast literature addressing the possibility that our apparently a priori beliefs are in fact justified empirically, and I can't hope to do justice to it here. But the following is worth pointing out: Any belief, if it's justified empirically, is also going to be empirically defeasible, at least in principle, since any evidence a thinker has in favor of the relevant belief at a particular time may at some future time be outweighed by new evidence against it.6 And if that's right, then as long as at least one of our apparently a priori beliefs turns out to be exempt from empirical defeat, it must be that there exist beliefs that are justified a priori, in which case some explanation is going to be needed of how a priori justification is possible—we won't be able to rely on error theory as a blanket response to apparent apriority. This is significant for two distinct but related reasons.
First, various attempts have been made over the years to show that, for structural reasons having to do with how inquiry proceeds, at least some of our beliefs must be empirically indefeasible.7 (I'm not entirely convinced by such arguments, but if even one of them turns out to be compelling, some account will be needed of the possibility of a priori justification.) And second, structural considerations aside, there are beliefs (such as the examples motivating my discussion here) that we just don't seem to regard as defeasible by empirical evidence.8 And if that's right, then error theories on which apparently a priori beliefs are in fact justified empirically are more revisionary than we might have thought: they entail not only that we're wrong about the basis of the justification of our apparently a priori beliefs but also—more importantly—that our unshakable confidence in those beliefs is misplaced. In other words, these theories entail that the methods of belief revision we employ are systematically flawed, in that we routinely treat as exempt from doubt beliefs that are in fact subject to empirical defeat. So such theories, even if they don't have skeptical implications, can offer no more than a partial vindication of any of our apparently a priori beliefs. I suggest that we shouldn't accept this result if there's a chance we can do better.

6. I want to note two possible exceptions to this claim. First, it may be that a thinker's beliefs about how things currently seem to her are indefeasible. For the purposes of my discussion here, though, we can ignore such beliefs, since they're not plausible candidates for apriority anyway. And second, on certain radically permissive epistemological theories, such as the sort of global coherentism sometimes attributed to W. V. O. Quine (albeit wrongly—see my 2017: §2), no beliefs are empirically defeasible. After all, on such theories, evidence bears on entire systems rather than on individual beliefs, and so, whatever evidence a thinker receives, she will always be permitted to maintain a given belief by revising other beliefs in the system. But if we rely on an appeal to any such epistemological theory, our vindicatory project will be incomplete: any radically permissive theory, by its nature, sanctions a wide variety of incompatible sets of beliefs, which means that the mere fact that our beliefs in a given domain count as justified on a theory of this sort isn't going to help us explain how we manage to get at the truth in that domain. (See the below examination of easy justification theories for a more detailed discussion of a similar problem.)
7. See, e.g., Hartry Field 2000 and David Chalmers 2011.

Rational insight theories

Now for theories on which the beliefs we're theorizing about are justified a priori. The most straightforward of these are rational insight theories—on such theories, our ability to get at the truth in the domains in question is explained by our possession of a faculty of rational intuition that somehow allows us to reach out and grasp what the relevant parts of the world are like.
The usual analogy here is with sense perception: the posited rational faculty is taken to be something like a sensory faculty, except that it operates in nonempirical realms.9 On the supposition that we do in fact possess such a faculty, it's not going to be difficult to give a reasonable account of a priori justification by analogy with our preferred theory of perceptual justification: presumably, the two sorts of justification work in much the same way, except that the role sensory experience plays in the latter is played in the former by the deliverances of our rational faculty.

8. For more examples of beliefs of this kind, see, e.g., Hilary Putnam 1978 and Christopher Hill 2013.
9. See, e.g., Kurt Gödel 1964/1983: 483–484.

But rational insight theories, though they enjoy some popularity in certain quarters, are completely untenable, for reasons that are numerous and familiar. Here I mention only what I take to be the most serious problem: these theories are incompatible with our best science. After all, the available evidence suggests that we have no such faculty, nor does there exist any plausible story about how we could have evolved one. Furthermore, according to our best physics, the physical world is causally closed, in which case there's no possibility that we can possess a faculty putting us into anything resembling perceptual contact with (say) an abstract realm of mathematical objects.10 In the face of all this, appeals to rational insight are akin to appeals to magic, and approximately as credible.
Easy justification theories

The basic problem with anything resembling a rational insight theory is just that it's extremely difficult, within any naturalistically respectable framework, to tell a credible story about how we could possibly be in any sort of nonempirical contact with parts of the world outside ourselves.11 (The lack of credibility is especially obvious in the case of a theory that says that we somehow reach out and directly grasp the relevant parts of the world, but that particular claim isn't really needed to generate the worry here—the worry, at its core, is just that it's hard to generate a reasonable explanation of how we could be in any sort of contact at all with the nonempirical realms our beliefs are about.)

10. Cf. Paul Benacerraf's (1973) objection to mathematical platonism and Field's (1989a) refinement of that objection. It's also worth noting that some rational insight theorists, recognizing the problem identified here, downplay the analogy with perception, claiming instead that our rational insight into what the relevant parts of the world are like is explained by the fact that those parts of the world are, in some metaphysically robust sense, literal ingredients of our thoughts. Laurence BonJour (1998: 184–185), for instance, suggests that "having a thought whose content is…the claim that nothing can be red and green all over at the same time involves being in a mental state that instantiates a complex universal of which the universals redness and greenness are literal constituents". I admit to some degree of mystification as to what the view here is supposed to be, but insofar as I can understand it, it doesn't solve the problem so much as it slightly relocates it. What's needed, after all, is an explanation of our ability to access certain parts of the world, and if the explanation is that thoughts with the relevant content are literally constituted by those parts of the world, a further account is needed of how thinkers can distinguish thoughts that genuinely have this content from thoughts (or pseudo-thoughts) that only seem to have it (or at least of why most of our thoughts that seem to have contents of the relevant sort genuinely do have those contents). So, if we're going to avoid the conclusion that our reliability about what the relevant parts of the world are like is just a coincidence, an account is still needed of how we manage to be in touch with those parts of the world. And if that's right, we haven't really made any progress.
11. Here and elsewhere in the text, when I appeal to naturalism or to naturalistic respectability, what I am advocating is only a minimal sort of methodological naturalism according to which our theories of belief formation should respect the following obvious fact: that, since our cognitive mechanisms are causal, physical mechanisms, all beliefs—and changes in belief—are going to have physical causes. (Cf. Jared Warren 2017a: sect. 5.)

In the face of this problem—call it the contact problem—an alternative strategy begins to seem attractive: if we start with an epistemological view according to which contact isn't required for justification, we may be able to divorce the question of justification from the question of contact and so to sidestep the contact problem altogether.

This is precisely the strategy behind easy justification theories. These theories don't work by identifying some putative mechanism or method by which we manage to be in contact with parts of the world our access to which would otherwise be mysterious—they're designed to account for a priori justification without positing any such mechanism. Consider, for example, a toy version of reliabilism according to which what explains the justification of a given belief is simply that the cognitive process by which we came to have it is one that (in the actual world) tends to produce true beliefs.
This view makes available something like the following story: the vast majority of (say) our mathematical beliefs are true, and that tells us that the process by which we arrived at those beliefs, whatever it is, tends to produce true beliefs, which means our mathematical beliefs are justified. And this story, notice, says nothing about how we manage to be in contact with the facts about mathematical objects. In fact, no contact is required at all: even if it's just an accident that the relevant cognitive process tends to produce beliefs that correctly describe this mathematical realm, the fact remains that this process reliably produces true beliefs, and so those beliefs are justified.

Though there are lots of problems with this sort of crude reliabilist epistemological theory, we can safely ignore most of them, since they're not relevant for my purposes here. What's important is that any account of apriority that relies on such a theory will be, on its face, unsatisfactory for much the same reason that the nonskeptical error-theoretic accounts described above are unsatisfactory: it won't be able to give us the sort of guarantee we're after that (e.g.) our basic logical beliefs are true. After all, any reliabilist epistemological theory, if it's to be even minimally plausible, must be a theory of defeasible justification—it's possible for a thinker, despite having in fact arrived at some beliefs by a reliable process, to gain compelling (but misleading) evidence that the process in question is not reliable.12 And if that's right, then a reliabilist account, like an error-theoretic account, can do no better than to partially vindicate our apparently a priori beliefs, since such an account will entail that our unshakable confidence in those beliefs is misplaced. Furthermore, this problem isn't specific to reliabilist epistemological theories.
It arises on those theories simply because the conditions under which a given belief counts as justified according to those theories include a requirement that the belief stand in a particular kind of relation to a part of the world outside ourselves, a part with which—by hypothesis—we need not be in contact in order for the belief to count as justified. Given any plausible condition on justification that’s radically externalist in this way, it will be possible, at least in principle, for a belief to meet that condition while the thinker has compelling (but misleading) evidence that the belief doesn’t meet the condition. But such evidence, on any plausible theory of justification, will be a defeater—the belief, despite meeting the condition, won’t count as justified. So any theory that relies on the sort of radically externalist condition on justification just described will, in order to be plausible, be a theory only of defeasible justification, which means it will only partially vindicate our apparently a priori beliefs.

Now, it’s true that there are other easy justification theories that aren’t so radically externalist as our crude reliabilism, and some of these can account for the indefeasible justification of certain of our apparently a priori beliefs. But even on such theories, the explanation of the indefeasibility of the beliefs in question has nothing to do with their truth. Consider, for instance, the pragmatic view developed by Boghossian (2003a, 2003b), according to which what explains why certain of our logical beliefs count as justified is that they’re concept-constituting for concepts (such as the concept of the conditional) the possession of which is a precondition of responsible reasoning. (The details of the proposal aren’t relevant for our purposes here, but Boghossian’s argument, very roughly, is that our procedure for responsibly designing our concepts—i.e., designing them in such a way that mere possession of them doesn’t carry with it substantive theoretical commitments—requires the use of conditionals, in which case we can’t design our concept of the conditional via this very procedure, since we must already have that concept in order to engage in the procedure at all.13) It’s plausible that this view entails that the justification of the logical beliefs in question is indefeasible: if it’s really the case that having those beliefs is a precondition of responsible reasoning, then it’s hard to see how a responsible reasoner could ever be induced to give them up. The explanation of this indefeasibility, though, has nothing to do with the truth of these beliefs—their indefeasibility doesn’t depend on there being any evidence whatsoever (empirical or otherwise) for their truth. The explanation of their indefeasibility is instead procedural: they count as indefeasibly justified simply because they are, in a particular sense, nonoptional. So, although we do have indefeasibility according to Boghossian’s view, we have it on a technicality—the kind of justification on offer here is of a minimal, procedural sort. And given that the beliefs in question are (at least seemingly) among our most secure, an account on which we have only this procedural sort of justification is, I take it, profoundly unsatisfying. We might hope for something more robust.

12 Cf. Alvin Goldman 1979.
13 Boghossian’s pragmatic account is the most promising of his various attempts to substantiate the claim, introduced in his “Analyticity Reconsidered” (1996), that a sentence can be such that grasp of its meaning is sufficient for justified belief in the truth of what it says even if the sentence doesn’t owe its truth purely to its meaning—i.e., that a sentence can, in his terminology, be epistemically analytic even if it’s not metaphysically analytic. (His larger project is to provide an analytic theory of the a priori without relying on the claim that the truth value of a sentence can be a matter of linguistic convention alone, which claim he takes to be absurd on the basis of the objection discussed in Chapter 2 below.) And since it has become common, among theorists appealing to analyticity, to insist that they are committed only to this epistemic variety (see, e.g., David Chalmers 2012: exc. 17 and Amie Thomasson 2015: chap. 7), it’s worth pointing out that any view on which sentences can be epistemically analytic but not metaphysically analytic will have to rely on an easy justification theory of one kind or another: unless a given sentence is true purely in virtue of its conventionally chosen linguistic meaning—i.e., unless that sentence is metaphysically analytic—the fact that our use of the sentence is governed by particular linguistic conventions can have no role to play in explaining how it is that we’re in contact with the parts of the world relevant to the truth value of what the sentence says. As a result, any attempt at vindicating our apparently a priori beliefs by appeal to mere epistemic analyticity will be subject to the problems under discussion in this section.
(Boghossian himself, it’s worth noting, has recently concluded (see, e.g., his 2016, 2017) that none of his attempts to show that a sentence can be epistemically analytic without being metaphysically analytic turns out to be workable, both for reasons related to those I discuss here and for other reasons—he has turned instead to the project of explaining a priori justification by appeal to some version of a rational insight theory.)

As before, the problem here isn’t specific to the particular view I’ve chosen to describe. Plausibly, any easy justification theory—if it’s not a radically externalist theory like our crude reliabilism—will have to rely on a procedural sort of justification in much the same way Boghossian’s account does; there just don’t seem to be any other options here. So no such theory will be able to provide us with a satisfying vindication of our apparently a priori beliefs.

I’ve given some reasons here to think that no easy justification theory can successfully account for the a priori justification of the beliefs we’re interested in. But there’s an even more significant problem here: even if such a theory could successfully account for the a priori justification of these beliefs, our vindicatory project would remain incomplete, since we wouldn’t yet have an explanation of how we manage to be reliable about the matters in question. Furthermore, neither externalist theories like our crude reliabilism nor procedural theories like Boghossian’s so much as point us in the direction of any such explanation. And this is just a result of the same separation of justification from contact that makes an easy justification theory available in the first place: the very fact that our account of the justification of the beliefs in question proceeds without any appeal to our ability to access the relevant parts of the world ensures that, even if this account were successful, it could be of no help in explaining our reliability.
Here’s another way of putting the point: When we engineer our theory of justification in such a way that we can give an account of a priori justification without solving the contact problem, we don’t thereby eliminate the need to solve the contact problem. That problem remains pressing even if we don’t need to solve it in order to account for the justification of our beliefs—we still need to explain our reliability, after all, and in order to do so we must say something about the problem. And if that’s right, then easy justification theories can’t give us the resources to fully vindicate our apparently a priori beliefs.

In fact, it’s plausible that something stronger is true: no easy justification theory can be of any use whatsoever to our vindicatory project. What motivates such theories, after all, is that they allow us to proceed without solving the contact problem, but as we’ve just determined, any explanation of our reliability will need to address that problem anyway. My suggestion, then, is that, if we successfully explain how we manage to be reliable about the matters in question (which, again, we must do in order to complete our vindicatory project), we’ll thereby have made it unnecessary to adopt an easy justification theory in the first place—we’ll have an answer to the contact problem and so will be able to rely on a more robust theory of justification. (Note: this isn’t to say that there can be no reason to adopt reliabilism or some other easy justification theory—the point is just that such theories aren’t going to be of any particular help to us as we try to complete our vindicatory project. There may very well be other (unrelated) motivations for adopting a theory of this sort.)

This suggestion, though, invites a worry, and so I want to take a moment to consider it.
The worry is as follows: There may be ways of responding to the contact problem that can ground an explanation of our reliability but are somehow not of the right sort to allow us to adopt any robust theory of justification. And if we embrace any such response to the contact problem, a disjunctive approach to our vindicatory project will be needed—we’ll need to pair an easy justification theory with a completely separate account of our reliability. So it may turn out that easy justification theories are far from useless; we may need such a theory in order to complete our vindicatory project.

So: of what sort might a response to the contact problem be such that it does give us the resources to explain how we manage to be reliable but doesn’t give us the resources to mount a robust theory of justification? One thought is that we might be able to respond to the problem simply by showing that reliability, like justification, can be explained without any appeal to contact at all—if such a response were available, it certainly wouldn’t allow us to mount a robust theory of justification, since to give such a response wouldn’t be to solve the contact problem but only to sidestep it once again. And certain theorists have suggested that some such response might indeed be available. In particular, they have suggested that we might, if we take for granted the truth of the relevant beliefs—which we’re licensed to do, on an easy justification theory—get an explanation of those beliefs’ reliability for free. In order to explain our beliefs’ reliability, the story goes, it’s sufficient to explain how it is that those beliefs are sensitive to the facts in the relevant domains. (Or at any rate, we can suppose for the sake of argument that it’s sufficient—there may be other conditions that need to be met, but they won’t be important for my discussion here.)
And for our belief in some proposition to be sensitive to the facts is just for it to be the case that, if the proposition weren’t true, we wouldn’t believe it.14 But the propositions we’re interested in—logical, mathematical, etc.—are necessarily true. Their negations, then, are necessarily false. And on the standard semantics, counterfactuals with necessarily false antecedents are trivially true. So it’s trivial, for each of the propositions we’re interested in, that, if that proposition weren’t true, we wouldn’t believe it. And if all that’s right, then our beliefs about what the relevant parts of the world are like—given that they’re true—are trivially sensitive to the facts about those parts of the world, which means they’re trivially reliable.15 And again, since this explanatory story proceeds without any appeal to contact, it doesn’t give us the resources to mount any theory according to which contact is required for justification. So, if this story is satisfactory, we’ve got a response to the contact problem—call it the necessity response—that can’t ground a robust theory of justification for the beliefs in question.

This explanatory story, though, is not satisfactory, for at least two reasons. First, it simply doesn’t explain what needs to be explained. What’s needed, recall, is not just a demonstration that our beliefs are reliable but an explanation of how. And no such explanation has been provided here—even if we grant that this story is correct, how we manage to be sensitive to parts of the world with which we’re not in contact remains mysterious.16

14 This way of understanding sensitivity was introduced by Robert Nozick (1981: chap. 3).

15 This story is suggested by some of David Lewis’s comments in On the Plurality of Worlds (1986: sect. 2.4); it has recently received explicit defense from Justin Clarke-Doane (see, e.g., his 2016).
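Put a bit more schematically, the necessity response runs as follows (here “Bp” abbreviates “we believe that p”; the regimentation is only a sketch of the reasoning above, not anything the response’s defenders are committed to word for word):

```latex
% Requires amsmath and amssymb. The connective \cf is the counterfactual
% conditional ('if it were the case that..., it would be the case that...').
\newcommand{\cf}{\mathrel{\Box\mkern-10mu\rightarrow}}

\begin{align*}
  &\text{(1)}\quad \Box p
    && \text{the propositions at issue are necessarily true} \\
  &\text{(2)}\quad \Box\neg A \;\Rightarrow\; (A \cf C) \text{ for any } C
    && \text{vacuous truth on the standard semantics} \\
  &\text{(3)}\quad \neg p \cf \neg Bp
    && \text{from (1) and (2), taking } A = \neg p \\
  &\text{(4)}\quad \text{our belief that } p \text{ is sensitive}
    && \text{from (3), by the definition of sensitivity}
\end{align*}
```

The first objection above denies that (3), derived in this way, explains anything; the second objection, developed below by way of Field’s examples, denies that the vacuity clause in (2) captures what we’re thinking about when we think about sensitivity.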
16 One might be tempted by the following sort of argument: There is certainly a causal explanation available of how we came to believe that (say) 1 + 1 = 2. Furthermore, the necessary truth of the proposition that 1 + 1 = 2 isn’t in need of any explanation—necessities don’t require explanation in the way that contingent truths do. So we have all the explanation we should want both of our belief that 1 + 1 = 2 and of the necessary truth of that proposition. But our believing the proposition and its being necessarily true, taken together, just entail (again, on the standard semantics for counterfactuals) that, if 1 + 1 were not equal to 2, we wouldn’t believe it to be. So we have an explanation of our sensitivity after all. One obvious problem here is that it’s not at all clear that necessary truths don’t require explanation. But there’s an even more fundamental problem with this argument: it’s not valid. From the fact that we can explain both p and q and the fact that p and q entail r, it simply doesn’t follow that we can explain r; explanation isn’t closed under entailment. (This point is due to Schechter—see his 2010 for further discussion, including some counterexamples to explanatory closure.) So it may well be that we can explain both why we believe that 1 + 1 = 2 and why that proposition is necessarily true but nevertheless can’t explain our sensitivity.

I take it that this problem is decisive on its own, but I am, admittedly, relying here on claims about what does and doesn’t count as an explanation, claims that might be considered worrisome—the notion of explanation is, after all, notoriously difficult to pin down. The second problem with the necessity response, though, is equally decisive, and doesn’t rely on any claims about explanation. It is simply that, standard counterfactual semantics aside, it’s clear that, at least in certain contexts, we can (and often do) say nontrivial things about what would be the case if some necessary truth were false; as Field has pointed out, we can say, perfectly intelligibly and truly, that (for instance) “if the axiom of choice were false, the cardinals wouldn’t be linearly ordered, the Banach-Tarski theorem would fail and so forth” (1989b: 237–238). The fact that the standard counterfactual semantics tells us otherwise indicates nothing more than that counterfactuals, on the standard semantics, fail to capture what we’re thinking about when we think about sensitivity.17 To appeal to that semantics in order to claim that sensitivity is trivial, then, is just a mistake.

17 There are various ways of trying to develop a nonstandard semantics on which these counterfactuals do capture what we’re thinking about. (One obvious strategy is to modify the standard semantics by allowing impossible worlds (see, e.g., Nolan 1997).) But we need not say anything definitive here about how, exactly, the notion of sensitivity is to be understood. What’s important is just that it’s not to be understood in terms of the standard counterfactual semantics.

So the necessity response fails: no satisfactory story has been provided on which we get an explanation of our reliability in the relevant domains for free.

Is any other response to the contact problem available that can ground an explanation of our reliability in the relevant domains but isn’t of the right sort to ground a robust theory of justification? I have my doubts—it’s not clear to me how such a response might go, nor am I aware of any attempts in the contemporary literature to develop such a response. (Schechter is, in a way, an exception—he appears to favor a disjunctive approach, endorsing both an easy justification theory for our logical beliefs and an evolutionary explanation of our logical reliability.18 I’m not entirely sure what to make of his view, but it’s worth noting that he himself acknowledges that it’s not generalizable: even if an evolutionary explanation can be given of our logical reliability—perhaps because logical facts are somehow embedded in the physical world—no similar explanation seems to be available of our reliability in other a priori domains such as mathematics (2010: 455–456). So Schechter’s approach, even if successful, can provide vindication for only a fraction of the beliefs we’re interested in vindicating.) But in any case, I take it that we have good reason to reject any such explanation. After all, even if such an explanation can be made to work, the fact remains that it will need to be paired with an easy justification theory. And if the arguments I gave above are right, then easy justification theories—whether radically externalist or procedural—just aren’t going to be able to provide us with a satisfying account of the indefeasible justification of the beliefs we’re interested in.

Mind-dependence theories

If the prospects for the foregoing approaches are as dim as I’ve suggested they are, the only theories left standing at this point are those on which the truth values of our apparently a priori beliefs are somehow dependent on our mental states. On such theories, after all, it’s relatively easy to account for our sensitivity to what’s really the case in the relevant parts of the world—if a thinker’s mental states are responsible for what counts as a correct description of a particular domain, it’s no mystery how she has nonempirical access to the truth in that domain (at least in principle; there are lots of details to be worked out, of course).
18 Schechter’s epistemological view—developed over several papers, both with Enoch (Schechter and Enoch 2006, Enoch and Schechter 2008) and alone (in press)—is that the justification of our basic inferential practices (and so, derivatively, of our logical beliefs) is to be explained by appeal to a pragmatic theory in the spirit of Boghossian’s. (Though there are important differences between Enoch and Schechter’s view and Boghossian’s, these differences aren’t relevant for my purposes here.) But he has also developed, in parallel, an entirely independent account of our ability to be reliable about logic, an account based on claims about the evolutionary advantages of logical reliability (see his 2010 and his 2013a).

So a strategy for vindicating the beliefs we’re interested in by appeal to a mind-dependence theory has promise: unlike the other sorts of theories we’ve canvassed, mind-dependence theories seem able, in principle, to offer a satisfying account both of those beliefs’ a priori justification and of our ability to be highly reliable in the relevant domains.

Conventionalism, as I suggested above, is a species of mind-dependence theory. But it’s not the only one. There are also Kantianism and the various flavors of metaphysical idealism. That said, there’s little need to spend time assessing the relative merits of these different approaches to mind-dependence—there seems to be wide agreement that conventionalism is the most promising of these theories by far. (As a matter of historical fact, the rise of conventionalism in the first half of the twentieth century was due in part to the failure of the Kantian program in the face of the non-Euclidean spacetime geometry described by Einstein’s general theory of relativity.
And as for idealism: its relative lack of respectability is evident from the fact that opponents of conventionalism sometimes try to discredit conventionalists by accusing them of being idealists in sheep’s clothing, as I discuss in Chapter 2 below.) So I’ll just be taking as given that, insofar as my discussion here has given us reason to accept any mind-dependence theory at all, it’s given us reason to accept conventionalism.

It’s clear enough at this point, I take it, that conventionalism’s prospects are worth exploring: it promises to make available a way forward for our vindicatory project by allowing us to avoid the problems that beset error theories, rational insight theories, and easy justification theories. Recall, though, that conventionalism itself is widely considered hopeless—it has, or is at least thought to have, insurmountable problems of its own. And if those problems turn out to be as serious as the problems conventionalism allows us to avoid, we’ve got to either go back to the drawing board or just accept that our vindicatory project is doomed. So my plan in this text, as I’ve suggested, is to show that the problems for conventionalism are in fact not very serious and that a conventionalist theory can be developed that remains plausible even in the face of them. In the final section of this introduction I briefly discuss what those problems are supposed to be and how I plan to handle them.

3 What needs doing

The first thing to note here is that conventionalism is, in the first instance, a theory about the truth values of sentences—linguistic conventions operate on linguistic expressions, after all. But the beliefs we’re interested in aren’t beliefs about sentences: the goal is to vindicate the object-level belief that all vixens are foxes, not just the metalinguistic belief that the sentence “All vixens are foxes” is true.
So, even if we can explain how it is that certain sentences are true by convention, our work isn’t necessarily done: we also need to answer the question of what the relationship is between those sentences and the beliefs we’re trying to vindicate. In a way, the answer to this question is utterly obvious—what (e.g.) the sentence “All vixens are foxes” says is that all vixens are foxes, which means the sentence is true just in case all vixens are foxes. But behind this obvious answer lurks a worry that many theorists have taken to be fatal for conventionalism: How can it be that the sentence “All vixens are foxes” is true by convention alone and is true just in case all vixens are foxes? Isn’t whether all vixens are foxes simply a matter of how things are out in the world?

The objection here is roughly this. Since a sentence is true only if what it says is true, the only way for a sentence to be true by convention is for what it says—i.e., the proposition it expresses—to itself be true by convention. The sentence “All vixens are foxes”, for instance, can be true in virtue of convention alone only if our conventions have the power to make it the case, without input from the world, that all vixens are foxes. But this latter claim, it’s widely thought, is clearly false. And if that’s right, no sentence can be true by convention, which means the conventionalist project is doomed.

In Chapter 2 I address this argument. In particular, I produce a precise reconstruction of the argument and use that reconstruction to show that the argument is right to at least this extent: conventionalists are indeed committed to the claim that conventions have the power to make it the case that (e.g.) all vixens are foxes.19 The question, then, is what grounds there are for rejecting that claim. And the answer usually given is that it has various absurd implications, implications that are in fact inconsistent with certain of the propositions we take to be a priori.20 (Here’s an example: We take ourselves to know a priori that necessarily, all vixens are foxes. But the claim that our conventions make it the case that all vixens are foxes is often taken to entail that, if our conventions were different in a particular sort of way, then all vixens would not be foxes.) As I explain, though, this answer is (as Crispin Wright (1985) and Alan Sidelle (1989, 2009, 2010) have shown) based on a misunderstanding: the claim in question, properly understood, doesn’t have any of the absurd implications usually attributed to it. And if that’s right, then there aren’t genuine grounds for rejecting that claim, in which case there remains a way forward for the conventionalist project.

But the fact that there’s such widespread confusion about the implications of conventionalism points to another problem: conventionalists have never succeeded in precisely formulating their view. In particular, they haven’t explained just what they mean when they say that a sentence is true purely in virtue of convention—what exactly is this purely-in-virtue-of relation they’re appealing to? So I address this question as well, providing some clarity about conventionalists’ commitments here by thinking about what features truth by convention would need to have in order for conventionalism to play the role it’s intended to play in our vindicatory project. (This work is developed further in the chapter’s appendix, where I provide formal characterizations of the in-virtue-of relations relevant to conventionalism and show how those characterizations can be used to prove some of the assumptions I rely on in my arguments in the chapter itself.)

19 In a way, it should already be clear from our discussion in §2 above that conventionalists are committed to this claim. Their goal, again, is to vindicate our object-level a priori beliefs, and we determined in our discussion that the only viable strategy for vindicating our a priori beliefs involves taking their truth values to be mind-dependent. Conventionalists, then, must take the truth values of the relevant object-level propositions (including the proposition that all vixens are foxes) to be a matter of convention.

20 Another answer is sometimes given: that the claim should be rejected not because of any purported implications but because it’s just absurd on its face, an affront to common sense. I don’t discuss this answer in the chapter because there’s little to say about it other than that it has no merit. The upshot of §2 above, after all—and what was noticed by Kant, the logical empiricists, and anyone else who ever saw the need for a mind-dependence view—is that what makes mind-dependence worth considering is that no other promising (non–error-theoretic) way of accounting for the phenomenon of apparent apriority has ever been discovered, nor is there reason for hope that one might be discovered in the future. In this context, to dismiss mind-dependence by appeal to common sense is simply unphilosophical. Kant neatly sums up the thought here: “To appeal to common sense when insight and science fail, and no sooner—this is one of the subtle discoveries of modern times, by means of which the most superficial ranter can safely enter the lists with the most thorough thinker and hold his own” (1783/2001: pref.).

That said, my discussion in Chapter 2 doesn’t complete the task of making precise the notion of truth by convention—to do so would require a fully developed metasemantic theory. So in Chapter 3 I show how to construct such a theory.
In particular, I argue that conventionalism entails both a kind of use theory of meaning (i.e., a view on which an expression’s content is somehow determined by the way speakers use that expression) and a kind of deflationary understanding of truth and related notions (i.e., a view on which these notions don’t have a foundational explanatory role to play in our semantic theorizing). This much, I take it, is relatively uncontroversial—linguistic conventions, after all, are just conventions about how to use particular expressions in various situations, and we’ve already determined that the truth value of a sentence can’t be a matter of convention alone if truth is understood as a matter of correct representation of a world whose features are entirely independent of our language. But I also argue that, more surprisingly, the entailment runs in the opposite direction as well. That is, anyone who accepts both a certain sort of use theory of meaning and a certain sort of deflationary theory of truth-theoretic notions is thereby committed to conventionalism: when our conventions of use for a sentence direct us to regard it as true without consulting the world, there can be no further question, for a use theorist who endorses deflationism, of whether that sentence (or the associated proposition) is really true, since the circumstances in which a sentence (or its associated proposition) counts as true are fully determined by our conventions of use. The conjunction of use theory and deflationism, then, just is a version of conventionalism. Or so I argue.

Even if this is correct, though, it’s not entirely obvious that our vindicatory project is complete. On the usual use-theoretic views, after all, there remains the possibility that our conventions of use for an expression fail to determine any legitimate content at all—there are constraints on what sorts of patterns of use can be admitted into the language.
(The usual example of a purportedly inadmissible expression is Arthur Prior’s (1960) infamous tonk.) And in that case, some explanation is needed of how we can be sure that the expressions of our actual language do in fact meet these admissibility constraints—after all, if we can’t even be sure that (e.g.) the sentence “All vixens are foxes” is genuinely meaningful, then our confidence that it’s true is surely misplaced. So I close, in Chapter 4, by addressing this concern. I argue that none of the usual ways of motivating such constraints is available to theorists who accept the conjunction of deflationism and use theory—they’re committed to denying that any such constraints exist. So they’re committed to the view that tonk and similar expressions are perfectly legitimate. What explains the absence of such expressions from our language, I argue, isn’t that they’re inadmissible; it’s just that the language would be far less useful if it included them. There’s no real way, then, for our conventions of use for an expression to fail to determine any legitimate content for that expression, which means we can be sure that sentences we accept as true as a matter of convention are indeed true.

The work to be done in this text, then, is as I’ve laid it out here. What’s left now is to do this work.

Chapter 2

Linguistic Convention and Worldly Fact
Prospects for Truth by Convention

“Father,” said one of the rising generation to his paternal progenitor, “if I should call this cow’s tail a leg, how many legs would she have?” “Why five, to be sure.” “Why, no, father; would calling it a leg make it one?”

—Rev. E. J. Stearns, A. M.
1 Conventionalism and its discontents Conventionalism about sentences of a given sort is the doctrine that those sentences, in some sense to be specified, are made true and false by, or owe their truth values wholly to, linguistic convention.1 It has traditionally been invoked by empiricists to explain our apparent abil- ity, in certain domains, to form true beliefs with near-perfect reliability, despite not having observational access to those domains.2 And it’s easy to see why the doctrine is appealing: 1 The notion of truth by convention came into its own in the work of the logical empiricists, especially Rudolf Carnap’s Logical Syntax of Language (1934/1937), where he developed a conventionalist approach to logic, mathematics, and analytic sentences more generally. But as far as I can tell, the notion has its roots in Henri Poincaré’s work (in, e.g., his 1902/1905) on the foundations of geometry (which isn’t to say there are no precursors; see, for instance, Locke’s (1690: bk. 4, chap. 8) discussion of “trifling Propositions”, which, though true, “bring no Increase to our Knowledge”). Poincaré’s own view, it’s true, is that the axioms of a conventionally chosen geometric system aren’t cor- rectly regarded as true at all—convenient, yes, but no more true than the axioms of any alternative system. Yemima Ben-Menahem (2006) makes much of this aspect of Poincaré’s view, but given the availability of a deflationary notion of truth, it’s not clear that this is really a substantive difference between Poincaré and later conventionalists. 2 Conventionalism has also been invoked to explain our knowledge of those domains, our ability to have justified beliefs about those domains, etc.—indeed, this is how it’s usually presented, as a way of developing an analytic theory of a priori knowledge or justification. But I’ve chosen to focus here on reliability, for the following reason: It may be possible to defend a thin conception according to which (e.g.) 
how we could possibly be in contact with (say) an abstract realm of mathematical objects is, for those of us with empiricist or naturalist sympathies, hopelessly mysterious, and by endorsing conventionalism about mathematics we eliminate the need to posit any such contact.3 After all, if truths in a given domain are true by linguistic convention, our access to those truths need not be explained in terms of contact with any part of the world, abstract or otherwise—it's explicable just in terms of our linguistic competence. Such, anyway, is the promise of conventionalism.

But the story certainly needs some filling in. If conventionalism in a given domain is to be tenable as a solution to our epistemological mystery, a plausible account is needed of how (and in what sense) we can, just by adopting certain linguistic conventions, make it the case that claims in that domain have the particular truth values they do. Unless such an account is forthcoming, to endorse conventionalism is just to exchange one mystery for another.

In fact, the situation is worse than this. Certain objections have been taken to show that no satisfactory account of truth by convention is even possible, and as a result, conventionalism has largely been abandoned by mainstream analytic philosophy.4

justified belief is easier to come by than we might have thought—see, e.g., the pragmatic view defended by Paul Boghossian (2003a, 2003b), according to which what explains the justification of certain of our logical beliefs is (roughly) that we must have them in order to reason responsibly at all. If such a conception is available, conventionalism may not be needed to account for our ability to have justified beliefs in the relevant domains, since we may be able to provide an account without explaining how we manage to get at the truth in those domains.
But it's not clear that we can account for our reliability in any similar way, since no analogous thin conception of reliability seems to be available—to explain our reliability just is to explain how we manage to get at the truth. Focusing on reliability, then, is one way of bringing out what's especially attractive about conventionalism as compared to its alternatives. (In fact, conventionalism is the only view that has ever provided any real hope of a satisfying explanation of our reliability in nonempirical domains, as we saw in Chapter 1 above.)

3 The mystery is especially obvious in the case of mathematics, but even in nonabstract domains, we appear able to reliably form true beliefs about unobserved parts of the world. For example, we haven't seen every vixen in the world, but we still believe, correctly, that all of them are foxes—and that every vixen that will ever exist will be a fox, and even that it's impossible for any vixen to fail to be a fox. Indeed, observation seems to have no evidential relevance whatsoever in cases like this: it's at least intuitive that the only people who can gain empirical evidence relevant to whether all vixens are foxes are those who have failed to fully understand the claim that all vixens are foxes (though this has, of course, been disputed—see, e.g., Gilbert Harman 1996 and Timothy Williamson 2003).

4 This isn't to say that there are no contemporary conventionalists—Iris Einheuser (2006), Hans-Johann Glock (2003, 2008), Alan Sidelle (1989, 2009, 2010), and Jared Warren (2015a, 2015b, 2017b), to name a few, have in recent years issued defenses of conventionalism in various domains. But it's a distinctly minority position. (Manuel García-Carpintero and Manuel Pérez Otero (2009) also defend what they say is a version of conventionalism, but their view doesn't count as conventionalist in the sense I've described here—they deny that any sentence can be true by convention alone.
As a result, their view doesn't solve the epistemological mystery conventionalism is intended to solve.)

Most famously, W. V. O. Quine, in his influential series of attacks on conventionalism (developed most fully in "Truth by Convention" (1936) and "Carnap and Logical Truth" (1960a), both of which are inspired in part by Lewis Carroll's "What the Tortoise Said to Achilles" (1895)), has pointed out the following problem. On any interesting version of conventionalism in a given domain, the number of sentences true by convention is infinite, which means we, as finite beings, can't have explicitly stipulated each of the relevant sentences to be true one by one. So, on the assumption that conventions are explicit stipulations, there must be a way of generating infinitely many particular truths from finitely many general ones. But in order to derive the relevant particular truths from the general ones, we must rely on laws of logic. So, unless the laws of logic are themselves true by convention, sentences in the domain in question are, at best, true by convention and logic. And the laws of logic can't themselves be true by explicit stipulation—these laws, if given in the form of explicit stipulations, must themselves be general claims, which means that the same problem arises again: in order to apply them to particular cases, we must rely again on laws of logic. A regress threatens. So, if conventions are explicit stipulations, the laws of logic aren't true by convention.5

This argument, I take it, is decisive against versions of conventionalism on which conventions are explicit stipulations. But as Quine well knew, it has no purchase against (arguably more plausible) views on which conventions are implicit: if (say) the validity of modus ponens is owed not to the explicit stipulation of any general sentence but simply to competent speakers' dispositions to reason in accordance with that rule, then there's no chance for the regress to get going. So Quine turns to an entirely different argument against truth by implicit convention. It amounts to the following: When we eliminate explicit stipulation from our account of convention, it's not clear that there remains any principled way to draw a distinction between sentences we accept as a matter of convention and those we merely take to be obviously true (or to draw a distinction between rules of inference we obey as a matter of convention and those we merely take to be obviously truth-preserving). And if no such distinction can be drawn, then conventionalism is an empty doctrine.

It's clear that this latter argument is in no way as conclusive as the argument against truth by explicit convention, for no positive case has been made for the claim that the relevant distinction can't be drawn.6 So, while there's certainly a challenge here for the notion of implicit convention, we don't yet have any reason to think it's a challenge that can't be met. Notice, too, that it's a challenge not just for truth by implicit convention but for any metasemantic theory according to which claims are sometimes accepted as a matter of implicit convention. And even opponents of conventionalism are often happy to endorse the claim that our willingness to accept certain claims as a matter of implicit convention plays some role in explaining how our words come to mean what they do. So, insofar as there's a problem here, it's not unique to conventionalism—many opponents of that doctrine are just as committed as conventionalists are to the existence of the distinction Quine is calling into question.7 This suggests that Quine's critique, for all its renown, can't be what accounts for conventionalism's poor reputation.

5 See, e.g., Warren 2017b for a thorough discussion of this argument.
A look at the literature confirms this suspicion—the problem most often taken to be fatal for conventionalism isn't either of the ones pointed out by Quine. Conventionalism fails, according to its opponents, not because we're finite beings or because implicit conventions are metaphysically suspect but simply because sentences, since they say things about the world, can never owe their truth values wholly to convention, implicit or otherwise.8 (Sentences about our linguistic conventions are, of course, exceptions. For instance, the truth value of the sentence "It's a convention among English speakers to be willing to apply the predicate vixen to any object to which one is willing to apply both female and fox" surely is, in some sense, a matter of what conventions are in place. But we can set such sentences aside—interesting versions of conventionalism pertain not to claims about how language is used but to object-level sentences such as "All vixens are foxes", "Unicorns either exist or do not exist", or "1 + 1 = 2".) Any sentence S, so the argument goes, owes its truth value in part to linguistic convention and in part to what the world is like: what it is for S to be true is just for S to express some true proposition p, and while it is a matter of convention that S expresses p—what linguistic conventions can do is fix the meanings of our expressions—it's not a matter of convention whether p is itself true. After all, p is a proposition about the world, which means the truth value of p is going to depend on what the world is like. The truth value of S, then, is also going to depend on what the world is like. So no sentence can be true by convention alone—"The world", as Sider (2011: 101) puts it, "must also cooperate".

Take, for instance, the sentence "All vixens are foxes". It's entirely obvious that this sentence says something about the world. In particular, it says something about vixens—namely, that they're foxes. So if this sentence is true by convention, then what it says must be true by convention, which means that our linguistic conventions somehow have the power to make it the case that vixens are in fact foxes, to make the world one way rather than another. But that seems absurd: the conventions of English do make it the case that "All vixens are foxes" says that all vixens are foxes, but it seems clear that whether all vixens are foxes is in no way a matter of the conventions of English or any other language. It's just a matter of what vixens are like. And if that's right, then "All vixens are foxes" can't owe its truth wholly to convention.

Call this the objection from worldly fact. It's the standard argument against conventionalism, and it's generally taken to be decisive.9 Strikingly, though, the objection is rarely laid out in very much detail. What usually happens instead is that the objection is briefly sketched in much the way I've just sketched it above, and this sketch is taken to be sufficient to show that conventionalism is hopeless. What's more, when detailed argumentation is offered, it often fails to target any genuine conventionalist commitment. Lewy, for example, thinks conventionalists would endorse the following (absurd) thesis: that, necessarily, if (e.g.) vixen has the same meaning as male fox, then all vixens are male foxes.10 So he provides a series of careful arguments against this thesis. But those arguments seem irrelevant, for actual conventionalists don't in fact endorse the thesis. Nor, as Wright (1985) points out, are they committed to it merely in virtue of their conventionalism, despite what's sometimes thought.11 And if that's right, then Lewy's critique simply misses its intended target.

The blame for misunderstandings of this sort, though, doesn't lie entirely with the opponents of conventionalism. After all, the claim that a sentence is "made true by convention" or "true in virtue of convention" or that it "owes its truth wholly to convention", though suggestive, is hardly precise, and conventionalists' attempts to clarify their doctrine are often unhelpful. For example, Lewy's primary target, John Wisdom, represents his view as one on which the object-level proposition that (e.g.) a thing is a vixen just in case it's a female fox "makes the same factual claims" as the metalinguistic proposition that vixen has the same meaning as female fox (see his 1938: 462–463). But unless significant further clarification is offered, this is misleading at best: for one thing, on any remotely intuitive understanding of what it is to make a factual claim, the metalinguistic proposition makes a factual claim about the meanings of English words that the object-level proposition doesn't make. As a result, it's difficult to see just what Wisdom is trying to say here. Lewy's confusion is understandable.

This is all to say that there's reason to be dissatisfied with the state of the debate over truth by convention: conventionalists, by and large, have failed to precisely formulate their view, and their opponents have in turn dismissed that view by appeal to an objection that is itself not precisely formulated (and whose precisifications seem to be based on misunderstandings of what the view entails). My aim, then, is to do what I can to clarify the situation. In particular, what I'm going to do is produce as precise (and sympathetic) a reconstruction of the objection from worldly fact as I can, with a view to working out exactly what premises it depends on and whether conventionalists are in fact committed to those premises.

My contention, just so that it's clear where we're headed, is that things shake out as follows. Though the objection from worldly fact is not as straightforward as its proponents have taken it to be, this much is correct: conventionalists, despite what some of them have said, are committed to the thesis that, in a sense, linguistic conventions have the power to make the world one way rather than another. Whether (e.g.) all vixens are foxes does turn out to be a matter of convention. And this thesis is unorthodox, to be sure. But, properly understood, it isn't obviously absurd, for reasons I'll explain. And if that's right, then conventionalism remains worthy of investigation. This is what I'll be arguing in this chapter.

6 Quine's own commitment to that claim seems to be grounded in his behaviorism—the crucial premise, for him, is that any explanation of our behavior with a given sentence in terms of our accepting it as a matter of convention can be replaced without loss by an explanation in terms of our taking it to be obvious—but few contemporary theorists are at all sympathetic to behaviorism.

7 Furthermore, if there's a solution to be found, it's going to be available to conventionalists just as much as to anyone else. Paul Horwich (1998a), for example, is no conventionalist, but he's committed to there being a distinction between those dispositions that are meaning-constituting and those that aren't. His account of this distinction is that the meaning-constituting dispositions for a given word are the ones that are explanatorily fundamental, in the sense that they can explain all other dispositions to use that word. If something like this is right, then Quine's critique presents no problem, for conventionalists or anyone else.

8 Some theorists state explicitly that Quine's critique doesn't get to the heart of what's wrong with conventionalism. Ted Sider (2011: 100), for instance, says that "Quine's argument does not go far enough" because it doesn't "challenge the very idea of something's being 'true by convention'"—after all, Quine concedes that it is possible, by explicit stipulation, to make finitely many sentences true. Elliott Sober (2000) and Paul Benacerraf (1973) express similar sentiments.

9 The objection (or something very much like it) has been endorsed in print by Sider (2011), Gillian Russell (2008), Williamson (2007), Bob Hale (2002), Sober (2000), Laurence BonJour (1998), Horwich (1998a), Boghossian (1996), Stephen Yablo (1992), Casimir Lewy (1976), Benacerraf (1973), David Lewis (1969), Arthur Pap (1958), W. C. Kneale (1947), C. I. Lewis (1946), and A. C. Ewing (1940), among others. Note, too, that neo-Fregeanism about arithmetic (see, e.g., Crispin Wright 1983), Stephen Schiffer's (2003) theory of pleonastic entities, and other flavors of what Amie Thomasson (2015) calls "easy ontology" are often thought to be vulnerable to an objection of the same sort: that such approaches are workable only on the absurd view that we have the power to conjure objects into existence via our choice of language. Versions of this latter objection have been endorsed by Peter van Inwagen (2016), Karen Bennett (2009), David Chalmers (2009), George Boolos (1997), and Hartry Field (1984), among others.

10 I say the thesis is absurd because the consequent comes out false in any situation where there are vixens—after all, it's a necessary truth that all vixens are female—and the antecedent comes out true in any situation where the conventions of our language are set up in such a way that vixen has the same meaning as male fox. So any situation that meets both of these conditions is a situation in which the conditional is false. And such a situation is obviously possible, so the conditional isn't a necessary truth.

11 I discuss this matter in some detail in §5 below.
But first things first: in order to competently evaluate conventionalism's prospects in the face of the objection from worldly fact, we've got to have some grip on what conventionalists are and are not committed to. So I begin with an attempt to say something clarifying about how conventionalism is to be understood.

2 What conventionalism is and what it needs to be

As I suggested in my discussion of Wisdom and Lewy, the precise content of the conventionalist doctrine is elusive, even for conventionalists themselves. A. J. Ayer, for example, tries to elucidate his conventionalism about analytic sentences by claiming variously that they

• "make no statement whose truth can be accepted or denied" but "merely lay down a rule which can be followed or disobeyed" (1936b: 20);12 or that they

• "are entirely devoid of factual content", since they don't "provide any information about any matter of fact" (1936a: 104);13 or that they

• are "not…about 'things' at all, but simply about words" (1936a: 64).14

These formulations, though, are all unfortunate, and not just because they're (at least seemingly) incompatible with one another. The third is similar to Wisdom's claim, and problematic for similar reasons: what the sentence "All vixens are foxes" says is just that all vixens are foxes, and this claim, on any remotely intuitive understanding of what it is for a claim to be about something, is about vixens, not words. And the first and second, if taken literally, respectively entail that it's not true that all vixens are foxes and that it's not a fact that all vixens are foxes. If conventionalists were really committed to these absurdities, conventionalism would be an easy doctrine to dismiss. But conventionalists need not endorse any of these claims.
They can say instead that, while what Ayer says is literally false—it's (of course) true that all vixens are foxes, and this is (of course) a fact about vixens—what he's trying to give voice to is just the idea that, in some sense to be specified, sentences like "All vixens are foxes" don't say anything substantive about what the world is like. All he's really committed to, then, is that, although the sentence "All vixens are foxes" may indeed be true, it doesn't say anything substantive—there's something degenerate about it.

Unsatisfactory formulations aside, this idea, vague though it is, does seem to be what Ayer and his fellow conventionalists are struggling to articulate. And it's an intuitively appealing idea, which may explain why, despite the fact that few contemporary theorists actually accept conventionalism, certain sorts of sentences (namely, logical and analytic truths) are very commonly characterized as "empty", "trivial", "true by definition", and the like.15 But it is, again, just a vague idea. The trick is turning it into a precise doctrine. Full disclosure: I won't be attempting to pull off this trick here. (For what it's worth, I have my doubts about whether it's even possible to do so. If there were some precise, accurate way of characterizing conventionalism in one or two sentences, surely some conventionalist would have discovered it by now.)

12 He later decides that this view is untenable; see his 1946: 17.

13 See also, e.g., Norman Malcolm 1940: 200 and Glock 2008: §2.

14 See also, e.g., Carnap 1934/1937: §74. Note, too, that Ayer later disavows any view on which this claim is taken literally, saying that, although he "sometimes seem[s] to imply that" analytic sentences "describe the way in which certain symbols are used", he neither "wish[es] to hold" that position nor "think[s] that [he is] committed to it" (1946: 16).
That said, I do want to try to shed some light on what it is for a true sentence to fail to say anything substantive, on what's supposed to be degenerate about (e.g.) the sentence "All vixens are foxes"—any clarity we can gain here is going to bring us closer to understanding just what's distinctive in the conventionalist position. So: what is it that's supposed to be lacking about "All vixens are foxes", as compared to (e.g.) the sentence "All vixens weigh less than a ton"? Just what properties does the latter sentence have that the former one doesn't? Well, for one thing, it's clear enough that what the latter sentence says, unlike what the former says, is contingent and empirical, and so it may be tempting to try to characterize substantiveness in terms of one or both of these properties. It turns out, though, that no such characterization can be adequate.16 Briefly considering why this is so will help to illustrate some of what's distinctive about conventionalism.

We can see what's wrong with such characterizations by keeping in mind the epistemological purpose of conventionalism. Conventionalists' primary goal, recall, is to dissolve a puzzle, to explain how we manage to reliably believe truths in domains our access to which would otherwise be, by naturalist lights, hopelessly mysterious.

15 For more on contemporary theorists' failure to purge their thinking of conventionalist metaphor, see Sider 2011: §6.5.

16 In a way, this is obvious: even opponents of conventionalism will tend to agree, after all, that the sentence "All vixens are foxes" doesn't say anything contingent or empirical. So, if the thesis that some true sentences don't say anything substantive is to be a thesis distinctive of conventionalism, then a proposition's failing to be substantive can't merely be a matter of its lacking these features. But to notice this isn't yet to understand why conventionalists need a different characterization of substantiveness.
So, as we try to understand what conventionalism comes to, we can immediately reject as inadequate any interpretation on which the doctrine doesn't offer at least some reasonable hope of dissolving this puzzle. In other words, in order for conventionalism to serve its purpose, the following constraint must be met:

Naturalist-Friendliness. An adequate answer to our epistemological puzzle in a given domain must give us the resources to explain—without appeal to any facts our nonobservational access to which remains mysterious by naturalist lights—how we manage to reliably believe truths in that domain.

If we want to have any chance of being accurate to conventionalists' intentions, then, we must construe conventionalism in such a way that it has some hope of meeting Naturalist-Friendliness. So, if conventionalism is just the thesis that certain sentences don't say anything substantive, we must construe that thesis in such a way that it has some hope of meeting Naturalist-Friendliness. And that means we must understand substantiveness in such a way that a true proposition's failing to be substantive has some role to play in a naturalist-friendly explanation of our access to the truth of that proposition. And it turns out that, if a proposition's being substantive is understood just in terms of its being contingent or empirical, a proposition's not being substantive can play no role in such an explanation.

It's clear enough why a characterization of a proposition's being substantive in terms of its being empirical can't meet Naturalist-Friendliness. To say that a proposition is empirical, after all, is just to say that observational evidence is relevant to our evaluations of its truth or falsity. So, when we say that the proposition that all vixens are foxes isn't empirical, all we're saying is that any access we have to its truth value is nonobservational.
But this is just to describe the problem conventionalists are trying to solve—what conventionalists need to explain is how we manage to reliably form true beliefs in the relevant domains without observing the world. To say merely that the propositions in the domains in question aren't empirical is to leave this explanation unbegun.

As for contingency: though conventionalists have sometimes been tempted to cast their view in modal terms,17 reflection shows that a view on which substantiveness is characterized in terms of contingency can meet Naturalist-Friendliness only if we already have an explanation of our ability to track the truth about modality. That is, if our explanation of our access to the truth of the proposition that all vixens are foxes proceeds by appeal to the claim that it's impossible for vixens to fail to be foxes, this explanation can't be adequate by naturalist standards unless we can already account, in a naturalist-friendly way, for our ability to reliably form true beliefs about which states of affairs are possible. And in that case, our explanation of how we manage to be so reliable about modality is where the real epistemological action is. To say merely that propositions that fail to be substantive are necessary, then, doesn't eliminate any of the mystery here. It simply moves the bump in the proverbial carpet.

The general lesson here is just that, as we try to make sense of the conventionalist claim that certain true sentences say nothing substantive, we can rule out as unsatisfactory all interpretations except those on which it would make sense, given the Naturalist-Friendliness constraint, for conventionalists to make that claim the basis of their view. Or, in other words: as we try to nail down what the conventionalist doctrine is, we can focus our investigation by thinking about what that doctrine would need to be in order for conventionalism to serve its epistemological purpose.
What these epistemological considerations suggest is that a true sentence that says nothing substantive must, for conventionalists, be one whose truth can in some sense be fully explained by appeal only to facts our access to which is not mysterious by naturalist lights. And it's obvious, I take it, that the facts that are going to be relevant, according to conventionalists, are facts about our linguistic conventions.18 For conventionalists, then, a sentence that's true by convention must be one whose truth is fully explicable just by appeal to facts about our linguistic conventions. And if that's right, then perhaps conventionalism is, as Warren (2015b: 88) suggests, best understood as a doctrine about explanation: "What is distinctive of conventionalism about some branch of discourse D is that linguistic conventions are supposed to fully explain the truth of sentences in D".

This is still not fully precise, of course—the notion of explanation is itself notoriously difficult to pin down. But we're making some progress. And furthermore, the language of explanation makes additional progress possible: even without taking a stand as to the correct theory of explanation, we can say with some confidence that there are certain sorts of explanation that conventionalists don't have in mind.

17 For instance, some members of the Vienna Circle, taking inspiration from Wittgenstein's Tractatus Logico-Philosophicus (1922), tried to explain our knowledge of the truths of logic and mathematics by claiming that those truths are tautologous, in the sense that (as Carnap puts it in his intellectual autobiography) they "hold necessarily in every possible case" (1963: 46). For more on this, see Warren Goldfarb 1996: §2.
First, the sort of explanation relevant to conventionalism must be nonpragmatic, in the sense that what counts as a good explanation isn't dependent on any thinker's background beliefs, interests, etc.—otherwise, conventionalism might turn out to be true relative to some background beliefs and false relative to others, in which case it couldn't play the foundational epistemological role required of it. And second, it's reasonably clear that the relevant sort of explanation is not causal explanation—the relationship between our linguistic conventions and the truth of certain sentences doesn't seem to have the right features to be causal. (For instance, effects are generally taken to come after their causes,19 but the conventionalist view certainly isn't that we adopt some conventions and then, at some later time, certain sentences come to be true.)

18 There are, of course, questions to be answered here about just how it is that we have access to facts about our conventions. But there are also naturalist-friendly ways of trying to answer those questions. Here's an example (albeit one that relies on an implausible toy theory of what it is for a convention to be in place): Suppose the truth of one of my sentences is fully explained by a private convention, where all it is for that convention to be in place is for me to be disposed to accept the sentence no matter what I observe. Then, as long as the relevant disposition doesn't change, the convention remains in place, in which case there's no way for me to be incorrect about whether the sentence is true. The fact that I have a true belief here, then, is no mystery. (Note that I have access to the fact that the convention is in place in only a fairly minimal—and naturalist-friendly—sense: whether I accept the sentence as true is sensitive to whether the convention is in place, since the convention's being in place is just constituted by my disposition to accept the sentence.)
19 Though this, like most claims about causation, is controversial.

Conventionalism in a given domain, then, can be understood as the doctrine that our linguistic conventions fully explain, in some nonpragmatic and noncausal way, the truth of sentences in that domain. But given this restriction to nonpragmatic and noncausal explanation, we might be inclined to wonder just what sort of explanation conventionalists do have in mind. Most of the usual suspects have been ruled out, after all; what else is left?

As it turns out, there's a family of nonpragmatic, noncausal relations of explanatory dependence that have been the object of much philosophical attention in recent years: grounding relations. Facts about grounding are generally taken to be facts about what's explanatorily prior to what, in some metaphysically robust but noncausal sense—as Kit Fine (2001: 15) puts it, "If the truth that P is grounded in other truths, then they account for its truth; P's being the case holds in virtue of the other truths' being the case". And this sounds a lot like what conventionalists are after.

In fact, on certain views, to be a grounding relation just is to be the kind of relation that figures in noncausal explanations. Paul Audi's (2012: 688) view, for instance, is like this: for him, all it is for something to be a grounding relation is for it to be a "noncausal relation of determination" whose relata are facts. (His argument for the existence of a grounding relation is essentially just that one fact can explain another without being a cause of it—i.e., that one fact can determine another in the way required for the former to explain the latter even if the two facts aren't causally related.) And if such a view is correct—if noncausal determination is all that's required for one fact to ground another—then conventionalism certainly is a doctrine about what grounds what.
That said, if conventionalism is a doctrine about what grounds what, then, for conventionalism to have any hope of doing the epistemological work it's intended to do, there must be something deeply wrong with standard theories of the nature of grounding. On standard theories, after all, facts about what grounds what are facts about the metaphysical structure of reality,20 which means that, if conventionalism is a view about what the grounding facts are, then conventionalists' explanation of how we can access the truth about whether (e.g.) all vixens are foxes requires appeal to facts about the structure of reality. 20 See, e.g., Fine 2001 and Jonathan Schaffer 2009. But in that case, conventionalism can meet the Naturalist-Friendliness constraint only if there's some naturalist-friendly explanation available of our nonobservational access to facts about the structure of reality. And since no such explanation is available, standard theories of grounding entail that conventionalism, interpreted as a doctrine about what grounds what, can't do the epistemological work it's intended to do.21 Conventionalists, then, must do one of the following two things: deny that conventionalism is a view about grounding (and so insist that not all noncausal determination relations between facts are grounding relations) or provide some argument showing that standard views of the nature of grounding are incorrect. And we'd need to do quite a bit more work on the nature of grounding to determine which of these tacks conventionalists should take. So, for this chapter, at least, I'll leave this question to the side—I won't describe the explanatory relations conventionalists are interested in as grounding relations, but I also won't reject outright the claim that these relations are grounding relations.22 In any case, though, these relations—whether they're grounding relations or not—are going to be quite formally similar to grounding relations.
After all, even if the two families of relations are nonidentical, both are families of noncausal, nonpragmatic relations of explanatory dependence. And in the recent literature on grounding, the formal structure of the various grounding relations has been explored in some detail (in, e.g., Gideon Rosen 2010, Audi 2012, and Fine 2012a, 2012b). 21 Truthmaking (which may or may not be definable in terms of grounding; see, e.g., Fabrice Correia and Benjamin Schnieder 2012: §4.4) is a similarly problematic relation: it looks superficially as though conventionalism can be construed as the thesis that our linguistic conventions are the truthmakers of certain sentences, but inquiry into truthmakers, like inquiry into grounds, is, on standard views, the province of speculative metaphysics. 22 It's worth noting that there may be prudential reason to adopt this strategy even if conventionalism is correctly characterized as a doctrine about the grounding facts. In that case, after all, it must be that the metaphysical structure relevant to questions of ground is not, as standard theories would have it, something out there, independent of us, that we must discover, but is instead something we somehow impose on the world—this is the only way for conventionalism, construed as a thesis about what grounds what, to do the epistemological work it's intended to do. But even if such a radical view about the nature of grounding is correct, the fact remains that, on standard views, the relevant metaphysical structure isn't the sort of thing we impose on the world—it's something more inflationary. And it's almost impossible to invoke the notion of grounding without calling to mind these standard views. To characterize conventionalism as a thesis about the grounding facts, then, is to risk being confusing or misleading even if correct.
So our investigation into conventionalism will inevitably contain echoes of this literature—again, the formal structure of the explanatory relations conventionalists are interested in is going to mirror the formal structure of the grounding relations, even if it turns out that these two kinds of relation are distinct.23 Summing up: by thinking about what epistemological work conventionalism is intended to do, we’ve arrived at the conclusion that conventionalism (in a given domain) is to be interpreted as the doctrine that the truth of sentences (in that domain) is fully explained, in some nonpragmatic, noncausal way, by our linguistic conventions (where this explanatory relation may or may not be a grounding relation). This, I recognize, is unsatisfying—we haven’t said exactly what sort of explanation is in play here, and so we haven’t made the conventionalist doctrine fully precise. But as I said, I’m not even attempting here to make that doctrine fully precise. I’m just attempting to provide some clarity as to what’s distinctive about conventionalism. And at least this much, I think, has been done. (I take it that the way to gain further clarity here is to generate a full metasemantic theory that can dissolve the epistemological puzzle motivating conventionalism and then to think about what sorts of explanatory claims are entailed by that theory. I return to the project of outlining such a theory in Chapter 3 below.) It will be useful, before we move on to our discussion of the objection from worldly fact, to emphasize one other feature of the epistemological problem conventionalism is intended to solve. The point of conventionalism, once again, is to explain how we manage to reliably form true beliefs in certain domains to which we don’t have observational access. But what needs to be explained is not, or at least is not only, the reliability of our metalinguistic beliefs about the truth values of sentences. 
It's not enough to explain our ability to reliably form true beliefs about whether sentences such as "All vixens are foxes" are true; after all, we might believe that the relevant sentences are true despite having no idea what they say. (I might, for instance, believe that the Spanish sentence "Ningún soltero es un hombre casado" is true without knowing what it says, just on the basis of the testimony of a trustworthy bilingual friend.) 23 In what follows, I'll often point out where my claims about the logic of conventionalists' explanatory relations are structurally analogous or disanalogous to Fine's claims about the logic of ground. What we're really interested in is our ability to reliably form true beliefs about whether (e.g.) all vixens are foxes. That is, for conventionalism (or any rival view) to serve its purpose, it must meet the following requirement: Object-Level Belief. An adequate answer to our epistemological puzzle in a given domain must give us the resources to explain our ability to reliably form true object-level beliefs—not just true metalinguistic beliefs—in that domain. Otherwise, our solution to the epistemological problem motivating conventionalism will be incomplete at best. Both Object-Level Belief and Naturalist-Friendliness will have roles to play in the below discussion of the objection from worldly fact. For now, though, I want to end this section with a brief demonstration of how these constraints can be used to show that conventionalism is well motivated in the first place. Some theorists have suggested that there's no reason to endorse conventionalism: it's possible (they say) to provide a different sort of analytic theory of the a priori, one that doesn't rely on anything as radical as truth by convention.
In particular, they suggest that, though no sentences are true by convention, we can fix the meanings of as-yet-undefined expressions by stipulating the truth of certain sentences containing those expressions (or by otherwise accepting those sentences as true as a matter of convention; explicit stipulation isn't what's important here). The story is roughly as follows: when we stipulate that certain sentences containing some expression E are to count as true (or when we accept them as true as a matter of convention), we thereby pick out among all the possible meanings for E (where these possible meanings exist independently of our stipulations) whatever one it needs to have for those sentences to be true.24 On a view of this sort, the sentences in question aren't true by convention alone—they turn out to be true only on the condition that the world is able to provide us with some appropriate meaning for E. 24 Proponents of views of this sort include Boghossian (1996) and Sider (2011). If the world doesn't cooperate at least to this extent, then we'll have failed to fix any meaning for E at all, despite our best efforts.25 Nevertheless, there's some temptation to insist here that our ability to reliably form true beliefs in the relevant domains has been explained, since—when we do successfully fix a meaning for E—we believe, correctly, that the relevant sentences are true.26 And if that's right, then at least some of naturalists' worries go away, despite the fact that views like this are far less radical than conventionalism is. Our question, then, is whether such views can completely dissolve the epistemological puzzle motivating conventionalism (and thereby render such a radical doctrine unnecessary). It turns out that they can't—no view like the one under discussion can meet both the Object-Level Belief constraint and the Naturalist-Friendliness constraint at the same time.
After all, even when we correctly believe that we've successfully given E a meaning, what this tells us immediately is just that the relevant sentences are true. Suppose, for instance, that we've fixed the meaning of vixen by stipulating the truth of the sentence "All vixens are foxes". Then, given that our stipulation is successful, we can be sure that this sentence is true—we've stipulated as much. But this isn't enough. If the view here is to meet Object-Level Belief, it needs to give us the resources to explain our ability to get from this correct metalinguistic belief to the correct object-level belief that all vixens are foxes. And to do that, it needs to give us the resources to explain how we come to understand the sentence "All vixens are foxes". This is where the problem arises: To understand a sentence is to know its meaning, and on 25 There are several ways for the world to be uncooperative here. First, it may be that there aren't any such things as meanings, as Quine (see, e.g., his 1968) argues. Second, it may be that there are several different meanings that can make the relevant sentences true, in which case no single meaning is picked out. And third, it may be that, though there are meanings, the world is arranged in such a way as to make it impossible for any of them to make the relevant sentences true. For instance, there seems to be a problem with trying to fix the meaning of conbachelor by stipulating the truth of the sentence "All conbachelors are unmarried and it's not the case that all conbachelors are unmarried"—there's just no meaning available for conbachelor that can make this sentence true. (For a detailed critical discussion of views of this sort in the light of these possibilities, see Horwich 1997 (or the revised version that appears as Horwich 1998a: chap. 6).) 26 For what it's worth, I take this claim to be incorrect.
What needs explanation, again, is our near-perfect reliability in the relevant domains, and in order to explain that, we need to explain our ability to be nearly perfectly reliable about when a meaning has successfully been given to E. And it's not clear, on a view like this, that we have any way of explaining the latter ability. But we can set this worry aside for the sake of argument. the view under discussion, meanings are things that exist independently of our stipulations. So we need an explanation of our ability to reliably form true beliefs about the meanings of sentences whose truth we've stipulated—of how we come to correctly believe (e.g.) that, of all the preexisting meanings that are available, m is the one that's attached to the sentence "All vixens are foxes". And the view itself doesn't give us the resources to explain this ability—the view itself, in fact, doesn't give us the resources to explain how we can reliably form true beliefs about what meanings are available, how many meanings there are, or even whether there are any meanings at all.27 And that means that, when we appeal to our understanding of "All vixens are foxes" to explain how we come to correctly believe that all vixens are foxes, we appeal to a fact our access to which we have no way of explaining. So the view under discussion, in order to meet Object-Level Belief, must violate Naturalist-Friendliness.28 Our epistemological puzzle, then, does provide genuine motivation for conventionalism—views that are less radical, such as the sort of view we've been discussing, are unable to do the explanatory work that needs to be done. So it's important, if we're interested in the prospects for dissolving our puzzle, to determine whether conventionalism remains tenable in the face of the objection from worldly fact. And to do that, we need a better understanding of how that objection works.
3 Reconstructing the objection from worldly fact

Recall: opponents of conventionalism are willing to allow that our sentences owe their truth values partly to our linguistic conventions; what they deny is the (much stronger) conventionalist doctrine that there are (ordinary, object-level) sentences that owe their truth values wholly to our conventions. Or, in the language of explanation: while opponents of conventionalism grant that facts about what linguistic conventions are in place play some role in 27 Hale and Wright (2000: 294) call this "the understanding problem" and take it to be a conclusive objection to the sort of view under discussion here. 28 For an interesting exchange regarding issues of this sort, see Boghossian's treatment of implicit definition in his 1996, Kathrin Glüer's objection to Boghossian in her 2003, Boghossian's response to Glüer in his 2003b, and Carrie Jenkins's reply to Boghossian in her 2008. explaining the truth (and falsity) of our sentences, they insist that, nevertheless, there's no sentence whose truth (or falsity) is fully explained by these facts about convention. This is the conclusion the objection from worldly fact is intended to deliver. In this section I reconstruct that objection. We can start by stating the intended conclusion a bit more precisely, as follows: there are no facts about what linguistic conventions are in place such that the truth of S (where S is any ordinary, object-level sentence) is fully explained by those facts.
Or again, where C is the set of all facts about convention:

(C) ∀Γ(Γ ⊆ C → ¬(S is true purely in virtue of Γ))

(Terminological note: when some facts play a role in explaining another fact, I'll say that the latter fact obtains (at least) partly in virtue of the former ones, and when some facts fully explain another fact, I'll say that the latter fact obtains purely in virtue of the former ones.29) This conclusion, of course, is intended to apply to any (ordinary, object-level) sentence whatsoever. But in reconstructing the argument, it will be helpful, for concreteness, to concentrate on a particular sentence. So let S be "All vixens are foxes". The argument for (C), then, proceeds by appeal to the following claim: whether all vixens are foxes isn't a matter of the conventions of English or any other language. That is, the fact that all vixens are foxes is not itself a fact about what linguistic conventions are in place, nor can it be fully explained by any such facts.30 Or, a bit more formally, 29 I want to note that these in-virtue-of relations, whether identical to grounding relations or not, are at the very least formally similar to grounding relations. Their logic, on my view, resembles (but isn't identical to) the logic of the relations of strict partial ground and strict full ground, respectively, in Fine's (2012a, 2012b) pure logic of ground. For further discussion of the logic of the in-virtue-of relations, see §§a1–a2 of this chapter's formal appendix. 30 Here I'm casting the objection in terms of the fact that all vixens are foxes, but it could just as easily be cast in terms of (the truth of) the proposition that all vixens are foxes. (And in fact, it usually is cast in terms of the proposition.) I've chosen to appeal to the fact rather than to the proposition simply because talk of facts, unlike talk of propositions, is common in nonphilosophical contexts and so can't be dismissed as mere philosophical extravagance.
As I suggested in §2, conventionalism would be an easy doctrine to dismiss if it required us to deny that it's a fact that all vixens are foxes. All that said, the objection, I take it, is in the end just as strong when cast in terms of the proposition as it is when cast in terms of the fact. After all, skepticism about the existence of abstracta such as propositions seems unwarranted from a conventionalist perspective—one of the points of conventionalism, as I suggested in §1, is

(1) all vixens' being foxes ∉ C
(2) ∀Γ(Γ ⊆ C → ¬(all vixens are foxes purely in virtue of Γ))

And it's clear enough that (1), at least, is true—after all, the fact that all vixens are foxes is a fact about how things are with vixens, not a fact about how things are with us. So it isn't itself a fact about what conventions are in place in our linguistic community. The substantive premise here, then, is (2), the claim that the fact that all vixens are foxes doesn't obtain purely in virtue of facts about convention. Proponents of the objection from worldly fact tend to be explicit about their commitment to this claim.31 And it's easy to see why: prima facie, it's exceedingly plausible that facts about parts of the world outside ourselves (including the fact that all vixens are foxes) do not depend entirely on us. To think otherwise would, it seems, be to embrace idealism.32 Now: neither (1) nor (2) tells us anything at all about S. So a third premise is needed, one that says what the relationship is between the truth of S and the fact that all vixens are foxes. And while proponents of the objection from worldly fact don't tend to be explicit here, it's fairly clear that they're committed to the standard view of this relationship, according to which the fact plays a role in explaining the truth of the sentence.33 That is, they're committed to the following:

(3) S is true partly in virtue of all vixens' being foxes

So we can include this as our third premise.
to deflate claims about abstracta and so to give naturalists a way of vindicating our claims to knowledge in abstract domains such as mathematics. (Recall that Carnap's (1950/1956) attempt to allay skeptical worries about abstracta includes discussion of propositions as well as of numbers and classes.) Conventionalists shouldn't be in the business of rejecting out of hand abstracta that may prove theoretically useful. 31 In fact, they tend to commit themselves to the even stronger claim that the fact that all vixens are foxes doesn't obtain even partly in virtue of facts about convention (though, again, they tend to state their view in terms of the proposition rather than the fact). Yablo (1992: 878), for instance, says that whether a given proposition is true is a question "to which the rules of usage are quite irrelevant", and Sober (2000: 247) says, similarly, that "the proposition expressed by [an English-language analytic sentence] does not depend for its truth on how English works". But the objection from worldly fact doesn't require this stronger claim. 32 But stay tuned. 33 To take one example, the following rhetorical question, which Boghossian (1996: 364) asks in the course of sketching his version of the objection from worldly fact, presupposes that a sentence has its truth value at least partly in virtue of a corresponding fact: "How could the mere fact that S means that p make it the case that S is true? Doesn't it also have to be the case that p?" The intended argument, then, seems to go as follows. Let Γ be some set of facts that, taken together, fully explain the truth of S. That is, let Γ be any set of facts such that S is true purely in virtue of Γ. By (3), we know that the fact that all vixens are foxes—call it v—has some role to play in explaining the truth of S. So it's plausible that any full explanation of the truth of S will appeal either to v itself or to some other facts that can themselves explain v.
That is, Γ must contain either v or some facts that, taken together, fully explain v. And in either case, Γ contains at least one fact not about convention: by (1), v itself isn't a fact about convention, and by (2), any set of facts that can fully explain v must contain at least one fact not about convention. So any full explanation of the truth of S will appeal to at least one fact not about convention, which is just to say that (C) is true. Most of the inferences in the above argument rely only on basic logic, but there's one exception: the inference from (3)—i.e., the claim that S is true at least partly in virtue of v, the fact that all vixens are foxes—to the claim that any full explanation of the truth of S will appeal either to v itself or to some other facts that can fully explain v. So it's worth taking a moment to think about whether there's reason to accept this inference. It turns out that there is. Given certain plausible assumptions about the logic of the in-virtue-of relations—assumptions to which conventionalists and their opponents are both committed—the inference is a good one. Here I give a rough explanation of why that is.34 The first assumption needed is this: a fact plays a role in explaining another fact just in case there's some full explanation in which it plays a role. That is, a fact f obtains partly in virtue of a fact g just in case, for some Γ such that f obtains purely in virtue of Γ, g ∈ Γ. And this assumption seems trivial—to have a role to play in explaining some fact is surely just to have a role to play in some full explanation of that fact.35 The second assumption, though, is more complicated. To see what it is, we need to note, 34 Again, the logic of the in-virtue-of relations is described in much more detail in the below appendix (see especially §a2). 35 Indeed, in developing his notions of full ground and partial ground, Fine (2012a: 50) defines partial ground in terms of full ground using just such a biconditional.
I don't define the partly-in-virtue-of relation in terms of the purely-in-virtue-of relation, but the biconditional does turn out to hold in the logical system I develop—see my proof of the Full Explanation Thesis in §a3 of this chapter's appendix. first of all, that the facts relevant to explaining a given fact are going to form an explanatory hierarchy. To oversimplify a bit: certain facts directly explain the target fact, certain other facts directly explain those facts and thereby derivatively explain the target fact, and so on. So we can give different (full) explanations of the target fact by appealing to facts at different levels in this hierarchy.36 Suppose, for instance, that the sky is colored purely in virtue of its being blue and that it's blue purely in virtue of its being cerulean. Then it may also be that the sky is colored purely in virtue of its being cerulean—it may be that the fact that the sky is cerulean, by explaining the fact that it's blue, thereby also explains the fact that it's colored.37 Or, more generally: since we can move up and down the explanatory hierarchy, it's possible for there to be two distinct sets of facts Γ and ∆ such that some fact obtains purely in virtue of Γ and also obtains purely in virtue of ∆. Our second assumption, then, can be stated roughly as follows: in cases where there are two distinct such sets of facts, there can be no differences between Γ and ∆ except those that result from moving up and down the explanatory hierarchy. That is, there's, in a sense, only one explanatory story to be told about why a given fact obtains; there's no explanatory overdetermination. This assumption, I think, is at least somewhat plausible.38 Just as important as its plausibility, though, is the fact that conventionalists and their opponents both accept it, at least tacitly.
The influence of the objection from worldly fact makes this clear: after all, if explanatory overdetermination were possible, then it would be possible for a sentence to be true 36 One point of clarification about the notion of full explanation that's in play here: for an explanation to count as full in the relevant sense, it's not required that the explanatory story be traced all the way back to fundamental facts (if there are such things). On conventionalism, remember, the truth of certain sentences is fully explained by facts about what conventions are in place. But conventionalists certainly don't endorse the (absurd) claim that facts about convention are fundamental—it's abundantly clear that those facts are to be explained by facts about the linguistic dispositions, brain states, etc., of members of the linguistic community. (Presumably, conventionalists will also want to say that these facts about dispositions, etc., can themselves fully explain—though only derivatively—the truth of the relevant sentences.) 37 In fact, on the assumption that the purely-in-virtue-of relation is transitive—a plausible assumption, and one that holds in the logical system I develop in this chapter's appendix, as I show in §a2—it turns out to be certainly true that the fact that the sky is cerulean fully explains the fact that it's colored. 38 It holds in the logical system I develop—see my proof of the No Overdetermination Thesis in §a3 of this chapter's appendix. (This is one place where the logic of the in-virtue-of relations under discussion here differs from Fine's pure logic of ground.) purely in virtue of convention and to also be true in virtue of facts that are entirely unrelated to convention.
In this case, the objection from worldly fact would be based on an obvious mistake—the objection requires us, after all, to conclude, on the basis of the claim that a sentence is true in virtue of nonconventional facts, that the sentence is not true purely in virtue of convention. And this is easy to spot even on cursory examination. Since no one has brought up this point, it seems clear that all parties to the discussion are committed to the assumption that explanatory overdetermination isn't possible. It still remains to show that the two assumptions I've just described can justify the inference in question. We begin, recall, with (3), according to which the truth of S is to be explained in part by the fact that all vixens are foxes—i.e., by v. By the first assumption, there are some facts ∆ such that v ∈ ∆ and S is true purely in virtue of ∆. And by the second assumption, any full explanation of the truth of S will differ from ∆ only in ways that result from moving up or down the explanatory hierarchy. So, since Γ fully explains the truth of S, Γ must contain one of the following: v itself, some facts that fully explain v, or some fact that v plays a role in explaining and that itself plays a role in explaining the truth of S. In order to justify our inference, then, we need only rule out the third of these possibilities. Note, further, that on the standard view of the relationship between v (i.e., the fact that all vixens are foxes) and the fact that S is true, there just aren't any facts between these two in the explanatory hierarchy. Instead, the fact that all vixens are foxes and the fact that S says that all vixens are foxes, taken together, constitute a full, direct explanation of the truth of S. And if this is right, the third possibility can be ruled out: if there doesn't even exist any fact that v plays a role in explaining and that itself plays a role in explaining the truth of S, then Γ certainly doesn't contain any such fact.
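The reasoning just given can be laid out schematically. The abbreviations below (Pure, Part, T(S)) are introduced purely for illustration; they are not the official notation of this chapter's formal appendix:

```latex
% Illustrative abbreviations (not the appendix's official notation):
%   Pure(f, \Gamma): f obtains purely in virtue of the facts in \Gamma
%   Part(f, g): f obtains partly in virtue of g
%   T(S): the fact that S is true; v: the fact that all vixens are foxes
\begin{align*}
\text{(3)} \quad & \mathrm{Part}(T(S), v) \\
\text{(A1)} \quad & \mathrm{Part}(f, g) \leftrightarrow
    \exists \Gamma\,\bigl(g \in \Gamma \wedge \mathrm{Pure}(f, \Gamma)\bigr) \\
\text{From (3), (A1):} \quad & \exists \Delta\,\bigl(v \in \Delta \wedge
    \mathrm{Pure}(T(S), \Delta)\bigr) \\
\text{(A2)} \quad & \text{any } \Gamma \text{ with } \mathrm{Pure}(T(S), \Gamma)
    \text{ differs from } \Delta \text{ only by moves} \\
& \text{up or down the explanatory hierarchy (no overdetermination)} \\
\text{Hence:} \quad & v \in \Gamma, \text{ or } \Gamma \text{ contains facts that
    fully explain } v, \\
& \text{or } \Gamma \text{ contains some } f \text{ with }
    \mathrm{Part}(T(S), f) \wedge \mathrm{Part}(f, v)
\end{align*}
```

The third disjunct of the conclusion is the one that the standard view of the relationship between v and the truth of S allows us to rule out.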
Our inference, then, turns out to be a good one if we add the following to our premise set:

(4) ¬∃f (S is true partly in virtue of f ∧ f obtains partly in virtue of all vixens' being foxes)

And it's fairly clear that proponents of the objection from worldly fact are committed to this premise—again, it just follows from the standard view of the relationship between the truth of S and the fact that all vixens are foxes, and proponents of the objection, as I've suggested, are committed to that view. Furthermore, even independently of its following from the standard view of the relationship between the relevant facts, this premise seems unproblematic—I'm not aware of any view according to which all vixens' being foxes plays a role in explaining some other fact that plays a role in explaining the truth of S, and it's hard to imagine what a plausible view of this kind might look like. So we've got a reasonable first pass at a reconstruction of the objection from worldly fact: it gets us the intended conclusion, (C), via an argument from (1), (2), (3), and (4), all of which plausibly do play a role in the objection as usually sketched, and furthermore, the argument is valid given our two assumptions about the logic of the in-virtue-of relations. So this is the version of the objection we'll be working with, at least for the moment. Now: as I said, the argument here is valid. And conventionalists, in order to hold on to their view, need to resist the conclusion, which means they need to find a tenable theory on which at least one of the premises fails to be true. In the next section I consider strategies for doing so.

4 Avenues of resistance

Our first order of business here is to decide which of the argument's four premises conventionalists ought to consider rejecting. And we can set one of the possibilities aside immediately.
As I've suggested, (1) is clearly true—on any remotely intuitive understanding of what it is for a fact to be about something, the fact that all vixens are foxes is a fact about what vixens are like, not a fact about what conventions are in place in our linguistic community. It's also relatively easy to set aside the possibility of rejecting (4). As I said above, this premise seems unproblematic—it's difficult to imagine a plausible view on which it's false. And even if such a view were available, it would be no help to conventionalists, for the following reason: the argument turns out to remain valid even if we replace (4) with the strictly weaker claim that there's no fact about convention that lies between the truth of S and the fact that all vixens are foxes in the explanatory hierarchy, and conventionalists themselves, as it turns out, definitively are committed to this weaker claim.39 Given all that, trying to avoid (C) by rejecting (4) is, for conventionalists, a nonstarter. So conventionalists must find a way to reject either (2), which says that facts about convention don't fully explain the fact that all vixens are foxes, or (3), which says that the fact that all vixens are foxes plays some role in explaining the truth of S. Our question, then, is which of these premises (if any) is a suitable target. And at least one theorist has recently responded to the objection from worldly fact by suggesting that there's reason for conventionalists to be suspicious of (3). That premise, recall, is just a consequence of the standard view of the relationship between the truth of S and the fact that all vixens are foxes, according to which the latter fact and the fact that S says that all vixens are foxes, taken together, fully explain the truth of S.
But Warren (2015b: 91) argues that conventionalists have reason to think this standard view gets the order of explanation backward: conventionalist views "are most naturally 39 The weaker premise can be stated as follows: there's no fact about convention that v (i.e., the fact that all vixens are foxes) plays a role in explaining and that itself plays a role in explaining the truth of S. Or, more formally:

(4−) ∀f (f ∈ C → ¬(S is true partly in virtue of f ∧ f obtains partly in virtue of all vixens' being foxes))

And we can see why conventionalists are committed to this premise by thinking again about the Naturalist-Friendliness constraint. The reason conventionalism has any hope of meeting that constraint in the first place, after all, is that facts about convention are supposed to be facts our reliability about which isn't mysterious by naturalist lights. But v is a fact about how things are with vixens, a fact our reliability about which is mysterious. So any facts that obtain even partly in virtue of v are also going to be facts our reliability about which is mysterious. And that means that, if conventionalism is to do the epistemological work it's intended to do, v can't have any role to play in explaining facts about convention. That is, conventionalists are committed to the following claim:

(4′) ∀f (f ∈ C → ¬(f obtains partly in virtue of all vixens' being foxes))

So, since (4−) is strictly weaker than (4′), conventionalists are committed to (4−) as well. As for why the argument remains valid when (4) is replaced with (4−), the reasoning is roughly as follows. (For a full demonstration, see my proof of the Replacement Validity Thesis in §a3 of this chapter's appendix.) Recall that the point of adding (4) as a premise was to ensure that there's some fact not about convention in Γ by ruling out the following possibility: that Γ contains some fact f such that v plays a role in explaining f and f plays a role in explaining the truth of S.
But even if this possibility can’t be ruled out, there’s no problem for the objection from worldly fact as long as no such f is a fact about convention—i.e., as long as (4− ) is true—for the simple reason that, if Γ does contain any such f , it thereby contains a fact not about convention. 47 launched against the background of meta-semantic theories that make dispositions to use sentences central and take any explanatory work done by propositions [and, presumably, by worldly (i.e., nonlinguistic) facts] to be less fundamental”. Or, to put Warren’s point another way: While conventionalists need not be skeptical of the existence of propositions, worldly facts, and the like, they should deny that such entities have any role to play in explaining the truth of sentences that are true by convention. After all, linguistic conventions plausibly are conventions for the use of expressions, and it seems odd to suppose that, even when our conventions for the use of a sentence meet whatever conditions they need to meet to provide for a full explanation of the truth of that sentence, that explanation must nevertheless take a detour out of the realm of language and into the realm of worldly facts (or propositions). One might have thought an explanation of one linguistic phenomenon in terms of another wouldn’t need to be so indirect. If this is right, conventionalists have good reason to reject (3) even independently of its role in the objection from worldly fact. Though Warren’s claims have a certain plausibility, it seems to me that one of the central theoretical roles of facts (and propositions) is to explain the truth values of sentences. So I worry that, insofar as we accept the existence of worldly facts (or propositions) at all, we may be committed to giving them a role to play in explaining the truth values of even those sentences that are true by convention. 
In any case, though, we need not discuss this matter further, simply because there’s a more fundamental difficulty here: rejecting (3)—whether there’s independent reason to do so or not—turns out to be of no help to conventionalists.
Seeing why this is so, though, requires thinking a bit more about conventionalists’ epistemological commitments. Recall that, by Object-Level Belief, conventionalists owe an explanation of our ability to reliably form true object-level beliefs about facts such as the fact that all vixens are foxes. And their general strategy is (of course) to explain our reliability about these sorts of nonempirical matters by appeal to linguistic conventions. Presumably, then, what they’ll need to do here is to appeal to our conventions for the use of S and then to say something about what the connection is supposed to be between those conventions and the fact that all vixens are foxes. (A connection of the right sort will, one hopes, be able to do the epistemological work of explaining, in a way that respects Naturalist-Friendliness, our access to that fact.) But what exactly should they say?
One story to tell here, the standard story, is that the connection is as follows: the conventions for the use of S and the fact that all vixens are foxes, taken together, fully explain the truth of S. To endorse (3) is to embrace this story, and the upshot of the objection from worldly fact (as I’ve reconstructed it) is that—unless the fact that all vixens are foxes itself obtains by convention—this story is inconsistent with conventionalism. Warren’s point, though, is that conventionalists can (and perhaps should) respond to the objection by denying that this story is mandatory. And this is fair enough—perhaps it’s true that opponents of conventionalism are illegitimately relying on a contentious metasemantic theory. But we need to remember that conventionalists still owe some story here.
They need to say something about the connection between our conventions for the use of S and the fact that all vixens are foxes, and the story they tell must give them the resources to explain our access to this latter fact. Furthermore, Naturalist-Friendliness places strict constraints on what the story may look like—it tells us that the resulting explanation of our access to the fact that all vixens are foxes must not require appeal to any facts our access to which remains mysterious. The question, then, is what sort of story, if any, remains available.
And the answer, I claim, is that there’s no story for conventionalists to tell here. After all, if they deny that the fact that all vixens are foxes—i.e., v—plays a role in explaining the truth of S, only three possibilities remain: (i) the truth of S fully explains v; (ii) the truth of S plays a role in explaining but doesn’t fully explain v; or (iii) neither plays a role in explaining the other. And conventionalists, it turns out, can’t accept any of these. I consider them in turn.
In a certain sense, (i) is the most promising—it ensures that there’s a tight explanatory relationship between v and the truth of S, a relationship that can explain how our access to the latter might give us access to the former. The problem with (i), though, is straightforward: it doesn’t actually give conventionalists a way to avoid (C). And this is independent of epistemological considerations. To embrace the claim that the truth of S fully explains the fact that all vixens are foxes, after all, is just to replace (3) with the following:
(3′) All vixens are foxes purely in virtue of S’s being true
And the argument for (C) remains valid when (3) is replaced with (3′). In fact, (1) and (4) aren’t even needed in this new version of the argument: (C) follows just from (2) and (3′).
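The two-premise derivation can be set out schematically before being walked through in prose. (The abbreviation PIV is mine, not part of the dissertation’s official notation: PIV(x, Δ) abbreviates “x obtains, or is true, purely in virtue of the facts in Δ”; C is, as before, the set of facts about convention; and the transitivity assumption is the one flagged in the accompanying footnote.)

```latex
% Schematic reconstruction of the (2)/(3') argument for (C).
% PIV(x, \Delta): x obtains (or is true) purely in virtue of the facts in \Delta.
\begin{align*}
\text{(3$'$)}\quad & \mathrm{PIV}(v,\ \{\text{the truth of } S\})\\
\text{Transitivity}\quad & \mathrm{PIV}(\text{the truth of } S,\ C)
  \rightarrow \mathrm{PIV}(v,\ C)\\
\text{(2)}\quad & \neg\,\mathrm{PIV}(v,\ C)\\
\text{(C)}\quad & \therefore\ \neg\,\mathrm{PIV}(\text{the truth of } S,\ C)
\end{align*}
```

The step to the conditional uses transitivity of the purely-in-virtue-of relation together with (3′); (C) then follows from the conditional and (2) by modus tollens.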
By (3′), v obtains purely in virtue of the truth of S, which means that, if S itself were true purely in virtue of facts about convention, v would thereby also obtain purely in virtue of facts about convention.40 But (2) tells us that v doesn’t obtain purely in virtue of facts about convention. By (2) and (3′), then, S can’t be true purely in virtue of facts about convention, which is to say that (C) must be true.
So it’s easy to show that, if conventionalists embrace (i), the objection from worldly fact loses none of its force. Showing why conventionalists can’t embrace either (ii) or (iii), though, isn’t so straightforward—neither of them leads to outright inconsistency with conventionalism. That is, it’s possible to consistently embrace either of them while endorsing (1), (2), (4),41 and the negation of (C). The problem is instead an epistemological one: neither (ii) nor (iii) can give conventionalists the resources to explain, in a naturalist-friendly way, our near-perfect reliability about facts like the fact that all vixens are foxes. And this is just to say that any conventionalist view that embraces either claim will thereby fail to do the epistemological work conventionalism is intended to do.
We can see why this is by noting that, if either claim is correct, then conventionalists have no way of explaining why S’s being true by convention is sufficient to guarantee that v obtains. Let’s suppose that either (ii) or (iii) is true: either the truth of S plays some role in explaining v but doesn’t fully explain it, or neither plays any role in explaining the other.
40 Assuming, again, that the purely-in-virtue-of relation is transitive.
41 Or (4−).
In either case, there are some facts that play a role in explaining v but that aren’t explained by the truth of S.42 Furthermore, at least one of these facts—call it f—can’t either be a fact about convention or obtain purely in virtue of facts about convention (nor can its role in explaining v be derivative of its role in explaining any fact about convention), since (2) rules out the possibility that v obtains purely in virtue of convention. So, if conventionalists are to explain our near-perfect reliability about facts like v (and thereby meet the Object-Level Belief constraint), they must explain how we can be sure, in cases like this, that some suitable f obtains. Otherwise, they won’t be able to explain our ability to rule out the following possibility: that v fails to obtain as a result of there being no suitable f available. But conventionalism just doesn’t give us the resources to explain, in any naturalist-friendly way, how we have access to any such f.43 And that means that conventionalists who go in for either (ii) or (iii) will thereby be unable to meet the Naturalist-Friendliness constraint.
Summing up: conventionalists can’t avoid (C) by replacing (3) with either the claim that v obtains only partly in virtue of the truth of S or the claim that neither obtains even partly in virtue of the other. To try to do so, after all, would be to abandon the central task for which conventionalism was designed: to provide a naturalist-friendly explanation of our ability to reliably form true object-level beliefs in certain domains.
42 Unless, of course, v is fundamental. But if that’s so, S’s being true by convention certainly can’t guarantee that v obtains—whether v obtains, in that case, has nothing whatsoever to do with what conventions are in place.
43 I want to note that there’s a gap in this argument. We haven’t ruled out the possibility that the relevant f plays some role in explaining v and also, independently, plays some additional role in explaining some fact about convention c (i.e., that f’s role in explaining v isn’t merely derivative of its role in explaining c). If this were true, it might be possible to tell some story of the following form: Since c is a fact about what conventions are in place, it’s no mystery how we have access to c. But facts about convention aren’t fundamental—they obtain in virtue of certain other facts about us, facts about our linguistic dispositions, brain states, etc. Our access to c, then, puts us in a position to be sure also that some other facts obtain (namely, the facts, whatever they are, in virtue of which c obtains). So, if we’re somehow in a position to be sure that, whenever c obtains, one of the facts in virtue of which it obtains will also, independently, play a role in explaining v, then we may be able to be sure, just by knowing what conventions are in place, that some suitable f will be available. But there are at least two problems with this strategy. First, no plausible story of this sort has ever been given, and it’s not at all clear what a plausible story might look like. And second (and more importantly), any motivation for endorsing (2) is equally a motivation for ruling out the possibility that f has independent roles to play in explaining v and c. The central reason for endorsing (2), after all, is to respect the thought that facts about how things are with us can’t fully explain facts about how things are with the world outside of us, such as the fact that all vixens are foxes. But if f plays a role in explaining c, then f is a fact about how things are with us, despite the fact that it’s not itself a fact about convention. So, if conventionalists embrace this strategy, they’re still forced to say that v obtains purely in virtue of facts about us. And since the motivation for endorsing (2) in the first place is to avoid that result, conventionalists who remain committed to (2) shouldn’t embrace this strategy.
And we’ve already seen that conventionalists also can’t avoid (C) by replacing (3) with the claim that v obtains purely in virtue of the truth of S, since the argument for (C) remains valid when we replace (3) with (3′). So, since the only way to reject (3) is to embrace one of those three claims, there’s no way for conventionalists to avoid (C) just by rejecting (3).
What this tells us is that conventionalists’ only remaining option is to reject (2)—doing so is the only way they can resist (C) and thereby hold on to their view. That is, those who want to affirm that facts about convention fully explain the truth of the sentence “All vixens are foxes” are thereby also committed to affirming that facts about convention fully explain the fact that all vixens are foxes. To maintain their view, then, conventionalists must find a way to make this latter claim plausible.
This, then, is my primary conclusion in this chapter: if conventionalism is to remain a live option, conventionalists need to provide a plausible metasemantic theory according to which certain facts about the world outside of us (such as the fact that all vixens are foxes) obtain by convention alone. And the upcoming chapter is devoted to showing that such a theory is available. But before we move on to that chapter, we need to address one other problem.
The problem is this: the very idea that facts about the world might obtain by convention is widely believed to be obviously absurd. And if this belief is correct, conventionalism is, of course, false: we’ve just seen that the only way to avoid (C) is to deny (2), which means that, if denying (2) is absurd, then (C) is unavoidable. Opponents of conventionalism, then, will surely see my conclusion here as a vindication of their point of view. But we should be careful: the argument we’ve been discussing is conclusive against conventionalism only if it is indeed obvious that facts about the world can’t obtain by convention.
Our question, then, is what reasons conventionalism’s opponents can give for taking this latter claim to be obvious. (Surely the obviousness isn’t supposed to just be intuitive—one of our reasons for taking conventionalism seriously in the first place, after all, is that there are concerns about the epistemic status of mere appeals to intuition.) And the answer usually given is this: the claim that facts about the world can be a matter of convention is obviously false for the simple reason that it has absurd implications, implications that are inconsistent with certain of the things we take to be a priori. Conventionalists, then, have a way forward here only if it can be shown that their doctrine doesn’t in fact have the implications in question.
As it turns out, this work can be done. In fact, it has already been done—theorists sympathetic to conventionalism have shown that the doctrine doesn’t have the absurd implications it’s often taken to have. But these theorists’ findings have, I think, gone underappreciated, and so I end this chapter with a brief discussion of what the absurd implications of denying (2) are purported to be and what conventionalists can say about those purported implications.
5 Two objections: Idealism and contingency
Those who endorse the claim that facts about convention can’t fully explain the fact that all vixens are foxes—i.e., those who endorse (2)—aren’t always explicit about their reasons, but as far as I can tell, these theorists tend to be motivated by one or both of the following sorts of worry: that denying (2) would commit us to idealism and that denying it would commit us to incorrect verdicts about the truth values of certain modal claims. So, if we can show that the denial of (2) doesn’t in fact carry with it these commitments, we can thereby show that conventionalism remains worth exploring.
The first thing to note here is that the basic thought underlying both sorts of worry is the following: If the fact that all vixens are foxes obtains purely in virtue of facts about what conventions are in place, then our conventions somehow determine whether all vixens are foxes, which means that whether all vixens are foxes will be in some way sensitive to whether the relevant facts about convention obtain. That is, whether all vixens are foxes is going to vary depending on what conventions are in place. This sort of convention-sensitivity is what’s thought to be at the root of both of the problematic commitments mentioned above. I discuss them separately.
It’s easy enough to see how convention-sensitivity gives rise to the worry about modal claims. Facts about what conventions are in place, after all, are contingent: our linguistic conventions could easily have been different. So, if it’s really the case that whether a given fact f obtains varies depending on whether certain facts about convention obtain, it’s natural to infer that, in at least some of the nearby possible worlds where the relevant facts about convention don’t obtain, f doesn’t obtain either. And if that’s right, then modal claims such as the following turn out to be true:
• It’s contingent that all vixens are foxes.44
• Necessarily, if we accept the sentence “All vixens are female goats” as a matter of convention, then all vixens are female goats.45
• If, as a matter of convention, we applied the word vixen to all and only those things to which we were willing to apply both female and goat, not all vixens would be foxes.46
And so on.
The problem, of course, is that each of these modal dependence claims seems obviously false—the supposition that whether all vixens are foxes is sensitive to contingent facts about convention appears to have led us into absurdity.47
44 One theorist who takes conventionalists to be committed to claims of this sort is Simon Blackburn (1986: 121)—he argues that necessities can’t be explained by contingent facts about convention, since “the explanation, if good, would undermine the original modal status: if that’s all there is to it, then [for example] twice two does not have to be four”.
45 This claim is a variant of the one that’s the basis of Lewy’s (1976) central objection to conventionalism, as discussed in §1 above.
46 Considering counterfactuals like this one is the usual way of getting at the worry here. For example, both Boghossian (1996) and Barry Stroud (1984) cite conventionalists’ purported commitment to such counterfactuals as a serious problem for conventionalism, as does C. I. Lewis (1946: 148), whose argument against truth by convention goes via the following claim: “If the conventions were otherwise, the manner of telling would be different, but what is to be told, and the truth or falsity of it, would remain the same”.
47 It has sometimes been suggested that conventionalists can respond to this sort of worry by rejecting the S4 schema ⌜◻A → ◻◻A⌝—doing so would allow them to say, for instance, that it’s necessary, but only contingently necessary, that all vixens are foxes. This response, though, strikes me as hopeless, for the following reason (among others): In a nearby world in which it’s possible that not all vixens are foxes (because, say, the convention in that world is to apply vixen to female goats instead of to female foxes), it may well also be true that not all vixens are foxes (if, say, the world is one where there are female goats). So the only way for us, here in the actual world, to preserve the necessity of the truth that all vixens are foxes is to maintain that such a world is impossible—i.e., to say that, though it’s possible for us to have a convention of applying vixen to female goats instead of to female foxes and it’s also possible for there to exist female goats, a world that meets both of these conditions isn’t possible. But there’s no reason whatsoever to think such a world is impossible. (For more on this point, see Sidelle 2009: §2.)
As defenders of conventionalism have pointed out, though, this sort of worry is based on a misunderstanding. What conventionalists are committed to, again, is the claim that our conventions determine whether certain facts obtain, so that whether these facts obtain varies depending on what conventions are in place. And it’s tempting, to be sure, to conclude that conventionalists are thereby committed to the claim that different facts would have obtained had our conventions been different—it’s natural to suppose, after all, that the kind of covariance that’s in play here just entails the truth of modal dependence claims of this sort. But that supposition, reasonable as it may seem, turns out to be mistaken.
It’s true enough that, in the usual cases where one fact determines another in such a way that we can change whether the second obtains by changing whether the first obtains (such as cases of causal dependence), we can read off the dependence from the facts’ modal profiles. But given the special role conventions play in our linguistic practice, a straightforward explanation is available of why, in the particular case of worldly facts that vary depending on whether certain facts about convention obtain, we should expect the dependence not to be reflected in the modal relationships among these facts. Observe, first of all, that our conventions have an important role to play even in our descriptions of possible worlds other than our own: when we talk about what’s the case in possible but nonactual situations, we do so using our own language, with all its attendant conventions. Furthermore, one of our conventions, plausibly, is to be willing to apply the predicate vixen to anything to which we’re willing to apply both female and fox—regardless of whether the thing we’re talking about exists in the actual world or in some other possible situation in which our conventions (i.e., the conventions of our counterparts) are different.48 And if that’s right, then we can explain why the fact that all vixens are foxes doesn’t modally depend on our conventions, and the explanation has nothing to do with whether conventionalism is correct—it simply has to do with what our conventions direct us to say about possible worlds other than our own. (Consider: there’s nothing obviously incoherent about a language in which the convention for describing what’s the case in a possible but nonactual situation in which our linguistic conventions are different is to defer to the linguistic conventions that are in place among our counterparts in that situation.49 We don’t in fact speak such a language, but we very well could have. The fact that we’ve chosen not to speak such a language is surely irrelevant to whether conventionalism is correct.)
To put the point slightly differently: The reason we tend to understand the sort of covariance that’s in play here in terms of modal dependence is that thinking about what’s the case in possible but nonactual situations is our usual method for getting a grip on how facts are related to one another—we learn about the determination relations among facts by imagining worlds where certain facts don’t obtain and so can’t participate in any such relations. But in the particular case of facts about convention, this method is ineffective. Given the way our conventions work, thinking about possible worlds with different conventions is not a way of preventing our actual conventions from playing a determinative role: again, even when we talk about possible worlds other than our own, our talk is governed by our actual conventions. What this suggests is that thinking about what’s the case in possible but nonactual situations just isn’t the right way to settle the question of whether the fact that (e.g.) all vixens are foxes obtains purely in virtue of facts about convention.
Of course, this isn’t the end of the discussion. It’s still the case that the usual way of making sense of the sort of covariance that’s in play here is in terms of modal dependence. Conventionalists, then, need an alternative way of understanding the claim that whether a given fact obtains varies depending on what conventions are in place, one on which that claim can be true even if all the usual modal dependence claims are false. That is, they need to say just what it is for a fact to be convention-sensitive, if convention-sensitivity doesn’t entail modal dependence. What (if anything) can they say here?50 Fortunately, a satisfying answer is available.
48 As Wright (1985: 190) puts it, our talk is governed by the general convention that “what it is true to say of a hypothetical state of affairs…is to be determined by reference to our actual linguistic conventions, even if those are not the conventions that would then obtain”. And Sidelle (1989: 7) explains, similarly, that “if it is a convention of ours that nothing in any possible situation counts as water if it is not composed of H2O, then this very convention tells us that in the subclass of possible situations in which we have different conventions, still, nothing counts as water that is not H2O: that is, that it is necessary that water is H2O”.
49 This point often seems to be in the background of discussions by defenders of conventionalism, but the most explicit statement of it comes from Wright (1985: 191).
The motivation for conventionalism, remember, is epistemological: the goal is to provide a naturalist-friendly explanation of how we manage to have nonobservational access to facts in certain domains. And as we’ve just seen, conventionalists, in order to do this explanatory work, must provide a story on which the facts in question obtain purely in virtue of our linguistic conventions. For the conventionalist strategy to be successful, then, the purely-in-virtue-of relation—whatever sort of explanatory relation it turns out to be, in the end—must be such that, when facts obtain purely in virtue of convention, there’s no mystery about how we have nonobservational access to those facts. And for that to be so, it must be that our conventions can’t lead us astray: the relevant facts must somehow be guaranteed to obtain given that the associated conventions are in place. So, for any sentence we accept as a matter of convention, what that sentence says is going to be true—there’s no way for our conventions to be mistaken.51
This, then, is the sense in which the fact that all vixens are foxes is convention-sensitive, according to conventionalism: given that the relevant conventions governing our use of the sentence “All vixens are foxes” are in place, the truth of that sentence is guaranteed—as is the obtaining of a corresponding fact. And on this way of understanding convention-sensitivity, conventionalists’ commitment to the convention-sensitivity of the fact that all vixens are foxes doesn’t commit them to any of the above modal dependence claims.
They can instead endorse the usual picture about possible but nonactual situations in which our conventions are different: that, in a possible situation in which (say) we applied the word vixen to all and only those things to which we were also willing to apply both female and goat, the sentence “All vixens are goats” would be true, despite the fact that all vixens would still be foxes (and so wouldn’t be goats), since the sentence would then mean something different than it does in our actual language. What conventionalists add is only that the sentence “All vixens are goats” is guaranteed to be true in that situation, in the sense that no cooperation from the convention-independent world is required—the truth of the sentence is a matter of convention alone and so isn’t hostage to whether some suitable corresponding fact is antecedently available. And it’s clear enough that the addition of this claim doesn’t render the picture incoherent.
The worry about modal claims, then, has been answered. As it turns out, the claim that the fact that all vixens are foxes is convention-sensitive entails none of the obviously false modal dependence claims it’s been taken to entail.
50 Sider (2011: 102) expresses a version of this worry, saying that it’s “unclear just what sort of dependence of truth upon conventions is supposed to be distinctive of conventionalism” given that “the conventionalist will surely deny counterfactual or temporal dependence”. The worry also appears to be the basis of Crawford Elder’s case against conventionalism—he suggests that denying that counterfactual dependence follows from a fact’s obtaining by convention would be “a desperate move indeed” (2006: 14).
51 This is a version of a point made by Sidelle (2009: 234–235).
But thinking about the sort of convention-sensitivity that’s in play here brings us back to the other worry mentioned at the beginning of this section: that conventionalists’ commitment to convention-sensitivity might bring along with it a commitment to some absurd sort of idealism. After all, if the facts really do vary with convention in such a way that our conventions can’t be mistaken, it’s natural to try to explain this phenomenon by claiming that reality is mentally constructed. And that claim, historical pedigree notwithstanding, seems absurd.52
Here again, though, a response is available, and we can again get a grip on it by remembering that the epistemological purpose of conventionalism is to explain how we manage to access certain facts our access to which is otherwise mysterious. This means that, in order for conventionalism to have any epistemological work to do, there must exist some such facts. That is, there must be domains of facts such that questions about whether those facts obtain can’t be settled by what’s given to us via observation (or by any other means, though conventionalists tend to be naturalists and empiricists and so tend not to countenance other means of access). After all, the only domains about which there’s any reason to be a conventionalist are those domains for which we don’t otherwise have a satisfying epistemology—i.e., those about which observation is neutral (as is any other means of access to the world we take ourselves to have).
52 Explicit expressions of this worry in the literature are rare. (Stroud’s (1984: chap. 5) discussion of Carnap’s conventionalism is an exception, although Stroud seems to take the worry about idealism to be equivalent to the worry about modal claims.) Still, I’ve heard it expressed fairly often in conversation, and so I think it’s worth saying something about.
So, on any reasonable conventionalist view, the only facts that can be fully explained by our conventions are facts in these epistemologically problematic domains.
Even according to conventionalists, then, the power of conventions is limited: only certain sorts of questions about what the world is like—namely, those that we don’t otherwise have any good way of answering—may have the answers they do as a matter of convention. (Exactly which questions these are is up for debate, of course; which domains of facts are amenable to a conventionalist treatment will depend on just how much of our access to the world is unproblematic—that is, on just how much we can take to be given to us, by observation or any other means. Determining the limits of a reasonable conventionalism, then, will require serious epistemological investigation. But we need not engage in any such investigation here—our discussion can remain schematic.) And the questions we do have a good way of answering may very well have answers that have nothing to do with our mental states, which means there remains room for a mind-independent reality. According to Sidelle, for instance, his modal conventionalism isn’t “in any interesting way idealist” despite the fact that he takes the modal properties (and hence the boundaries) of objects to be a matter of convention, since “the ‘material’ [out of which we carve objects via our conventions] is taken to be wholly mind-independent” (2010: 109). And if we abstract away from Sidelle’s particular brand of conventionalism, we can see more generally how a conventionalist picture is going to work: “The world”, as Einheuser (2006: 461) puts it, “provides some material, the substratum (or stuff), which is neutral with respect to the features that are taken to be conventional”, and “onto this substratum, features of the kind in question can be conventionally imposed in many different ways”. The substratum, in other words, consists of what’s given to us via observation (or by some other unproblematic means of access), and we, in adopting the relevant conventions, lay some additional structure over the top of this substratum. And insofar as worldly structure is conventional in this way, facts about how the world is structured obtain purely in virtue of convention, since they obtain regardless of what the substratum is like.53
Now, exactly how much of the world’s structure is taken to be conventional in this way will depend on the limits of the particular conventionalist theory we’re talking about—our conventions may, for instance, be responsible for the modal properties of material objects, or for all properties of material objects, or for the entire realm of abstract objects, or for all of the above and more. But again, we need not settle the details here. For our purposes, the point is just that there’s a substratum, and it’s distinct from whatever is conventional. And this means that the sort of convention-sensitivity to which conventionalists are committed does not entail that reality is mentally constructed: even if certain of the facts of the world obtain purely in virtue of convention (and so are convention-sensitive), the possibility remains that there exists an entirely mind-independent realm (i.e., the substratum) and that what explains why the facts in question obtain is just that we have, by convention, overlaid some additional structure on this mind-independent realm.54
What all this shows is that conventionalists’ commitment to the claim that facts about convention fully explain the fact that all vixens are foxes—and to the associated claim that whether this latter fact obtains is sensitive to whether the relevant facts about convention obtain—doesn’t in fact carry with it either of the commitments that have motivated opponents of conventionalism to reject those claims (i.e., to endorse (2)).
So the quick dismissals that are common in the literature are all of them premature. Conventionalism stands unrefuted. I suggest, then, that, given its unique epistemological advantages, conventionalism remains worth exploring. But this isn't to say that our work is done. So far, we've been able to show that the claim that facts about convention can fully explain facts about the world isn't obviously absurd, which is a start. But to demonstrate that conventionalism is really a tenable view, we need to do much more: we need to provide a plausible metasemantic theory on which that claim is true. This is my task in Chapter 3.

53 Einheuser uses this idea to develop a conventionalist-friendly generalization of possible world semantics in which a world is understood as a pair containing a substratum and a "carving" (roughly, a function from substrata to arrangements of whatever features are taken to be conventional), and then she uses this formal framework to give a rigorous argument showing that convention-sensitivity doesn't entail counterfactual dependence. (Jonathan Livingstone-Banks (2016) provides an alternative formal framework in the spirit of Einheuser's, though I'm not convinced that his worries about Einheuser's own framework are compelling.)

54 In any case, I'm not sure the idealism question is helpful. A conventionalist theory in a given domain either does or does not answer the epistemological question of how we manage to have nonobservational access to facts in that domain, and if it does, the fact that it does either is or is not sufficient to render the theory more attractive than its alternatives. The additional question of whether or in what sense it counts as an idealist theory doesn't seem particularly probative.
Appendix to Chapter 2
The Logic of the In-Virtue-Of Relations

a1 The relations in question

As suggested in the foregoing discussion, the in-virtue-of relations relevant to conventionalism are certain sorts of noncausal, nonpragmatic relations of explanatory dependence. (Recall that we're taking no official stand about whether they're grounding relations.) The following rough classification will be convenient:
• A fact f obtains (at least) partly in virtue of a fact g just in case g plays a role in explaining f.
• A fact f obtains (at least) partly in virtue of a (nonempty) set of facts Γ just in case each fact in Γ plays a role in explaining f.
• A fact f obtains purely in virtue of a fact g just in case g fully explains f.
• A fact f obtains purely in virtue of a (nonempty) set of facts Γ just in case the facts in Γ, taken together, fully explain f.
It's clear, given these rough construals, that, in order to give a full account of what these in-virtue-of relations come to, we'd need to specify just what sort of explanation is in play here. And as I've suggested, now is not the time to give such a specification—we should instead try to generate a conventionalist-friendly metasemantic theory and then think about what sorts of explanatory claims it entails. But since we don't have such a theory in hand at this point, we're not yet in a position to say much of substance about the sort of explanation that's in play. Still, we can say something now about what the in-virtue-of relations are going to look like. After all, any (nonpragmatic) explanatory relation is going to have certain structural properties. (For instance, since explanation, plausibly, can't go in a circle, an explanatory dependence relation must be both irreflexive and asymmetric.1) In addition, there are obvious structural connections between the relevant relations.
(For instance, the rough construals given above entail the following two claims, among many others: first, a fact f obtains at least partly in virtue of a nonempty set of facts Γ just in case, for each fact g in Γ, f obtains at least partly in virtue of g; and second, if f obtains purely in virtue of g, then f obtains at least partly in virtue of g.) And finally, conventionalists have certain commitments about how the relations work, as we saw in the foregoing discussion. (They're committed, for instance, to the possibility of providing different full explanations of a given fact by appealing to facts at different levels in the explanatory hierarchy.) Given all that, there are going to be certain formal constraints on these relations. In this appendix I give a partial characterization of the relations by laying out some of those constraints, and then I show that the constraints entail the truth of certain claims to which I appealed in my above discussion of the objection from worldly fact.

1 Thus the in-virtue-of relations are formally similar to what Fine calls relations of strict—rather than weak—ground. (His relations of weak ground, which allow self-grounding, aren't explanatory relations in the relevant sense. He admits as much, saying that, although (for instance) John's marrying Mary weakly grounds John's marrying Mary, "we cannot say that [it] ground[s] in our original sense, since John's marrying Mary [does] not account for John's marrying Mary" (2012b: 3).)

a2 Formal characterizations of the in-virtue-of relations

We have four distinct relations to characterize: an at-least-partly-in-virtue-of relation whose relata are two facts, a purely-in-virtue-of relation whose relata are two facts, and analogs of each of these whose relata are a fact and a set of facts. (The former two I'll sometimes call the single-fact versions of the at-least-partly-in-virtue-of and purely-in-virtue-of relation, respectively, and the latter two I'll sometimes call the set versions.2) But for reasons that will become clear in a moment, it turns out that the simplest way to give a precise account of how these relations are structurally connected is to characterize them in terms of another relation between two facts, which I'll call the immediately-in-virtue-of relation. This relation is construable roughly as follows:
• A fact f obtains immediately in virtue of a fact g just in case g plays a role in explaining f but doesn't do so just by playing a role in explaining some other facts which themselves have roles to play in explaining f.
That is, f obtains immediately in virtue of g just in case the explanation of f in which g figures is direct, in the sense discussed in §3 of the foregoing discussion.3 Consider, for example, the fact that a given natural-language sentence has the meaning it does. This fact—call it m—is explained by various facts about what linguistic conventions are in place, and those facts are explained by various facts about the dispositions, brain states, etc., of members of the linguistic community. Let d be one of these latter facts—say, a fact about a particular community member's disposition to apply some word in a particular set of circumstances. Then m obtains at least partly in virtue of d, but it doesn't obtain immediately in virtue of d: d figures in explaining m only because it figures in explaining some fact about convention c that itself explains m. Plausibly, though, m does obtain immediately in virtue of c. After all, it's not the case that some convention's being in place plays a role in explaining the sentence's having the meaning it does by playing a role in explaining some third fact—all it is for the sentence to have the meaning it does is for the relevant conventions to be in place. That's the intuitive idea, anyway.
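The m/c/d hierarchy just described can be pictured as a small directed graph, with an edge from each fact to the facts it obtains immediately in virtue of. The following Python sketch is purely illustrative (it assumes a finite collection of facts and uses the labels m, c, and d from the example); nothing in the official formalism depends on it.

```python
# Toy model of the example (illustrative only): an edge runs from a fact
# to the facts it obtains immediately in virtue of.
immediate = {
    "m": {"c"},   # the meaning fact, immediately in virtue of the convention fact
    "c": {"d"},   # the convention fact, immediately in virtue of the disposition fact
    "d": set(),   # d is treated as fundamental in this toy model
}

def in_virtue_of(f, g, edges):
    """f obtains at least partly in virtue of g: g is reachable from f
    by one or more immediately-in-virtue-of steps."""
    seen = set(edges[f])
    frontier = set(edges[f])
    while frontier:
        frontier = {h for x in frontier for h in edges[x]} - seen
        seen |= frontier
    return g in seen

print(in_virtue_of("m", "d", immediate))  # True: m is explained partly by d...
print("d" in immediate["m"])              # False: ...but not immediately by d
```

Here reachability along one or more edges plays the role of the ancestral of the immediately-in-virtue-of relation, which is how the single-fact at-least-partly-in-virtue-of relation is characterized below.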
For our purposes here, though, we can, if we like, think of the immediately-in-virtue-of relation as a mere formal device. What's useful about it is that, if we treat it as a primitive in terms of which the relations we're really interested in are to be characterized, we end up with a relatively simple formal structure that gives us an easy way to distinguish between the following two sorts of case:
(a) A fact h plays a role in explaining f only because h plays a role in explaining g, which itself plays a role in explaining f.
(b) A fact h plays a role in explaining g, which plays a role in explaining f, but h also has a different role to play in explaining f, some role that has nothing to do with g.
And it turns out, for reasons I explain below, that we need to be able to distinguish (a) from (b) if we want to give a complete account of the structural connections between the at-least-partly-in-virtue-of relations and the purely-in-virtue-of relations. So I proceed as follows: first, I say something about what structural constraints there are on the immediately-in-virtue-of relation, and then I explain how to characterize the rest of our relations in terms of it.4 (There are ways to distinguish (a) from (b) without assuming that the in-virtue-of relations we're interested in are to be characterized in terms of the immediately-in-virtue-of relation, but they give rise to irrelevant complications. I proceed as I do for the sake of simplicity.)

2 Both Rosen and Fine think of the single-fact versions of their grounding relations as special cases of the set versions. For our purposes here, though, it will be simpler to give separate characterizations of the single-fact versions of the in-virtue-of relations.

3 I want to note that my construals of the different flavors of in-virtue-of relation are strikingly similar in structure to Bernard Bolzano's construals of certain grounding relations (see, e.g., his 1837/1972: bk. 2, pt. 3).
Now: let “Š” stand for “obtains (at least) partly in virtue of ”, so that ⌜f Š g⌝ says that f obtains at least partly in virtue of g and ⌜f Š Γ⌝ says that f obtains in virtue of Γ. Similarly, let “Šp ” stand for “obtains purely in virtue of ”, so that ⌜f Šp g⌝ says that f obtains purely in virtue of g and ⌜f Š Γ⌝ says that f obtains purely in virtue of Γ, and let “Ši ” stand for “obtains immediately in virtue of ”, so that ⌜f Ši g⌝ says that f obtains immediately in virtue of g. We start, then, by saying something about the restrictions on when it can be the case that f Ši g. (Given that we’re treating the immediately-in-virtue-of relation as a primitive, we won’t, of course, be giving a full characterization.) The general constraint from which many of the others we’re interested in are going to follow is that this relation must be such that its 4 The immediately-in-virtue-of relation is similar to Fine’s relation of immediate ground (see 2012a: 50–51), but the latter relation doesn’t appear to play a role in Fine’s official logic of ground. 65 ancestral (i.e., its transitive closure) is irreflexive: ¬(f Š∗i f ) General Constraint The basic reason for this constraint is that the immediately-in-virtue-of-relation is an ex- planatory dependence relation, and as I said above, there can’t be explanatory circles. The constraint just prohibits all such circles. We make the constraint explicit here so that we can use it later to prove that our other relations, characterized in terms of this one, also don’t allow explanatory circles. Before we move on, I want to point out some of the immediate consequences of the General Constraint. First, the immediately-in-virtue-of relation itself is both irreflexive and asymmetric. Proof of the irreflexivity of f Ši g. If f Ši f , then f Š∗i f . But that’s a violation of the Gen- eral Constraint. ◻ Proof of the asymmetry of f Ši g. If f Ši g ∧ g Ši f , then f Š∗i f . But that again is a viola- tion of the General Constraint. 
◻

And second, the ancestral of the immediately-in-virtue-of relation is asymmetric.

Proof of asymmetry of f Š∗i g. Ancestral relations are transitive. So, if f Š∗i g ∧ g Š∗i f, then f Š∗i f, in violation of the General Constraint. ◻

Now to characterize the rest of our relations. The single-fact version of the at-least-partly-in-virtue-of relation is easy—it can be thought of as just identical to the ancestral of the immediately-in-virtue-of relation:

f Š g ≡df f Š∗i g

That is, f obtains at least partly in virtue of g just in case g either plays a role in directly explaining f or plays a role in explaining some other fact which itself plays a role in explaining f. Note that this relation, since it's identical to the ancestral of the immediately-in-virtue-of relation, is irreflexive, asymmetric, and transitive.5 As for the set version of the at-least-partly-in-virtue-of relation, it's clear from the rough construals given above that it can very easily be characterized in terms of the single-fact version, as follows:

f Š Γ ≡df Γ ≠ ∅ ∧ ∀g(g ∈ Γ → f Š g)

Questions about the irreflexivity, asymmetry, and transitivity of this relation are inapt—its two relata are of different types, after all—but it does have each of the following analogous properties:

¬(f Š Γ ∧ f ∈ Γ)   Strong Irreflexivity
¬(f Š Γ ∧ g ∈ Γ ∧ g Š ∆ ∧ f ∈ ∆)   Strong Asymmetry
f Š Γ ∧ g ∈ Γ ∧ g Š ∆ → f Š (Γ − {g}) ∪ ∆   Strong Transitivity

(The labels for these properties were taken from Rosen 2010.)

Proof of the strong irreflexivity of f Š Γ. If f Š Γ ∧ f ∈ Γ, then f Š f. But we've already proved that ¬(f Š f). ◻

Proof of the strong asymmetry of f Š Γ. If f Š Γ ∧ g ∈ Γ ∧ g Š ∆, then f Š g and, for any h ∈ ∆, g Š h.
So, since we’ve already proved that ¬(f Ši g ∧ g Ši f ), we know that f ∉ ∆.◻ 5 Though I think this relation should turn out to be transitive, there may not be general agreement here— Schaffer (2012), for instance, has recently given some putative counterexamples to the transitivity of the partial grounding relation, which I take to be structurally similar to the at-least-partly-in-virtue-of relation we’re in- terested in here. In any event, though, the assumption of transitivity seems unproblematic in the sorts of cases relevant to the foregoing discussion. It’s clear enough, for example, that, if a sentence is true partly in virtue of some convention’s being in place, and if that convention is in place partly in virtue of some fact about the dispositions of a member of the linguistic community, then the sentence is true partly in virtue of the fact about the community member’s dispositions. 67 Proof of the strong transitivity of f Š Γ. If f Š Γ ∧ g ∈ Γ ∧ g Š ∆, then f Š g and, for every other fact h ∈ Γ, f Š h. Furthermore, for any d ∈ ∆, f Š d, since g Š d and the single-fact version of the at-least-partly-in-virtue-of relation is transitive. So f Š (Γ − {g}) ∪ ∆. ◻ The purely-in-virtue-of relations, though, are trickier. It may be tempting to characterize the set version by saying, for instance, that f Šp Γ just when Γ contains all and only those facts that play a role in explaining f —i.e., just when Γ = {g ∶ f Š g}. But that can’t be quite right: it entails that there’s a unique set of facts Γ such that f Šp Γ, but as I said in §3 of the foregoing discussion, there may be more than one such set of facts. If, for instance, f Šp Γ and Γ contains at least one nonfundamental fact, we can generate a different set of facts ∆ such that f Šp ∆ just by removing some nonfundamental fact g from Γ and replacing it with some other facts Λ such that that g Šp Λ—the facts in Λ, taken together, can play the same explanatory role(s) that g does, though only derivatively. 
The point is this: the facts that have a role to play in explaining a fact f form an explanatory hierarchy, and as a result, it's often possible to give different full explanations of f by appealing to facts that can play the same explanatory role(s) despite being at different levels in that hierarchy. To properly characterize the purely-in-virtue-of relations, we must respect this thought, which means we need some understanding of the notion of explanatory role that's in play here. And understanding this notion turns out to be more difficult than it may seem. It's easy to show that the at-least-partly-in-virtue-of relations can't by themselves do the necessary work here—the explanatory roles played by a given fact in an explanatory hierarchy don't in general supervene on the at-least-partly-in-virtue-of relations that obtain in that hierarchy. Suppose, for instance, that f is a fact such that the only two facts relevant to explaining f are g and h, and suppose further that f Š g and g Š h. Then transitivity entails that f Š h, and irreflexivity and asymmetry then entail that the three single-fact at-least-partly-in-virtue-of relations mentioned so far are the only ones that obtain among f, g, and h. And that means we've settled all questions about what at-least-partly-in-virtue-of relations obtain in this hierarchy. But we haven't yet settled all questions about explanatory role. After all, h may have, in addition to its role in explaining g, a different, more direct role to play in explaining f, one that has nothing to do with g. Here's why this is a problem: Plausibly, whether f Šp g depends on whether h does have this sort of additional role, for the simple reason that g can't fully explain f if h has a role to play in explaining f that isn't derivative of its role in explaining g. (That's why I said above that we need to be able to distinguish between cases like (a) and cases like (b).)
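This supervenience failure can be exhibited in a finite model. In the Python sketch below (illustrative only; the fact names are placeholders), two hierarchies agree on every at-least-partly-in-virtue-of fact but differ over whether h has a direct role in explaining f, and so differ over whether g fully explains f.

```python
# Two toy hierarchies with the SAME at-least-partly-in-virtue-of facts
# but DIFFERENT immediately-in-virtue-of structure. (Sketch only.)
A = {"f": {"g"}, "g": {"h"}, "h": set()}       # h explains f only via g
B = {"f": {"g", "h"}, "g": {"h"}, "h": set()}  # h also explains f directly

def ancestral(edges):
    """Map each fact to everything it obtains at least partly in virtue of."""
    def reach(x):
        seen, frontier = set(), set(edges[x])
        while frontier:
            seen |= frontier
            frontier = {h for y in frontier for h in edges[y]} - seen
        return seen
    return {x: reach(x) for x in edges}

# The single-fact at-least-partly-in-virtue-of relations agree...
print(ancestral(A) == ancestral(B))  # True

def fully_explained_by_g(edges):
    """Does every maximal explanatory path from f pass through g?"""
    stack, ok = [("f",)], True
    while stack:
        path = stack.pop()
        succ = edges[path[-1]]
        if not succ and "g" not in path:
            ok = False
        stack.extend(path + (y,) for y in succ)
    return ok

# ...but whether f obtains purely in virtue of g differs:
print(fully_explained_by_g(A))  # True
print(fully_explained_by_g(B))  # False: the direct f -> h route bypasses g
```

The extra edge from f to h in B is invisible to the ancestral, which is why the purely-in-virtue-of relations can't be recovered from the at-least-partly-in-virtue-of relations alone.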
And if that’s right, then which purely-in-virtue-of relations obtain in a given case turns out to depend on facts about explanatory role to which the at-least-partly-in-virtue-of relations just aren’t sensitive. What this tells us is that the purely-in-virtue-of relations can’t be characterized in terms of the at-least-partly-in-virtue-of relations.6 In order to characterize the former relations, we need a way to distinguish between the different explanatory roles played by a single fact, the different explanatory routes in which that fact figures. And this is where the simple structure provided by the immediately-in-virtue-of relation is helpful—taking that relation as primi- tive allows us to describe explanatory relationships via a simple tree, with facts as the nodes and with each node connected to some number of other nodes by the immediately-in-virtue- of relation. This gives us a way of talking about explanatory routes: we can imagine tracing a path down the tree, starting at any node we like, with the only rule being that a move from a node fi to a node fj is allowed just in case fi Ši fj . A path that starts with a fact f represents an explanatory route to f . We can formally describe these paths as sequences of facts: a sequence describes such a path just in case the sequence contains all and only those facts visited by the path, in the order in which they’re visited. I’ll call a sequence that describes one of these paths an explanatory sequence, and when the first element of such a sequence is f , I’ll call that sequence an f - 6 Fine (2012a: 50) gives a similar argument for the claim that full ground can’t be defined in terms of partial ground. But his problem case is a case of what I’ve called explanatory overdetermination: he relies on the possibility of a single truth’s having two distinct full grounds each of which is entirely explanatorily unrelated to the other. 
The case I’ve presented here demonstrates that, even if we don’t suppose there are cases of explanatory overdetermination, we can still show that the purely-in-virtue-of relations can’t be characterized in terms of the at-least-partly-in-virtue-of relations. 69 sequence. That is, if Σ is the set of explanatory sequences and Σf is the set of f -sequences, Σ =df {⟨fi ⟩1≤i≤n ∶ n > 1 ∧ ∀x(1 < x ≤ n → fx−1 Ši fx )} Σf =df {⟨fi ⟩1≤i≤n ∶ ⟨fi ⟩1≤i≤n ∈ Σ ∧ f1 = f } Note that, on these definitions, an explanatory sequence must have at least two elements. With this new notion, we can easily distinguish cases where h plays a role in explaining f only via its role in explaining g—or, as I’ll sometimes say, where f Š h only via g—from those where h has an additional role to play in explaining f , as follows. A fact h plays a role in explaining f only via its role in explaining g just in case the explanatory routes from h to f all go through g. Or, more formally, f Š h only via g ≡df f Š h ∧ g ≠ h ∧ g ≠ h ∧ ∀σ ∈ Σf (h ∈ σ → g ∈ σ) And we can generalize this condition to distinguish cases where h plays a role in explaining f only via the roles it plays in explaining the facts in some set of facts Γ from those where h has an additional role to play in explaining f : h plays a role in explaining f only via its roles in explaining the facts in Γ just in case the explanatory routes from h to f all go through some member of Γ. Or, more formally, f Š h only via Γ ≡df f Š h ∧ ∀σ ∈ Σf (h ∈ σ → ∃g ∈ Γ(g ≠ f ∧ g ≠ h ∧ g ∈ σ)) Note that Γ, on this definition, may contain extraneous elements. In particular, if any ∆ ⊆ Γ is such that f Š h only via ∆, then f Š h only via Γ as well. We could change our definition to rule out extraneous elements if we liked, but doing so would complicate our treatment unnecessarily. One more bit of new terminology is needed. 
When an initial segment of the path described by one explanatory sequence is identical to the whole path described by another, I'll say that the former sequence extends the latter:

⟨gi⟩1≤i≤n+m extends ⟨fi⟩1≤i≤n ≡df ⟨gi⟩1≤i≤n+m ∈ Σ ∧ ∀x(1 ≤ x ≤ n → gx = fx)

(I'm leaving open the possibility that m = 0. As a result, there are trivial cases of extension: every explanatory sequence extends itself.) We can now characterize the purely-in-virtue-of relations. Consider first the single-fact version, and recall that, according to our rough construal, f Šp g just in case g fully explains f. What this comes to, plausibly, is that f Šp g just in case the following condition holds: first, for any h such that f Š h, either h = g, h Š g, or f Š h only via g; and, second, there is some such h—i.e., f isn't fundamental. And this turns out to be equivalent to another condition: that f Š g and every f-sequence has an extension that contains g.

Proof of the equivalence. First direction: Suppose that there's some fact h such that f Š h and that, for any fact h such that f Š h, either h = g, h Š g, or f Š h only via g. By definition, any f-sequence has, as its second element, some fact d such that f Ši d. So, to show that any f-sequence has an extension that contains g, it's sufficient to show that any f-sequence containing some such d has an extension that contains g. So let d be any fact such that f Ši d. Then, by definition, f Š d, and so, by supposition, either d = g, d Š g, or f Š d only via g. In the latter case, g ≠ f ∧ g ≠ d, and every f-sequence σ that contains d must also contain g. But that means this latter case is impossible: since f Ši d, ⟨f, d⟩ is an f-sequence that contains d but not g. And in the other two cases, it's immediate that any f-sequence containing d has an extension that contains g. So any f-sequence has an extension that contains g. Furthermore, there's at least one f-sequence, since there's some h such that f Š h.
So there’s an extension of that f -sequence that contains g, which means there’s at least one f -sequence that contains g. So f Š g. Second direction: Suppose that f Š g and that every f -sequence has an extension con- taining g. Since g is a fact such that f Š g, there certainly exists at least one such fact. Fur- 71 thermore, g ≠ f , by irreflexivity. Now, let h be any fact such that f Š h. Then there’s at least one f -sequence that contains h. Let σ be any such f -sequence, and suppose g ≠ h and ¬(h Š g). Since, by supposition, some extension of σ contains g, σ itself must contain g, since ¬(h Š g). So ∀σ ∈ Σf (h ∈ σ → g ∈ σ). And since we’ve already determined that g ≠ f and we’ve supposed that g ≠ h, we also have g ≠ f ∧ g ≠ h ∧ ∀σ ∈ Σf (h ∈ σ → g ∈ σ). So, for any h such that f Š h, if g ≠ h and ¬(h Š g), then g ≠ f ∧ g ≠ h ∧ ∀σ ∈ Σf (h ∈ σ → g ∈ σ). Or, equivalently, for any h such that f Š h, either h = g, h Š g, or f Š h only via g. ◻ What this means is that we can characterize the single-fact version of the purely-in- virtue-of relation as follows: f Šp g ≡df f Š g ∧ ∀σ ∈ Σf ∃τ (τ extends σ ∧ g ∈ τ ) Notice that it follows trivially from this characterization that the single-fact purely-in-virtue- of relation is irreflexive and asymmetric. Proof of the irreflexivity of f Šp g. If f Šp f , then f Š f . But that’s been proved impossi- ble. ◻ Proof of the asymmetry of f Šp g. If f Šp g ∧ g Šp f , then f Š g and g Š f . But that’s been proved impossible. ◻ And we can also show that the single-fact purely-in-virtue-of relation is transitive. Proof of the transitivity of f Šp g. Suppose f Šp g and g Šp h. Then f Š g and g Š h, which means f Š h, by transitivity. Furthermore, every f -sequence has an extension that contains g and every g-sequence has an extension that contains h. Suppose σ is an f -sequence. Then it has some extension containing g. Let σ ′ be the sequence we get by re- moving g and every element after g from that extension. 
Now, let τ be a g-sequence. (We know there's such a sequence, since g Š h.) Then τ has an extension—call it τ′—that contains h. The sequence we get by concatenating σ′ and τ′ is, by construction, an extension of σ that contains h. So σ has such an extension. So f Šp h. ◻

We can characterize the set version of the purely-in-virtue-of relation in a similar way: f Šp Γ just in case f Š Γ and every f-sequence has an extension that contains some member of Γ. Or, more formally,

f Šp Γ ≡df f Š Γ ∧ ∀σ ∈ Σf ∃τ ∃g(τ extends σ ∧ g ∈ τ ∧ g ∈ Γ)

Note that, on this characterization, Γ need not be a minimal set of facts such that every f-sequence has an extension containing some member of Γ. That is, it may be that f Šp Γ even if there's some ∆ ⊂ Γ such that every f-sequence has an extension containing some member of ∆. Or, equivalently, it may be that f Šp Γ, ∆ ⊂ Γ, and f Šp ∆. (It follows immediately from the No Overdetermination Thesis in §a3 below that, for this to happen, every g ∈ Γ such that g ∉ ∆ must be explanatorily redundant, in the sense that there are some facts in ∆ that can play g's role(s) in explaining f. That is, either g Šp Λ, for some Λ ⊆ ∆, or f Š g only via ∆.) We could revise our characterization to rule out this sort of redundancy if we liked, but doing so would again complicate our treatment unnecessarily. Again, questions about irreflexivity, asymmetry, and transitivity are inapt here, given that the two relata of the set version of the purely-in-virtue-of relation are of different types. But it follows trivially from our characterization that this relation is strongly irreflexive and strongly asymmetric.

Proof of the strong irreflexivity of f Šp Γ. Suppose f Šp Γ ∧ f ∈ Γ. Then f Š Γ ∧ f ∈ Γ. But that's been proved impossible. ◻

Proof of the strong asymmetry of f Šp Γ. Suppose f Šp Γ ∧ g ∈ Γ ∧ g Šp ∆ ∧ f ∈ ∆. Then f Š Γ ∧ g ∈ Γ ∧ g Š ∆ ∧ f ∈ ∆. But that's been proved impossible.
◻

We can also show that the set version of the purely-in-virtue-of relation is strongly transitive.

Proof of the strong transitivity of f Šp Γ. Suppose f Šp Γ ∧ g ∈ Γ ∧ g Šp ∆. Then f Š Γ and g Š ∆, which means, by strong transitivity, f Š (Γ − {g}) ∪ ∆. Furthermore, every f-sequence has an extension that contains some member of Γ, and every g-sequence has an extension that contains some member of ∆. Suppose σ is some f-sequence. Then σ has an extension containing some h ∈ Γ. If h ≠ g, then h ∈ Γ − {g}, which means h ∈ (Γ − {g}) ∪ ∆. So σ has an extension that contains some element of (Γ − {g}) ∪ ∆. Suppose, then, that h = g. Then σ has an extension containing g. Let σ′ be the sequence we get by removing g and every element after g from that extension. Now, let τ be a g-sequence. (We know there's such a sequence—∆, by definition, must be nonempty, and so there's some d such that g Š d.) Then τ has an extension—call it τ′—that contains some member of ∆. The sequence we get by concatenating σ′ and τ′ is, by construction, an extension of σ that contains some element of ∆, which means, a fortiori, that it's an extension of σ that contains some element of (Γ − {g}) ∪ ∆. So any f-sequence has an extension containing a member of (Γ − {g}) ∪ ∆. So f Šp (Γ − {g}) ∪ ∆. ◻

a3 Proving our claims

In the foregoing discussion of the objection from worldly fact I flagged three claims on which I was relying and noted that they could be shown to be true in the logical system developed in this appendix. In this section I show that those claims are indeed provable given the above characterizations of the in-virtue-of relations. First up are the two assumptions about the logic of the in-virtue-of relations to which I appealed in §3 above, in my reconstruction of the objection from worldly fact. The first of these was the assumption that a fact plays a role in explaining another fact just in case the former plays a role in a full explanation of the latter.
And the second was the assumption that there can be no explanatory overdetermination, in the following sense: when f Šp Γ and f Šp ∆, there can be no differences between Γ and ∆ except those that result from moving up and down the explanatory hierarchy—i.e., from replacing a fact g in one set with some other fact(s) that can play g's role(s) in explaining f. Here I give a precise statement and a proof of each of these assumptions.

Full Explanation Thesis. For any f and g, f Š g just in case, for some Γ such that f Šp Γ, g ∈ Γ.

Proof. First direction: Suppose f Š g, and let Γ = {h ∶ f Ši h}. Then ∀h ∈ Γ ∪ {g}(f Š h). So Γ ∪ {g} is a set such that g ∈ Γ ∪ {g} and f Š Γ ∪ {g}. Furthermore, every f-sequence contains some member of Γ ∪ {g}, since Γ ∪ {g}, by construction, contains every fact h such that f Ši h. So, a fortiori, every f-sequence has an extension containing some member of Γ ∪ {g}. So f Šp Γ ∪ {g}, since we've already determined that f Š Γ ∪ {g}. So Γ ∪ {g} is a set such that f Šp Γ ∪ {g} and g ∈ Γ ∪ {g}, which means there exists such a set.

Second direction: Suppose there's some Γ such that f Šp Γ and g ∈ Γ. Then f Š Γ, which means that f Š g. ◻

No Overdetermination Thesis. For any f, Γ, and ∆ such that f Šp Γ and f Šp ∆, if g is a fact such that g ∈ Γ, then either g ∈ ∆, f Š g only via ∆, or there's some Λ ⊆ ∆ such that g Šp Λ.

Proof. Suppose f Šp Γ, f Šp ∆, and g is some fact such that g ∈ Γ. Suppose further that g ∉ ∆ and that ¬(f Š g only via ∆). Since f Šp Γ, we know that f Š Γ, which means f Š g, since g ∈ Γ. So, since f Š g and ¬(f Š g only via ∆), we know, by definition, that there's some σ ∈ Σf containing g such that no members of ∆, except possibly f and g, are in σ. But f ∉ ∆, by strong irreflexivity, since f Šp ∆. And g ∉ ∆, by supposition. So no member of ∆ is in σ. Now, since f Šp ∆, every f-sequence has an extension containing some member of ∆.
So let σ− be the sequence we get by removing g and every element after g from σ, and let τ be any g-sequence. Then the sequence we get by concatenating σ− and τ is an f-sequence, which means it has an extension containing some member of ∆. But that means τ itself has an extension containing some member of ∆, since no member of ∆ is in σ−. Every g-sequence, then, has an extension containing some h ∈ ∆. And that means every g-sequence has an extension containing some h ∈ ∆ such that g Š h, since g ∉ ∆. So, if Λ = {h ∶ h ∈ ∆ ∧ g Š h}, then every g-sequence has an extension containing some member of Λ. Furthermore, σ, since it's an f-sequence, has an extension—call it σ′—containing some member of ∆. But since no member of ∆ is in σ, the member of σ′ that's in ∆ must appear after g in σ′. That means there's at least one h such that h ∈ ∆ and g Š h. And Λ, by construction, contains every such h. So Λ is a nonempty set such that ∀h(h ∈ Λ → g Š h), which means g Š Λ. Putting these results together: g Š Λ and every g-sequence has an extension containing some member of Λ. So g Šp Λ. And that means Λ is a set such that Λ ⊆ ∆ and g Šp Λ, in which case there certainly exists some such set. So, for any f, Γ, and ∆ such that f Šp Γ and f Šp ∆, if g is a fact such that g ∈ Γ, then if g ∉ ∆ and ¬(f Š g only via ∆), there's some Λ ⊆ ∆ such that g Šp Λ. Or, equivalently, for any f, Γ, and ∆ such that f Šp Γ and f Šp ∆, if g is a fact such that g ∈ Γ, then either g ∈ ∆, f Š g only via ∆, or there's some Λ ⊆ ∆ such that g Šp Λ. ◻

The final claim to be proved in this section takes a bit of setting up. In §3 above I reconstructed the objection from worldly fact and showed that, on my reconstruction, the argument is valid, and then in §4 I claimed that the argument remains valid even if we revise the reconstruction by replacing one of the premises—i.e., (4)—with something strictly weaker.
Here I reproduce the revised reconstruction, this time using the notation introduced in this appendix, and then I prove that the argument, on that reconstruction, is indeed valid. Recall: v is the fact that all vixens are foxes, and C is the set of all facts about what linguistic conventions are in place. So if we let s be the fact that the sentence "All vixens are foxes" is true, our revised reconstruction is as follows:

(1) v ∉ C
(2) ∀Γ(Γ ⊆ C → ¬(v Šp Γ))
(3) s Š v
(4−) ∀f(f ∈ C → ¬(s Š f ∧ f Š v))
∴ (C) ∀Γ(Γ ⊆ C → ¬(s Šp Γ))

What we've got to prove, then, is just that this argument is valid.

Replacement Validity Thesis. (C) follows from (1), (2), (3), and (4−).

Proof. Suppose (1), (2), (3), and (4−) are all true, and let Γ be any set such that s Šp Γ. By (3), s Š v, which means, by the Full Explanation Thesis, that there's some ∆ such that s Šp ∆ and v ∈ ∆. So, by the No Overdetermination Thesis, either v ∈ Γ, s Š v only via Γ, or there's some Λ ⊆ Γ such that v Šp Λ. We proceed by cases.

If v ∈ Γ, then Γ ⊈ C, since, by (1), v ∉ C.

If s Š v only via Γ, then s Š v and every s-sequence that contains v also contains some f such that f ≠ s, f ≠ v, and f ∈ Γ. So, a fortiori, every s-sequence whose final element is v contains some f such that f ≠ s, f ≠ v, and f ∈ Γ. That is, for any σ ∈ Σs whose final element is v, some f ∈ Γ appears between s and v in σ, which is to say that s Š f ∧ f Š v. Furthermore, there's at least one such σ, since s Š v. So there's at least one f ∈ Γ such that s Š f ∧ f Š v. This f, though, can't be in C, since, by (4−), f ∈ C → ¬(s Š f ∧ f Š v). So Γ ⊈ C.

If there's some Λ ⊆ Γ such that v Šp Λ, then Λ ⊈ C, since, by (2), Λ ⊆ C → ¬(v Šp Λ). So Γ ⊈ C, since Λ ⊆ Γ.

No matter which case we're in, then, Γ ⊈ C. So, for any Γ such that s Šp Γ, Γ ⊈ C. Or, equivalently, ∀Γ(Γ ⊆ C → ¬(s Šp Γ)), which is just to say that (C) is true.
◻

Chapter 3
How to Be a Conventionalist: Inference, Truth, and Error

One must seek the truth within—not without.
—Hercule Poirot

1 Finding a metasemantics

Our task in this chapter is to provide a respectable metasemantic theory according to which conventionalism is true. And we've just seen that, for conventionalism to be true, it must be the case that facts about our linguistic conventions fully explain, in some nonpragmatic and noncausal way, certain facts about the world, such as the fact that all vixens are foxes. What we need to provide, then, is a theory that implies that our conventions have the power to (in some sense) make the world one way rather than another. This seems a tall order, especially if approached directly. After all, we were in Chapter 2 unable even to work out exactly what sort of explanatory relation is needed here (though we did manage some degree of clarification). And that means we're not yet entirely sure what we're looking for. Our task, then, seems impossible: how are we to go about constructing a theory that meets our requirements if we're unable even to say what those requirements are?

It turns out, though, that we can avoid this difficulty by taking a more roundabout approach. It's straightforward to show, after all, that conventionalists are committed to certain metasemantic doctrines that are already (relatively) well understood—we can be sure, to take just one example, that they're committed to some species of the sort of use-based understanding of meaning endorsed by Ludwig Wittgenstein (1958), since linguistic conventions are just conventions about what to do with particular expressions in various situations.
So, rather than constructing a new theory from the ground up and hoping it has whatever features turn out to be required, we can start with a simpler task: we can examine in detail these metasemantic doctrines, paying special attention to their implications for conventionalists' epistemological project, for the purpose of working out just what needs to be added to these doctrines if conventionalism is to do the epistemological work it's intended to do. The result will be a firmer understanding of exactly what a conventionalist metasemantic theory will have to look like.

This strategy would be attractive even if its benefits were minimal—at the very least, it gives us a place to start, which we didn't have before. As we'll see, though, the strategy turns out to pay enormous dividends. In fact, examination of two doctrines to which conventionalists are committed—the aforementioned use theory of meaning along with a particular kind of deflationary understanding of truth (and other truth-theoretic notions such as denotation)—reveals that nothing needs to be added to them: on the assumption that both of these doctrines are true, it's guaranteed that linguistic conventions will be able to play the epistemological role conventionalists need them to play. Or so I'll argue. And if I'm right, then these doctrines together entail that conventions have the power to make the world one way rather than another—one of the lessons of Chapters 1 and 2, after all, was that conventions can play the epistemological role required of them only if they have this power. Taken together, then, these two doctrines just are a conventionalist metasemantic theory.

In addition, both doctrines are respectable; a look at the literature reveals that each is a mainstream view with a fair share of defenders. Furthermore, their conjunction is respectable.
In fact, the two doctrines are natural bedfellows—deflationists tend to be use theorists and vice versa (though there are exceptions, of course). If all that is right, then no further theory construction is needed: in conjoining these two doctrines, we've already provided a respectable conventionalist metasemantic theory.

Now, I want to be clear here: when I say that our two doctrines are respectable, my claim is not that their conjunction is something everyone should accept. I haven't attempted here to give conclusive reason to accept these doctrines. In fact, I haven't directly argued for them at all, nor will I. Our project here, remember, is largely defensive: the goal is just to show that, contrary to popular opinion, conventionalism remains a live option. (Recall that our conclusion in Chapter 1 was that, if conventionalism remains a live option, it's worth pursuing simply because there isn't any other promising strategy for vindicating our apparently a priori beliefs. My argument for conventionalism goes via this claim—it doesn't go via any direct argument that use theory and deflationism are independently more plausible than their rivals.) So the fact that our two doctrines are seen as at least worthy of serious investigation is enough for our purposes; we don't need to rely on the additional premise that these doctrines are in fact correct.

At any rate, the result here—that we can get a version of conventionalism just by conjoining these two respectable metasemantic doctrines—is surprising, to say the least. So it's worth working carefully through the reasoning leading to that result. My ultimate goal in this chapter, then, will be to show that any theory on which our two doctrines are true is thereby a theory on which linguistic conventions can play the epistemological role conventionalists need them to play.
To begin, though, we need to make precise what our two doctrines are and why conventionalists are committed to them—both use theory and deflationism, after all, come in quite a few flavors, and it will be important, as we discuss the implications of these doctrines, to have a clear picture of just which versions of them are on the table.

2 Conventionalists' inferential role semantics

As suggested above, the basic reason conventionalists are committed to a use theory is simple: linguistic conventions are just conventions governing how to operate with particular linguistic expressions in various situations. But we can be more specific than this. The particular conventions that are relevant for our purpose are conventions for sentence acceptance—what's required for a sentence to be true by convention, on any plausible version of conventionalism, is for speakers to accept that sentence as a matter of (implicit or explicit) linguistic convention. Conventionalists, then, are committed to the claim that the conventions governing when to accept a given sentence can somehow fix that sentence's truth conditions. So conventionalists need a particular kind of use theory, one on which a sentence's truth-theoretic properties (i.e., its truth conditions along with associated properties such as the denotations of the expressions contained in it) are determined by the conventions governing that sentence's use in reasoning. That is, conventionalists need a particular sort of inferential role semantics.1

Of course, if the conventions governing how sentences are to be used in inference are to have a role to play in fixing those sentences' truth-theoretic properties, it must be possible for there to be inferences involving sentences whose truth-theoretic properties haven't antecedently been fixed.
And that means a notion of inference is needed that's syntactic: an inference must be understood as a transition among items (i.e., sentences) individuated not by their contents but by their forms.2

I also want to emphasize that the relevant conventions here are normative (as, plausibly, all conventions are)—they can be understood as rules of inference, rules to which speakers must conform if they're to be using the language correctly. To infer—here I borrow the Wittgensteinian figure—is to move among positions in a language game, and the conventions are the rules of that game. These rules govern the transitions among sentences: each rule, we can suppose, directs the speaker to accept a sentence of a particular form if she already accepts some other sentences of a particular form.

1 There are lots of different flavors of inferential role semantics, but we won't be surveying them here—we'll restrict our attention to the sort needed by conventionalists. (For a helpful overview of the full menu of options, see Julien Murzi and Florian Steinberger 2017: §1.)

2 It has been suggested—by, e.g., Paul Boghossian (2014)—that inferences are operations on contents, not symbols, and that it follows immediately from this that inferential role semantics (as usually presented) is based on a confusion. This may or may not be correct, but even if it is, it doesn't cause any problems for conventionalism. Conventionalists need not insist on calling the relevant syntactic transitions inferences, after all; what's required is just that such transitions are possible, which they clearly are. (I'll continue to describe these transitions as inferences; readers unhappy with this choice may regard my use of the terminology as stipulative.)
(Of course, the attitude of supposition must also be accommodated—speakers will need to be able to trace out the implications even of sentences they haven't genuinely accepted, and so they'll need to use the rules even when they're engaged in suppositional reasoning.3) And these rules will have to be rules of obligation, not merely rules of permission—for a sentence to be true by convention, speakers of the language must be, in some sense, committed to accepting that sentence.4

A convenient feature of conventions with these properties is that they can be represented in a natural deduction calculus.5 Consider, for instance, the following rules governing inferences essentially involving the connective and:

    A       B
    ----------- and-I
    A and B

    A and B
    ----------- and-E1
    A

    A and B
    ----------- and-E2
    B

The first of these directs speakers to accept a sentence of the form ⌜A and B⌝ when they accept A and B. Rules of this kind, that tell us in what circumstances to accept a sentence containing a particular word—i.e., rules in which the word in question appears in the sentence in the conclusion position—are introduction rules, or I-rules, for that word. And the other two rules here direct speakers to accept both A and B when they accept a sentence of the form ⌜A and B⌝. Rules of this kind, that tell us what to do when we accept a sentence containing a particular word—i.e., rules in which the word in question appears in a sentence in the premise position—are elimination rules, or E-rules, for that word. (Notice that, given these characterizations of introduction and elimination rules, it's possible for a single rule to

3 Other attitudes may need to be accommodated as well: if, for instance, rejection of a sentence is correctly understood not as acceptance of its negation but instead as a distinct attitude on a par with acceptance—as suggested by, e.g., Timothy Smiley (1996), Ian Rumfitt (2000), and Greg Restall (2005)—then the rules will have to govern sentence rejection as well as sentence acceptance.
4 There isn’t space here for a full account of just what this commitment comes to, but I want to acknowledge that any such account will have to deal with various niceties—as Gilbert Harman (1986) has noted, for instance, it would be neither possible nor desirable to explicitly accept every implication of things we already accept, and so conventionalists had better not claim that we’re required to do so. 5 This is no accident: Gerhard Gentzen’s work on natural deduction (see his 1934/1964) is where the idea of inference rules as implicit definitions comes from in the first place. 82 count as both an I-rule and an E-rule for the same word.) Plausibly, our conventions for rea- soning with the word and are given by these three basic introduction and elimination rules.6 (We may well reason in accordance with other rules for and, of course, but these three, plau- sibly, are the basic ones, with the others being derived from them.) In general, we’ll call the conventions governing our reasoning with a given word the meaning-conferring rules for that word. It has often been pointed out that an inferential role semantics of roughly this sort seems reasonable as an approach to the logical connectives in particular.7 But conventionalists don’t in general want to limit truth by convention to logical truths, and so they need an inferential role semantics that’s more general than this. It’s not particularly difficult to be- gin to see how such a theory will work. Inferences essentially involving the nonlogical word vixen, for instance, plausibly are governed by the following rules: x is female x is a fox vixen-I x is a vixen vixen-E1 x is a vixen vixen-E2 x is a vixen x is female x is a fox And the rules governing inferences involving other nonlogical words will work in a broadly similar way. 
For conventionalists, then, the truth-theoretic properties of any word of our language are going to be fixed by the conventions governing how we reason with that word—i.e., by the word's basic introduction and elimination rules. So a language (or, at least, the part of the language that's used for making truth-apt claims) can be specified by just three sets: a set of words, each of which is a symbolic form (in a written language, a sequence of letters) plus membership in exactly one grammatical category; a set of grammatical rules that together tell us how to construct well-formed sentences by combining members of the different grammatical categories; and a proof system, a set of inference rules consisting of the I-rules and E-rules for the words in the language.8

But there's a caveat. While this three-set specification includes everything one would need to know in order to use the language specified, there plausibly are facts about the language that can't be captured by such a specification. Consider, for instance, vixen-I, the above introduction rule for vixen.

6 It's not clear that, in reality, our conventions for reasoning with and are as simple as this. But that's not a problem: all that's needed here is a toy model of the way language actually works. As long as there are conventional rules of inference that function something like natural deduction rules, it's not going to matter, for our purposes, how complicated those rules are.

7 Here, for instance, is noted opponent of inferential role semantics Jerry Fodor, explaining the reasoning (before going on to reject it): "Since [the logical constants] are, plausibly, not referring expressions, it might be that an account of the rules that determine their conceptual roles is the whole story about why they mean what they do and what it is to understand them" (2004: 40).
Both female and fox appear in sentences in that rule's premise position, and so, according to our characterization of an elimination rule, vixen-I counts as an elimination rule for both female and fox. Plausibly, though, this rule, while it's meaning-conferring for vixen, is not meaning-conferring for either female or fox—presumably, the truth-theoretic properties of female aren't affected by whether the word vixen is in the language. But nothing in the three sets that make up our specification can capture this asymmetry.

Cesare Cozzo (1994) provides a solution: we can capture the asymmetry by including in our specification a transitive "relation of presupposition" whose relata are words in the language. This relation allows us to include in our specification the information that (e.g.) vixen presupposes (i.e., depends on) female and fox but that neither female nor fox presupposes vixen. And we can use this information to specify which of the language's rules are meaning-conferring for a given word, as follows: for any word w, a rule in which w appears is meaning-conferring for w just in case every word that appears in that rule is either w itself or presupposed by w. This gives us the right result—vixen-I turns out to be meaning-conferring for vixen but not for female or fox. (The availability of this sort of presupposition relation is important for reasons I'll discuss in Chapter 4 below.)

Before we move on, one last thing needs to be discussed: the connection, on this view, between our language and the extralinguistic world. We have so far discussed only transitions among sentences, but an adequate specification of the truth-theoretic properties of our expressions will have to be more general than this.

8 Note that these two sets of rules correspond, respectively, to the "rules of formation" and the "rules of transformation" in Rudolf Carnap's (1934/1937) system.
Our rules will have to govern, not only intralanguage moves, but also world-language moves and language-world moves—"language entry transitions" and "language departure transitions", in the terminology introduced by Wilfrid Sellars (1954). But how are the rules governing world-language inferences and language-world inferences supposed to work? What sorts of nonlinguistic items can appear in these rules' premise and conclusion positions?9

It's clear enough that perceptual experience is important here: some sentences are accepted not on the basis of other sentences one accepts but on the basis of perceptual experience, and conversely, particular sorts of perceptual experience are sometimes expected on the basis of sentences one accepts. The question is what sorts of rules for moving between perceptual experiences and sentences may properly be regarded as conventional. Must the rules in question tell speakers only how to operate with perceptual experiences internalistically described? Or might they also be able, for instance, to direct a speaker to accept the sentence "That's a fox" when she sees a particular sort of external object (i.e., a fox)?10

We can answer this question by thinking again about conventionalists' epistemological project. What's needed, recall, is an explanation of our near-perfect reliability in certain domains, and as we saw in Chapter 2, the explanation, according to conventionalists, is that there's no way for our conventions to lead us into error—sentences we accept as a matter of convention are somehow guaranteed to be true. But notice: this explains our near-perfect reliability only if our ability to reason in accordance with our conventions is itself nearly perfect. (A thinker who routinely fails to reason in accordance with her conventional rules of inference may be highly unreliable even if the rules themselves are guaranteed to lead to the truth.)
In order for the conventionalist story to be adequate, then, our conventional rules must be such that there's nothing mysterious about how we can reason in accordance with them without error. So, if a conventional rule is to direct us to accept a sentence in particular circumstances, we must be able to discriminate, with near-perfect accuracy, cases in which we're really in those circumstances from cases in which we aren't.11

What this suggests is that the perceptual experiences with which our conventions direct us to operate must, for conventionalists, be internalistically described: otherwise—if, say, a conventional rule could direct a speaker to accept "That is a fox" just when she saw a fox—we would be unable to determine with near-perfect accuracy whether we were really in the relevant circumstances, since we would be unable to rule out the possibility of hallucination or illusion. And in that case, even our rules' being guaranteed to lead to the truth wouldn't be sufficient to explain our reliability in the relevant domains, since we would have no explanation of our ability to reason in accordance with those rules without error.

9 In order to emphasize that world-language moves and language-world moves are on a par with intralanguage moves, I'll be describing transitions of all three sorts as inferences. In addition, I'll sometimes describe the nonlinguistic items that figure in these transitions as premises and conclusions. I recognize that I am abusing the language here. Again, readers unhappy with these choices may regard my use of this terminology as stipulative.

10 In Ned Block's terminology: are inferential roles short-armed, in that they "stop at the skin in sense and effector organs", or are they long-armed, in that they "reach out into the world" (1998: 654)? (For a defense of long-armed inferential roles, see Harman 1987/1999.)
There surely is a great deal more to be said about the sort of inferential role semantics to which conventionalists are committed, but what I've said here will be enough for our purposes, at least for the moment. We can now move on to considering conventionalists' commitment to a certain sort of deflationary understanding of truth and related notions.

3 Conventionalists' deflationism

According to the inferential role semantics just described, the truth-theoretic properties of our expressions are fixed by certain of our conventional rules: those governing inferential transitions among sentences (i.e., intralanguage transitions) as well as those governing transitions between sentences and (internalistically described) perceptual experiences (i.e., language entry and departure transitions). But this is just a thesis about how our expressions come to have the truth-theoretic properties they do—it doesn't yet tell us anything about what it is for an expression to have the truth-theoretic properties it does. Our next question, then, is what conventionalists should say about what it is for a word to denote a particular thing, what it is for a sentence to be true, etc. Unsurprisingly, there are restrictions on what conventionalists may say here.

11 Note: this doesn't entail that we must have any conscious thoughts whatsoever about whether we're in the relevant circumstances. What's required is just that we're able to respond differently to cases in which we're in the relevant circumstances than we do to cases in which we're not.
And we can begin to see what those restrictions are by considering, for contrast, a theory that's compatible with our inferential role semantics but isn't compatible with conventionalism—the view, a version of which was discussed in Chapter 2 above, that the way in which our conventional rules of inference for an expression fix that expression's truth-theoretic properties is by picking out, from an antecedently existing pool of possible truth-theoretic properties, whatever ones the expression must have in order for the rules to be truth-preserving.

For concreteness, let's look at a particular version of this view, one on which what's fixed in this way is, in the first instance, what a given expression denotes.12 On this version, what happens when we accept (for example) the word and—with its attendant I- and E-rules—into our language is that a denotation is picked out for and in such a way that these rules turn out to be truth-preserving. That is, a denotation is assigned to and in such a way that (i) if A is true and B is true, then ⌜A and B⌝ is true; (ii) if ⌜A and B⌝ is true, then A is true; and (iii) if ⌜A and B⌝ is true, then B is true. (Of course, a denotation can be fixed in this way only if the world makes available some candidate denotation that would meet these conditions. In the case of and, the thought goes, there is such a denotation available: the conjunction function.)

The view here is that denotation is fixed via something like a metaphysical version of a charity principle: when the semantic gods are handing out denotations, they must do their best to make it the case that, when speakers of the language infer in accordance with the language's meaning-conferring rules, their inferences will be correct.
What’s important about 12 This view is a linguistic analog of Christopher Peacocke’s (1992) theory of concepts—to possess a given concept, on Peacocke’s view, is (roughly) to find beliefs (or inferences) of particular forms “primitively com- pelling”, and a concept’s denotation is (roughly) whatever it needs to be for the these primitively compelling beliefs to be true (or for these primitively compelling inferences to be truth-preserving). 87 this procedure, for our purposes, is that the standard of correctness for inferences on which the gods are relying (i.e. the preservation of truth) is distinct from the standard of correct- ness discussed in the previous section—we noted there, recall, that linguistic conventions are normative, that they’re rules of inference to which speakers must conform in order to be using the language correctly. So there are now in play two distinct senses in which a given inference can be correct: it’s correct in the first sense just in case it’s sanctioned by our meaning-conferring rules, and it’s correct in the second sense just in case it’s truth- preserving. Call these two sorts of correctness rule-correctness and truth-correctness, respec- tively. The question I want to ask is what relationship there is between rule-correctness and truth-correctness. Now, the denotation-fixing view under discussion is supposed to tell us something sub- stantive, by appeal to truth-correctness, about how the denotations of our expressions are fixed. But it’s clear enough that, in order for the view to be anything more than a truism (i.e., in order for for there to be any work for the semantic gods to do), our theory of truth and related notions must be such that it doesn’t follow just from that theory that, when an inference is rule-correct, it’s also truth-correct—the question of whether the inference is truth-correct must remain open. 
To see what I mean here, consider how the denotation-fixing view interacts with the following toy theory of truth on which that question does not remain open:

Toy Theory. A sentence is true if and only if it's provable (from an empty premise set) via the language's meaning-conferring rules.

Given this theory of truth, we can easily show that all meaning-conferring rules are truth-preserving, so that any inference, if it's rule-correct, will also be truth-correct. Here's how this works for and-I: Suppose A is true and B is true. Then, by our Toy Theory, both A and B are provable (from an empty premise set) via the language's meaning-conferring rules. That is, we have proofs D and E of A and B, respectively:

    D        E
    A        B

And we can append to these proofs an application of and-I to generate a proof of ⌜A and B⌝:

    D        E
    A        B
    ------------ and-I
    A and B

So, by our Toy Theory again, ⌜A and B⌝ is true. Discharging our supposition, then, we have that, if A is true and B is true, then ⌜A and B⌝ is true as well. So condition (i)—i.e., the fact that and-I is truth-preserving—is trivial, in the sense that no cooperation from the convention-independent world is required in order for that condition to be met. And we can easily show something similar for (ii) and (iii). In fact, we can easily show any meaning-conferring rule to be truth-preserving using the same strategy. The Toy Theory, after all, entails that any sentence that can be derived (via our meaning-conferring rules) from true premises is itself true: From the fact that the premises are true, it follows, by the Toy Theory, that there are proofs of all of them, which means we can generate a proof of the sentence in question by appending to those proofs our derivation of that sentence from the premises. So the sentence in question is provable, which means it's true, by the Toy Theory again.
And it follows immediately from this that all meaning-conferring rules are truth-preserving: the conclusion of a meaning-conferring rule can be derived from the premises in one step, via the rule itself. The Toy Theory, then, reduces the denotation-fixing view to a truism: the meaning-conferring rules for (e.g.) and are truth-preserving regardless of what denotation is assigned to and—indeed, regardless of whether and even has a denotation—which means that, if the semantic gods' job is to ensure that rule-correct inferences will also be truth-correct, there's nothing left for them to do.

What this tells us is that the denotation-fixing view, insofar as it has any substance, carries with it a commitment to some theory other than our Toy Theory—there must remain some work for the semantic gods to do. The significance of this result may be difficult to see at first, since it's clear that the Toy Theory is not to be taken seriously as a theory of truth anyway: it fails to deliver various platitudes about truth. For example, it fails to deliver certain (nonparadoxical, grounded) instances of the T-schema—i.e., "S" is true if and only if S—and so fails to be materially adequate.13 If a sentence is true if and only if it's provable, after all, it immediately follows that any sentence that can't be proved is not true, even if it can be accepted on other grounds. (Consider, for example, a sentence with empirical content, such as "There are vixens". Because the fact that there are vixens requires empirical confirmation, that sentence is, according to the Toy Theory, not true—there's no way of proving the sentence from an empty premise set via our meaning-conferring rules, since we'll always need to have perceptual experiences operating as premises.)

Note, though, that the Toy Theory is a biconditional, separable into the following two claims:

(a) If S is true, S is provable.
(b) If S is provable, S is true.
And the source of the theory’s material inadequacy is the left-to-right claim (a). Further- more, while our above demonstration that the Toy Theory reduces the denotation-fixing view to a truism does rely on (a), that claim’s only purpose in the demonstration is to allow us to move from the original right-to-left claim (b) to the following slightly strengthened version of that right-to-left claim: (b+ ) If S is derivable from true premises, S is true. This is the crucial entailment of Toy Theory, the one in virtue of which that theory trivi- alizes the denotation-fixing view: since there will always be a one-step derivation from the premises of a meaning-conferring rule to its conclusion—that is, the derivation that consists of a single application of that very rule—(b+ ) ensures that any meaning-conferring rule will 13 See Alfred Tarski 1944 for discussion. 90 be truth-preserving. (In fact, (b+ ) is just equivalent to this latter thesis.14 ) But if that’s right, then any theory that entails (b+ ) will trivialize the denotation-fixing view, even if it doesn’t also entail (a) and so doesn’t carry along with it the problematic commitments of the Toy Theory. So, there’s no bar in principle to a theory that is materially adequate (and able to de- liver all other platitudes about truth) but that nevertheless turns the denotation-fixing view into a truism. Our above result, then, can be generalized: it turns out that the denotation- fixing view, insofar as it has any substance, rules out not only our Toy Theory but any theory that entails (b+ ), which means it may well rule out theories that are otherwise unproblem- atic. To see what all this has to do with our larger discussion here, note that it’s in virtue of the very commitment just described that the denotation-fixing view (again, insofar as it has any substance) is incompatible with conventionalism. 
Conventionalists, recall, are in need of a theory on which it’s guaranteed that our linguistic conventions won’t lead us into error. And the reason the denotation-fixing view can offer no such guarantee is precisely that, on that view, it’s not trivial that rule-correct inferences will also be truth-correct. In order for a rule-correct inference to turn out to be truth-correct, on the denotation-fixing view, additional work must be done: connections of the right sort must be forged between the convention-independent world and the word whose meaning is conferred by the rule in question. And it can be guaranteed that this work will be completed successfully only if it’s guaranteed that the convention-independent world will make available for that word a candidate denotation with the right features. But there’s no naturalist-friendly way of explaining how we could possibly have an a priori guarantee that the convention-independent world will always make available a candidate denotation of the needed sort. So, if conventionalism is to do the epistemological work it’s intended to do, conventionalists need a theory on which no such guarantee of worldly cooperation is required in order for it to be guaranteed that rule-correct inferences will also be truth-correct. That is, they need a theory that’s like our Toy Theory in entailing (b+).15 The contrast between conventionalism and the denotation-fixing view, then, can be put as follows.
14 I’ve already explained why (b+) entails that all meaning-conferring rules are truth-preserving. To see why the entailment runs in the opposite direction as well, note that any derivation is composed of steps each of which is an application of a meaning-conferring rule. By the thesis that all meaning-conferring rules are truth-preserving, then, each step is truth-preserving, which means the derivation as a whole is truth-preserving. So, if the premises of a derivation are true, then the conclusion is true as well.
The denotation-fixing view requires a theory on which what it is for an inference to be truth-correct (i.e., what it is for a sentence to be true—truth-correctness is, after all, defined in terms of the truth of sentences) is independent of what it is for an inference to be rule-correct: while rule-correctness is just a matter of what conventional rules are in place, the truth of a sentence is a matter of the convention-independent world being arranged in a particular way, and so work must be done to ensure that the class of rule-correct inferences will be a subclass of the class of truth-correct inferences. What conventionalism requires, though, is just the opposite: a theory on which, as a matter of what it is for a sentence to be true, it’s already guaranteed that all rule-correct inferences will be truth-correct, so that no such work needs to be done. That is, conventionalists are committed to some theory of what it is for a sentence to be true from which it just follows that, whatever the convention-independent world is like, our meaning-conferring rules won’t lead us into error. And that means they need a theory on which truth is not an independent sort of correctness—if it were, there would be no way to rule out the possibility of a sentence failing to be true despite being correct in the sense of being provable via our meaning-conferring rules. Instead, those rules must themselves be the ultimate arbiters. Truth must be derivative of the rules—for the rules to require a sentence’s acceptance must be sufficient (whatever the convention-independent world is like) for that sentence to be true. At this point we’ve arrived, finally, at conventionalists’ commitment to a deflationary understanding of truth and related notions.
As it turns out, the claim that truth is not an independent sort of correctness entails what Brandom (2002: 117) calls “global explanatory deflationism” about truth—i.e., it entails that “the notion of truth, and hence of truth conditions, [are not to be treated] as explanatory raw materials suitable for use in explaining what it is for a sentence to mean something”. To see why, note that an account of what it is for a (declarative) sentence to mean something will have to be, at least in part, an account of what it is for a sentence to be true—a sentence’s content, after all, is whatever it is that a thinker commits herself to when she accepts that sentence, and it’s a truism that to accept a sentence is to take it to be true, to commit oneself to its truth. So, on any account of sentential truth on which truth isn’t an independent sort of correctness but is derivative of our conventional rules of inference, the notion of truth can’t be a “raw material” in an account of what it is for a sentence to mean something—meaning is instead ultimately to be accounted for by appeal to the notion of a conventional rule of inference (or perhaps to the notions in terms of which this latter notion is itself to be understood). This bit of argument, though relatively simple, stands in need of some clarification. I end this section, then, with a discussion of what it does and doesn’t show.
15 Or, to put the point more simply: conventionalism is incompatible with the denotation-fixing view because that view carries with it a commitment to a theory on which the truth of any given sentence (even, say, a sentence of logic) requires cooperation from the convention-independent world, while a sentence that’s true by convention, as we saw in Chapter 2, must be one whose truth does not require any such cooperation.
First, it’s worth emphasizing, given the wide variety of theories that are considered flavors of deflationism, that our argument here doesn’t show conventionalists to be committed to any of the following theses: that “‘A’ is true” (or “It’s true that A”) is nothing more than a stylistic variant of “A” (see, e.g., Frank Ramsey 1927); that truth isn’t a genuine property, either because the word true doesn’t function grammatically as a predicate (see, e.g., Dorothy Grover, Joseph Camp, and Nuel Belnap 1975, Brandom 1994) or for any other reason; that our understanding of truth is exhausted by the T-schema (see, e.g., Hartry Field 1994), by some propositional analog (see, e.g., Paul Horwich 1998a, 1998b), or by some quantificational generalization of the latter (see, e.g., Christopher Hill 2002, 2014); or that the only function of the word true is to allow us to endorse sentences we can’t assert directly, “e.g. because we do not know exactly what they are, or because there are too many of them” (Michael Williams 2002: 147; see also, e.g., W. V. O. Quine 1970). What the argument shows is only that conventionalists are committed to a certain sort of deflationism about the theoretical role of truth—i.e., to what Field (1994: 252) calls “a deflationist view about the role of truth conditions in meaning or content”.16 This thesis, though, still needs to be made more precise. What roles, exactly, is truth precluded from playing, on the sort of deflationism we’re interested in?
Brandom’s talk of “raw materials” suggests that what’s precluded is just an account of meaning in which truth appears as a primitive, but this is far too weak—it fails to rule out even views that are paradigmatically nondeflationary (and that obviously aren’t available to conventionalists), such as the view that truth is to be understood in terms of some relation of correspondence between our linguistic expressions and the convention-independent world.17 And at the other extreme, deflationists sometimes suggest that truth can play no part in an account of meaning (except, perhaps, in its role as a device for indirect endorsement of arbitrarily many sentences at once), but, while some deflationists are indeed committed to this thesis, our argument evidently can’t show conventionalists to be committed to anything so strong—it doesn’t rule out views on which truth is accounted for by appeal to conventional rules of inference and meaning is in turn accounted for by appeal to truth. What the argument can show is more limited but still striking: that conventionalists are committed to a view on which a full theory of what it is for a (declarative) sentence to mean something can be given just by appeal to conventional rules of inference, so that, insofar as truth can play a role in such a theory, it can do so only because what it is for a sentence to be true can itself be accounted for just by appeal to conventional rules of inference.
16 Brandom (2002: 117) suggests that this sort of claim about truth’s (lack of) role in explaining meaning is the core deflationist thesis, what we “ought to mean by ‘deflationism’, when it is unqualified by an adjective”, and though there isn’t universal agreement about this suggestion (see, e.g., Matti Eklund 2010), it does enjoy some support: Field (1994: 272), for instance, understands the “deflationist/inflationist contrast…in terms of whether truth conditions play a role in semantics and the theory of content”, and Rebecca Kukla and Eric Winsberg (2015: 26) take deflationism to be “in the first instance” the claim that there isn’t any “explanatory role that the concept of truth and the truth-maker/truth-bearer relationship ought to play in philosophy and kindred disciplines”. (Similarly, Jeremy Wyatt (2016) argues that the claim that truth is non-explanatory is one of two theses either of which is sufficient for a kind of deflationism.) At any rate, some of the other deflationist theses we’ve mentioned entail our thesis about truth’s lack of explanatory role (and have been used to argue for it), but it’s not obvious that the latter entails any of the former.
17 John MacFarlane (2010: 85) makes a similar point, noting that the claim that truth isn’t primitive is “common ground to all parties to the contemporary debate”—even opponents of inferential role semantics such as Fodor don’t “[take] representational notions as unexplained explainers” but instead “attempt to explain them in nonsemantic terms”. (Hence Field’s (1994: 253) observation that “if deflationism is to be at all interesting, it must claim not merely that what plays a central role in meaning and content not include truth conditions under that description, but that it not include anything that could plausibly constitute a reduction of truth conditions to other more physicalistic terms”.)
Or, more simply: conventionalists must embrace what’s sometimes called an inferentialist order of explanation—they must “construe referential relations in terms of inferential ones” and so must “explain the representational dimension of semantic content” by appeal only to proprieties of inference (Brandom 1994: xvi).18 This commitment to an account of truth-theoretic notions in inferential terms—a commitment that is at least implicit in the views of most self-described deflationists and is explicit in the views of theorists working in the pragmatist tradition—is certainly deflationary in a broad sense.19 But it doesn’t rule out all theories of truth that self-described deflationists are concerned to rule out. These deflationists, after all, tend to insist that the right theory of truth is going to be language-internal, that all that we could need or want from a theory of truth is going to be given by an account of how we reason with the natural-language word true. (It’s usually posited further that our use of that word can in turn be explained just by appeal to our acceptance of the T-schema or some variant thereof, but there are exceptions here—see, e.g., Price 2003.20) And this sort of language-internal approach is natural from an inferentialist perspective—if our theory of truth is just given by the rules of inference for our natural-language word true, then it’s certainly the case that our account of what it is for a sentence to be true proceeds by appeal to our rules of inference. The approach, though, isn’t mandatory: inferentialism is also consistent with the view that the nature of truth isn’t captured by our everyday uses of the word true, that the truth predicate we’re really interested in is best understood as an expression of a metalanguage used for theorizing about the object language.
It’s just that this sort of metalinguistic approach, in order to be compatible with an inferentialist order of explanation, must be combined with a theory on which truth is explicitly understood in terms of provability in our system of conventional rules of inference—i.e., a theory in the spirit of our Toy Theory above. (Of course, the Toy Theory itself is inadequate for reasons we’ve already discussed. A theory in its spirit, in order to be successful, would need to allow for the truth of empirical sentences that are justifiable despite not being provable and so would need to account for truth in terms of justification more generally rather than only in terms of proof. Michael Dummett’s justificationism (see, e.g., his 1991, 2002, 2004) has roughly this shape.21) Both the language-internal approach and the metalinguistic approach, then, seem to be compatible with an inferentialist order of explanation, which means that, if we’re just trying to find a conventionalist-friendly metasemantics, we need not choose between them. So we won’t.22 Our purpose in this section, after all, is not to evaluate any particular inferentialist theory.
18 In particular, they must embrace what Brandom calls strong inferentialism: the view that “broadly inferential articulation [where this is taken to include the proprieties of language entry and departure in addition to the proprieties of intralanguage transition] is sufficient to determine conceptual content (including its representational dimension)” (2000: 219–220).
19 One contemporary pragmatist who shares this commitment is, of course, Brandom himself. Some others include Cheryl Misak (see, e.g., her 2007), Jaroslav Peregrin (see, e.g., his 2014), Huw Price (see, e.g., his 2003, 2013), and Williams (see, e.g., his 2002, 2006).
20 Misak 2007 is another exception, though Misak doesn’t call herself a deflationist.
It’s simply to show that conventionalists are committed to a certain sort of deflationary view of the explanatory role of truth-theoretic notions. And we’ve done this. So we can now move on to showing that the tools we’ve developed to this point are enough to generate a conventionalist metasemantic theory.

4 Inferentialists’ conventionalism

Our goal here is to show that any theory that respects the sort of inferentialism to which we’ve shown conventionalists to be committed—i.e., the inferential role semantics we introduced in §2 above combined with the deflationary approach to truth-theoretic notions we’ve just described—just is a theory on which our conventions have the power to make the world one way rather than another.23 And our strategy, once again, is just to show that any theory that’s inferentialist in the relevant sense is one on which conventions can do the epistemological work conventionalism requires them to do: as we saw in Chapter 2, the only theories on which conventions can do this work are theories on which conventions do indeed have the power to make the world one way rather than another, which means that to show that a theory is one on which conventions can do this work is thereby to show that it’s a conventionalist theory. (This sort of indirect argument is, I recognize, less than satisfying. But it will have to do.) Our task, then, is to show that, on any theory that respects the sort of inferentialism we’ve been describing, it’s guaranteed that our meaning-conferring rules won’t lead us into error—i.e., to show that all such theories entail (b+), the thesis that any sentence derivable from true premises (via our meaning-conferring rules) is itself true. And completing this task turns out, surprisingly, to be straightforward. In fact, we’ve already completed a significant part of it. As we’ve seen, the sort of inferentialism under discussion is compatible with either a language-internal theory of truth or a metalinguistic theory that’s like our Toy Theory in accounting for truth by explicit appeal to provability in our system of conventional rules. So, if we can show that theories of both sorts entail (b+), we’ll have completed our task. And we’ve already seen that theories of the latter sort entail (b+). After all, our demonstration in §3 that the Toy Theory renders the denotation-fixing view trivial just was a demonstration that the Toy Theory entails (b+).
21 Strictly speaking, we could also insist that the Toy Theory itself (or something in the neighborhood) is correct as an account of the truth of the sentences we’re interested in—i.e., sentences requiring a conventionalist treatment—but that the truth of other sentences (e.g., empirical ones) is to be given a different sort of account altogether. (Defenders of views with roughly this shape include Michael Lynch (see, e.g., his 2001) and Crispin Wright (see, e.g., his 1992, 2001).) Given our purposes here, we can set aside questions relating to this sort of pluralism and direct our attention only to the truth of those sentences that require a conventionalist treatment. But I do want to register some doubt that a pluralist theory will, in the end, be workable for conventionalists: it’s hard to see how a plausible (compositionality-respecting) story can be told on which the truth of a nonempirical sentence like “All vixens are foxes” consists in something completely different than does the truth of empirical sentences like “All vixens weigh less than a ton” and “All mammals that are members of any species native to Iceland are foxes”.
22 At least, not officially. In the end, though, I’m not convinced the metalinguistic approach is workable at all: it’s not clear that it’s really possible to step outside of natural language in the way required for metalinguistic theorizing to proceed. But see Dummett 2002.
23 Inferentialism isn’t generally taken to be a conventionalist view, but there are exceptions: Murzi and Steinberger (2017: 206), for instance, state explicitly, in their discussion of logical inferentialism, that it’s a kind of conventionalism. (Jared Warren (2015a, 2015c) suggests that inferentialism of the sort we’re interested in is not sufficient for conventionalism—I discuss this claim in Chapter 4 below.)

And it’s easy to see how to extend this result to theories that allow for methods of justification other than proof: on any theory on which inferring via our meaning-conferring rules is one method of transmitting the sort of justification relevant for truth, it will trivially be the case that, whenever the premises of a derivation are justified in the relevant sense, the conclusion is also going to be justified in that sense. What’s left, then, is to show that language-internal theories entail (b+) as well. And this, too, can be shown by a relatively simple argument, as follows. On a language-internal approach, our theory of truth is just given by the conventional rules governing how we reason with the word true. But we already know that such a theory, if it’s to be adequate, must deliver nondefective instances of the T-schema. This criterion tells us very little of substance about what our theory of truth must look like—whether we understand the word true by appeal to correspondence, coherence, or anything else, we’ll be able, if we’re careful, to design our rules in such a way that they can deliver the right instances of the T-schema. What the criterion does tell us, though, is that our theory, whatever it is, will have to be one on which inferences in accordance with the following rules are sanctioned (where A is grounded):

true-I:
A
—————
“A” is true

true-E:
“A” is true
—————
A

(Of course, this is not to say that these rules must be basic—they may be derived rules.
The point is only that, as a matter of the logic of the biconditional, the availability of inferences in accordance with these rules is equivalent to the availability of instances of the T-schema.) And this means that any adequate language-internal theory of truth is going to deliver (b+). After all, if we have a derivation of some sentence S from some other sentences S1, . . . , Sn, we can easily generate a derivation of ⌜S is true⌝ from ⌜S1 is true⌝, . . . , ⌜Sn is true⌝: starting with ⌜S1 is true⌝, . . . , ⌜Sn is true⌝, we can move to S1, . . . , Sn by n applications of true-E, after which we can move to S via the derivation we already have and then move to ⌜S is true⌝ by an application of true-I. Given any derivation of a sentence S from some other sentences, then, we can use that very derivation, along with true-I and true-E, to reason from the assumption that the other sentences are true to the conclusion that S is true, and then we can, by discharging our assumption, arrive at the conclusion that, if the other sentences are true, S is true as well. So we’ve reasoned our way to the conclusion that (b+) is true: if S is derivable from true premises, then S is true.24 This simple argument has given us exactly the result we wanted. But there’s a worry that we’ve missed the point. What’s really in question (so the objection might go) has little to do with the relationship between S and ⌜S is true⌝—our concern is instead with whether our derivation of S from the premises is itself correct. So the inferential moves licensed by the T-schema, it seems, are neither here nor there.
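Before turning to this worry, it may help to set out the derivation-lifting construction just described in schematic form (the layout and notation here are my own, not the dissertation's):

```latex
% Schematic form of the argument for (b+): given a derivation D of S
% from S1, ..., Sn, we lift it to a derivation of "S is true" from
% "S1 is true", ..., "Sn is true".
\[
\begin{array}{ll}
\ulcorner S_1 \text{ is true}\urcorner,\ \dots,\ \ulcorner S_n \text{ is true}\urcorner
  & \text{(assumptions)} \\
S_1,\ \dots,\ S_n
  & \text{($n$ applications of true-E)} \\
\quad\vdots\ \mathcal{D}
  & \text{(the derivation we already have)} \\
S
  & \\
\ulcorner S \text{ is true}\urcorner
  & \text{(one application of true-I)}
\end{array}
\]
```

Discharging the assumptions then gives the conditional: if S1, . . . , Sn are all true, S is true as well, which is just (b+) applied to this derivation.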
Let’s suppose we’re wondering about the status of our belief that all vixens are foxes, arrived at via our meaning-conferring rules, as follows:

[x is a vixen](1)
x is a fox (by vixen-E2)
If x is a vixen, then x is a fox (by if-I: (1))
All vixens are foxes (by all-I)

Given that the status of this belief is what’s at issue, it’s hard to see what help it can be to repeat our reasoning with the addition of a true-I inference:

[x is a vixen](1)
x is a fox (by vixen-E2)
If x is a vixen, then x is a fox (by if-I: (1))
All vixens are foxes (by all-I)
“All vixens are foxes” is true (by true-I)

What’s in question, after all, is whether our reasoning has led us astray. To answer simply by tacking on an additional inference to this very reasoning, it seems, is just to fail to understand what’s being asked. In short, the worry isn’t about whether the truth of the sentence “All vixens are foxes” really follows from all vixens’ being foxes; it’s about whether all vixens are really foxes in the first place.25
24 We’ve been operating with (b+), which is stated in terms of sentential truth, but I want to note that an analogous argument can be given for the preservation of propositional truth. We just need to give rules for true as attributed to propositions rather than to sentences:

truep-I:
A
—————
The proposition that A is true

truep-E:
The proposition that A is true
—————
A

With these rules, we can show that (e.g.) the proposition that all vixens are foxes is true.
25 Some worry of roughly this sort appears to be behind Horwich’s conviction that, even given his deflationism, meaning-constituting regularities of use can’t do substantive epistemological work. Here’s one recent expression of that conviction:
There’s no valid route from the fact that [our accepting as true the sentence “The bachelors are the unmarried men” is meaning-constituting for the word bachelor] to the conclusion that it is true – and thereby, via the disquotational truth schema, to the further conclusion that bachelors are unmarried men. Granted, we cannot [accept that sentence in a meaning-constituting way] without being sure that bachelors are unmarried men. But such a conviction…could nonetheless be false – our being absolutely certain that p doesn’t entail that p. (2013: 119)
See also, e.g., his 1998a: chap. 6 and his 2005: chap. 6.

What are we to make of this objection? It does seem right that what needs to be shown has little to do with the status of inferences to and from ⌜S is true⌝. (We saw in Chapter 2 above, after all, that what conventionalists are trying to explain is the reliability of our object-level beliefs in certain domains, not just the reliability of our ascriptions of truth to sentences in those domains, and so inferences involving these ascriptions of truth don’t seem to be to the point.) But what does need to be shown? When we worry that the reasoning leading to S in the first place might be incorrect, what, exactly, are we worrying about? In other words: what would it be, in this context, for the reasoning leading to our acceptance of (e.g.) “All vixens are foxes” to have led us astray? The answer is, in a way, obvious: for the reasoning in question to have led us astray is just for the sentence “All vixens are foxes” to fail to be true. Remember, though, that we’re operating here with a language-internal theory of truth: all we’re trying to show at the moment is that language-internal theories entail (b+). And if what’s at issue is whether “All vixens are foxes” is true, where the notion of truth that’s in play is language-internal, then the above proof of the truth of that sentence does show what needs to be shown. The question of whether the sentence is true, after all, can on a language-internal theory arise only within the language, and the above argument shows that we can answer this language-internal question just by reasoning according to the rules of the language itself. Or, to put the point another way: what the above argument shows is that, insofar as we’re able to understand the language well enough to ask the question at all, we’re also equipped to answer it. (And note that this applies regardless of whether, in asking the question, we make any explicit appeal to the notion of truth. The original bit of reasoning leading to “All vixens are foxes” shows that, if we don’t make any mention of truth at all but instead just ask whether all vixens are really foxes, a competent speaker will be able to arrive at an affirmative answer by reasoning according to the rules of the language.26)

The lesson here is that, given our assumption that the notion of truth we’re operating with is a language-internal notion, it’s trivial that all sentences acceptance of which is required by the rules of the language are true. So, if we’re looking for a guarantee that the inferences sanctioned by our meaning-conferring rules are correct, where correctness is truth preservation, then our argument for (b+) does indeed provide what we’re looking for.

Here an analogy will be helpful. Consider the roles of a model-theoretic semantics and a proof system in standard presentations of, say, classical first-order logic. The semantics, which lays out the conditions under which the various sentences of the language are to count as true in a given model, is generally given priority—the expressions of the language are said to be defined model-theoretically.
The proof system, then, is secondary: it’s a set of rules devised for the purpose of reasoning within the language, with the restriction that those rules must respect the semantics—they must allow a sentence A to be derived from a set of sentences Γ just in case the semantics is such that A is true in every model in which the sentences in Γ are true. On this picture, it may turn out that we’ve made a mistake in devising our inference rules, that there are models in which they fail to preserve truth. We demonstrate soundness and completeness, then, in order to show that we haven’t made any such mistake: the point is to justify the rules of the proof system by showing that they adequately reflect the model-theoretic semantics. On this way of proceeding, it’s a sensible question whether the rules of the proof system are correct, a question with a nontrivial answer. But it’s possible to approach our logic differently: we might give the proof system priority and let the model-theoretic semantics be secondary. If we do so, it will be a condition of adequacy for our model-theoretic semantics that the semantics be set up so as to deliver the result that all and only the inferences sanctioned by the rules of our proof system preserve truth in every model. And on this way of proceeding, there’s no need to justify the rules of our proof system, no chance that they might fail to preserve truth in some model: they settle what’s correct, and if the model-theoretic semantics disagrees, that means only that the semantics itself needs to be revised. To demonstrate soundness and completeness, then, is on this picture to show that the semantics is adequate.27
26 To paraphrase my colleague, Miquel Miralbés del Pino, if the question is “Are all vixens foxes?”, there are only two sensible answers: “Yes” and “Huh?”

An approach to language on which truth is language-internal is analogous to this latter approach to logic. The rules of the language settle whether (e.g.)
all vixens are foxes, and our notion of truth, in order to be adequate, must be devised so as to give the right verdict here. And that means the worry that our meaning-conferring rules might lead us astray (in the sense of failing to be truth-preserving) can be answered trivially. To put the point somewhat impressionistically, the question of whether an inference is correct in the sense of being truth-preserving is a question that, on a language-internal approach to truth, is to be asked within our language, with all its rules in place, and so the correctness of our language can’t really be what’s in question: insofar as truth is language-internal, it’s going to be trivial that inferences sanctioned by our meaning-conferring rules are indeed truth-preserving. On a language-internal theory, then, as on a metalinguistic theory, we have a guarantee that inferences sanctioned by our language’s conventional rules of inference will be truth-preserving. This means that, on either sort of inferentialist theory, conventions can do the work of explaining, without appeal to facts about what the convention-independent world is like, how it is that we, just by being competent speakers of our language, manage to be reliable in certain nonempirical domains. And as we saw in Chapter 2, this can be so only if conventions have the power to make the world one way rather than another. So any theory that’s inferentialist in the sense under discussion here just is a conventionalist theory.
27 Dummett (1973/1978a) considers this way of thinking about soundness and completeness, but he doesn’t endorse it.

There is, though, a further worry. We’ve seen that, according to the sort of inferentialism developed here, a full theory of what it is for a declarative sentence to mean something
(where this includes a theory of what it is for that sentence to have the truth conditions it does) can be given just by appeal to conventional rules of inference—this fact is what underlies the above demonstration that, on an inferentialist approach, our language’s rules of inference can’t fail to be truth-preserving. But notice: even if a full theory of what it is for a sentence to mean something can indeed be given just by appeal to conventional rules, it doesn’t follow that every set of rules results in our sentences being legitimately meaningful. There may well be restrictions on what a set of rules must be like in order to be added to our language at all.28 (In fact, it’s generally thought to be obvious that there are such restrictions: badly behaved expressions such as Arthur Prior’s infamous tonk (1960) are taken to demonstrate that, even on an inferentialist approach, there’s some sense in which conventional rules of inference can be defective.) And that means that, even given the above demonstration that our language’s rules of inference are guaranteed to be truth-preserving, it’s not clear that inferentialists can complete the project of providing a full explanation of our near-perfect reliability in certain domains—in order to provide such an explanation, after all, inferentialists would need to explain how it is that we always manage to settle on sets of conventional rules that can indeed be added to our language, and nothing I’ve said here shows that they can do so. So, in order to demonstrate (as I said I would) that the inferentialist approach developed here is one on which linguistic conventions can do the epistemological work conventionalists need them to do, I need to show that there’s a way for inferentialists to close the explanatory gap we’ve just described. I take up this task in the next chapter.
28 Of course, an inferentialist approach does rule out any view on which such restrictions have their source in facts about the convention-independent world, such as the denotation-fixing view discussed above. But the convention-independent world isn’t the only possible source of these restrictions. 103 Chapter 4 Whence Admissibility Constraints? Tolerance and Apriority It is not our business to set up prohibitions, but to arrive at conventions. —Rudolf Carnap 1 Conventionalism without apriority? A threat to the project We’ve seen that, on the inferentialist approach introduced in the previous chapter, we have something that looks superficially like an explanation of our near-perfect reliability in cer- tain nonempirical domains: it turns out that, just as matter of what it is (on an inferential- ist view) for an inference to be truth-preserving, any inference sanctioned by one of our language’s meaning-conferring rules—where these rules are conventionally chosen and are such that it’s no mystery how we can reason in accordance with them without error—is guar- anteed to be truth-preserving. So we might imagine that our vindicatory project is complete (at least on the assumption that the inferentialist approach we’re exploring here is tenable). But this isn’t quite right—there are two gaps still to be filled. The first is the one we dis- cussed at the close of the previous chapter: we haven’t yet shown that inferentialists can fully explain our reliability, since we haven’t shown that they can give an explanation of our abil- ity to avoid settling on defective sets of conventional rules (i.e., an explanation of how it is that we manage always to pick out sets of rules that can genuinely be added to our language). 
And the second is that, even if inferentialists can in the end explain our reliability, it doesn't follow that they can provide a story on which the beliefs in question are in good epistemic standing (i.e., that they're blameless, justified, and warranted). And this is something they need to be able to do: demonstrating that these beliefs are epistemically justified is, after all, part of our vindicatory project.1 (Here, as in Chapter 1, I'll be directing my attention, for the most part, to the justification of the beliefs in question—it will generally be obvious that analogous things can be said about these beliefs' blamelessness and warrant.) In order to show that our inferentialist approach gives us the resources to complete our vindicatory project, then, we need to show that, on that approach, both of these gaps can be filled.

1 One might think that explaining our reliability is the hard part and that showing our beliefs to be justified is a comparatively easy task. But even if this is so, it's a task that needs to be done.

The traditional strategy here is as follows. In order to fill the first gap, it's claimed that any set of conventional rules of inference can genuinely be added to our language—if this claim is correct, after all, it's no mystery that we manage always to pick out sets of conventional rules that are genuinely meaning-conferring, since any set of conventional rules can be meaning-conferring. The claim here, notice, is a version of Rudolf Carnap's principle of tolerance, according to which "everyone is at liberty to build up his own logic, i.e. his own form of language, as he wishes" (1934/1937: 52). And it's a claim that, though undeniably radical, is natural from an inferentialist perspective. Here's Michael Dummett (1975/1978b: 218), in a discussion of the status of certain forms of reasoning in mathematics, explaining the line of thought (though he doesn't endorse it):

If use constitutes meaning, then, it might seem, use is beyond criticism: there can be no place for rejecting any established mathematical practice, such as the use of certain forms of argument or modes of proof, since that practice, together with all others which are generally accepted, is simply constitutive of the meanings of our mathematical statements, and we surely have the right to make our statements mean whatever we choose that they shall mean.

We can give a precise statement of the claim in question. Let L be a language, and let w be some word not in L (where a word, as discussed in Chapter 3 above, is a symbolic form plus membership in exactly one grammatical category). Then, for any set of rules of inference R, call R a standard set of rules for w relative to L just in case there's at least one rule in R such that (i) w appears in that rule and (ii) that rule isn't derivable from the rules already in L together with the rules in R in which w doesn't appear, if there are any such rules. (The point of these minimal constraints is just to ensure that the rules in R will give some substantial inferential role to w—i.e., that they'll place requirements on the use of w that aren't placed on the use of every word in w's grammatical category.) Our claim, then, can be stated as follows:

Tolerance.
For any language L and any word w not in L, if R is a standard set of rules for w relative to L, then there's a language LR, consisting of L with the rules in R (along with any new vocabulary that appears in those rules, including w itself) added, such that the rules in R in which w appears are meaning-conferring for w in LR.

If Tolerance is indeed reasonable from an inferentialist perspective, then inferentialists can explain our ability to avoid settling on defective sets of rules and so can give a full explanation of our reliability in the nonempirical domains at issue here.

The second gap is filled in a similar way. The basic idea is this: If we have the right to make our sentences mean what we want them to mean, and if what it is for our sentences to mean what they do is for our reasoning to be governed by certain conventional rules of inference, then it's hard to see how we can be faulted for reasoning in accordance with those rules. So we might claim that meaning and epistemic justification are intimately related in such a way that all beliefs formed entirely via reasoning sanctioned by our meaning-conferring rules are going to be justified just because they were formed via reasoning sanctioned by our meaning-conferring rules.2

2 There are, I acknowledge, well-known counterexamples to the claim that all beliefs arrived at via even logical rules of inference are justified. For instance, the preface paradox introduced by D. C. Makinson (1965) shows that justification isn't closed even under inferences sanctioned by and-I. This sort of failure of closure, though, occurs only if we start from premises that are less than certain, and the cases we're interested in here are cases of provability—i.e., are cases in which we start from no premises at all. (Joshua Schechter (2013b) has pointed out that first-personal doubt about our ability to apply our own rules without error can be another source of failures of closure, but I propose to ignore these sorts of higher-order considerations for the time being.)

This claim—roughly, that "if a belief-forming method helps to constitute the conceptual role for a concept, any thinker possessing the concept is justified in employing the method"—is a version of the thesis David Enoch and Schechter (2006: 691) call the Meaning-Justification Link, or The Link for short. We can give a precise statement of it as follows:

The Link. If any rule r is meaning-conferring for some word w in a language L, a competent speaker of L is thereby epistemically justified in inferring according to r.

If The Link is reasonable from an inferentialist perspective, then inferentialists can provide a story according to which the beliefs we're trying to vindicate are in fact epistemically justified.

What this means is that, if the traditional strategy is indeed available to inferentialists, both gaps can be filled, and our vindicatory project is (from an inferentialist perspective) complete. Unfortunately, though, badly behaved expressions such as Arthur Prior's infamous tonk (see his 1960) seem to show conclusively that there's something seriously wrong with this strategy. Tonk, recall, is Prior's invented connective whose I-rule is the same as one of the I-rules for or and whose E-rule is the same as one of the E-rules for and:

      A                    A tonk B
  ────────  tonk-I         ────────  tonk-E
  A tonk B                     B

The problem is that adding tonk to a language will trivialize that language: from any sentence A it will be possible to derive any sentence B via a tonk-I inference followed by a tonk-E inference. Even worse, if that language already has some sentences that are provable (from an empty premise set)—as our language has in the logical truths—then adding tonk to it will render every sentence of the language provable. The rules of the new language, then, will sanction acceptance of every sentence in that language. But that seems absurd.

Now, some theorists, such as Prior himself, have suggested that the lesson of tonk is that we should abandon an inferentialist approach altogether, that we should acknowledge instead that "only what already has a meaning can be inferred from anything, or have anything inferred from it" (1964: 191). But it's not clear that such a drastic step is required. Theorists with inferentialist sympathies have tended to take away a different lesson: that, even on an inferentialist approach, we can deny that the inferences sanctioned by the tonk rules are justified, since we can insist that certain sets of conventional rules of inference are somehow defective and so inadmissible.

Notice, though, that, even if these latter theorists can save inferentialism from Prior's attack, their approach will require abandoning the traditional strategy. After all, that strategy requires both Tolerance and The Link, and those two principles together just entail that inferences sanctioned by the tonk rules are justified: by Tolerance, tonk-I and tonk-E are meaning-conferring in our new language, and so, by The Link, any competent speaker of the new language is thereby justified in inferring according to those rules. To avoid this conclusion, then, we must reject one of those principles. So an inferentialist approach on which we can't be justified in inferring in accordance with the tonk rules is an approach on which the traditional strategy is unavailable.
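The trivialization Prior's rules produce can be made concrete in a toy model. The sketch below is my own illustration, not anything from Prior or the literature discussed here: sentences are modeled as strings, and the helper name derive_via_tonk is a hypothetical label for the three-step derivation that applies tonk-I and then tonk-E.

```python
# Toy model of Prior's tonk (an illustrative sketch, with assumed names).
# tonk-I: from A, infer "A tonk B" for any sentence B.
# tonk-E: from "A tonk B", infer B.

def derive_via_tonk(premise: str, target: str) -> list[str]:
    """Return a three-line derivation of `target` from `premise`."""
    step1 = premise                          # premise
    step2 = f"({premise} tonk {target})"     # tonk-I applied to line 1
    step3 = target                           # tonk-E applied to line 2
    return [step1, step2, step3]

proof = derive_via_tonk("Snow is white", "Grass is blue")
for i, line in enumerate(proof, 1):
    print(f"{i}. {line}")
# The derivation ends with the arbitrary target sentence, whatever the
# premise was: in the tonk-language, anything is derivable from anything.
```

Since the premise and the target are arbitrary, the construction shows that every sentence of the tonk-language is derivable from any other, which is just the trivialization at issue.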
In short, if inferentialists are to avoid Prior's result, they must acknowledge that a given set of conventional rules of inference for a word might be inadmissible, which means they must acknowledge either that those rules might fail to give the word a genuine place in the language at all or that they might give the word a place in the language but nevertheless fail to genuinely justify the inferences they sanction—what it is for a set of rules to be inadmissible in the sense required is, after all, just for those rules to fail in one of these ways. And in that case, an inferentialist approach can help us complete our vindicatory project only if we can give some explanation of speakers' ability to avoid these inadmissible sets of rules. Otherwise, we might end up with a kind of failed conventionalism, a conventionalism without apriority.3

So how are we to proceed? The obvious task, I take it, is to identify what the conditions are for a set of rules to be admissible—only when this is done can we begin trying to explain speakers' ability to reliably settle on sets of rules that meet those conditions. And it's clear enough what general form these conditions will have to take. Tonk-I, after all, is unproblematic on its own—indeed, it's just the same as one of the I-rules for or. And similarly for tonk-E, which is just the same as one of the E-rules for and. The problem isn't the rules themselves but the mismatch between them: there's some sense in which they fail to fit together. The relevant constraint, then, must be one that will rule out such mismatches. So we can say that the rules for a word, in order to be admissible, must fit with one another in a particular way. They must, in the standard terminology, exhibit harmony. But to say that is to name the problem, not to solve it. The question that needs to be answered is what form, exactly, a harmony requirement should take.
There's no shortage of disagreement in the literature as to just what harmony comes to. But what seems to me to have been sometimes underappreciated is that, in order to have a satisfying answer to this question—i.e., the question of what the constraints are on admissibility—it's important to understand on what basis admissibility is so constrained. That is, it's important to have an answer to the following question: whatever the constraints are, what's the underlying reason that those constraints are the right ones? As Schechter and Enoch (2006: 694) note, after all, what we want here is "not some unmotivated qualification introduced purely to deal with counterexamples, but a principled way of addressing them". So I suggest that, rather than directly arguing for a particular harmony requirement, we should start with the question of what the source of admissibility constraints is supposed to be in the first place. This question, then, will be my primary concern in this chapter.

3 As I mentioned in Chapter 3 above, Jared Warren (2015a, 2015c) suggests that any inferentialist view on which Tolerance (he calls his version of the principle the Meaning Rules Connection) is false doesn't count as a conventionalist view at all. I'm not convinced this is right. We've already seen, after all, that, on an inferentialist approach, the inadmissibility of a set of rules can't be a matter of those rules' failure to match up in the right way with the convention-independent world. Inadmissibility, then, must for inferentialists be a matter of the rules themselves, of how they fit together. (We'll discuss this in more detail below.) And in that case, the falsity of Tolerance doesn't seem to tell us that the sentences of (say) logic can't be true by convention—it tells us only that there are structural constraints on what sorts of conventions can make sentences true.
With that in mind, we can return our attention to the traditional strategy whose failure tonk is supposed to have demonstrated. Again, if inferentialists are to allow for constraints on admissibility and so to avoid Prior's result, they must abandon either Tolerance or The Link. So whatever reason there is for constraining admissibility is going to be, at least in part, a reason for abandoning at least one of those two principles. So we can focus our discussion by investigating what reason there might be, from an inferentialist perspective, to abandon either principle.

The bulk of the present chapter, then, is devoted to carrying out this investigation. I consider the various suggestions for constraining admissibility that have been offered by inferentialists, with a view to determining whether there is implicit in any of these suggestions a genuine reason for abandoning either Tolerance or The Link. It turns out that (with one minor qualification I discuss in §3) there isn't—no inferentialist-friendly basis for abandoning either principle has ever been offered. Furthermore, there's reason to believe that no such basis can be offered. The conclusion of this part of the chapter, then, is that inferentialists should continue to accept both Tolerance and The Link.

There's good and bad news here. The good news: if inferentialists are indeed committed to Tolerance and The Link, that means the traditional strategy for completing our vindicatory project turns out still to be available to them. The bad news: it also means inferentialists are committed to the claim that, by adding tonk to our language, we can become epistemically justified in inferring any sentence from any sentence—indeed, can become epistemically justified in accepting every sentence of the language. And this, it might be thought, is just to say that inferentialism itself leads to absurdity and so must be abandoned.
What I want to suggest, though, is that abandoning inferentialism on these grounds would be too hasty. I close this chapter, then, by arguing that we can explain, consistently with Tolerance and The Link, just why it is that we take the tonk rules to be unacceptable. The idea, roughly, is that the problem with those rules is entirely pragmatic: if we decide tomorrow to add tonk to our language, the result will be a legitimate, unproblematic language that, as it happens, isn't at all useful to us. That is, the language will be such that we're justified in accepting every sentence of it, but those sentences won't say, in the new language, what they say in our present language: they'll all be (something like) logical truths. So we won't be able to express in the new language anything that isn't expressible via a logically true sentence of our present language.4 Such a language would, for obvious reasons, be pragmatically far worse than our present language. That, I claim, is our reason—our only reason—for denying tonk (and similar expressions) a place in our language. If this is right, then tonk presents no real problem either for inferentialism itself or for conventionalists' traditional strategy for completing our vindicatory project. And that means this strategy remains available. A fuller explanation of that conclusion, though, will have to wait until §5. First we need to explore inferentialists' prospects for rejecting Tolerance and The Link.

2 A note on truth preservation

We can start by noting that the following is a matter of simple logic: anyone who wishes to insist that tonk is inadmissible is thereby committed either to the thesis that there can be words that are inadmissible despite having inferential roles or to the thesis that there can be standard sets of rules that nonetheless fail to determine genuine inferential roles for the words that appear in them.
So we need to determine whether we can make sense of either of these theses from an inferentialist perspective. These theses are best addressed in turn—we'll discuss the former in §3 and the latter in §4. But before we begin in earnest, it will be helpful to note that a number of strategies for dealing with the tonk problem that have been influential in the literature, whatever their other merits, are ruled out immediately on an inferentialist view. The reason is simple: these responses proceed by appeal to the claim that there's no way to set up our language such that the tonk rules are truth-preserving, and this claim is, for reasons we discussed in Chapter 3, incompatible with an inferentialist perspective.

4 Incidentally, Warren (2015c: 9) has independently arrived at this same conclusion:

Despite its complexity, Tonklish [i.e., English with tonk added] is an expressively weak language; for that reason, Tonklish is a useless language. But to say that a language is useless is not to say that it is impossible.

For further discussion of Warren's view, see §4 below.

Consider, for instance, J. T. Stevenson's response to Prior. Stevenson claims that, although we can give a word meaning by laying down some rules of inference, this doesn't yet show that inferences in accordance with those rules are justified: "In order to completely justify an inference…we must vindicate the rule [permitting that inference] by showing that it…permits only valid inferences, an inference being valid in this sense if and only if it is such that when the premises are true the conclusion must be true" (1961: 125–126). His view, then, is one on which we can justify an inference only by demonstrating it to be truth-preserving—in effect, he denies The Link.
The problem with this view (for inferentialists, anyway—I make no claim about the merits of the view on a noninferentialist approach) is that, although Stevenson insists that the tonk rules can't be vindicated, we know that this claim, according to inferentialism, is incorrect. As we've seen, after all, our inferentialist approach entails (given that the tonk-language is a genuine language at all) that all of the rules of the tonk-language are trivially truth-preserving. (This, notice, entails, further, that the sentences of the tonk-language, since all of them are provable, are all true, and true no matter what the convention-independent world is like—they're something like trivial logical truths.) What this means is that Stevenson's strategy—i.e., denying The Link on the grounds that the tonk rules, unlike legitimate rules, can't be shown to be truth-preserving—is unavailable to inferentialists. And the same goes for any attempt to deny The Link on the basis of some truth-theoretic distinction between sets of rules that are admissible and those that aren't. On an inferentialist view, after all, there just can't be any such distinctions—all meaning-conferring rules are trivially truth-preserving.

Similar considerations also show that there can be no truth-theoretic grounds for abandoning Tolerance. Consider, for instance, the response to Prior endorsed by both Steven Wagner (1981) and Christopher Peacocke (1987): that tonk can't genuinely be added to our language at all because there exists no possible denotation for it that can make the tonk rules truth-preserving. It's evident, once again, that this claim, whatever its other merits, is obviously false on an inferentialist view—what Wagner and Peacocke are relying on is something like the denotation-fixing view we discussed in the previous chapter, a view which we've already seen to be incompatible with inferentialism.

Here, then, is where we are.
A set of rules consisting of the rules of our current language together with the tonk rules either does or does not successfully determine inferential roles for the expressions it contains. If it doesn't, then it's not a genuine language, and so our explanation of tonk's inadmissibility is simple: Tolerance is false. So suppose it does. Inferentialists might insist that this isn't enough for tonk to be admissible, that a system in which expressions have inferential roles must meet certain additional conditions in order to genuinely count as a language—despite the fact that it doesn't need to meet any additional conditions in order for the inferences sanctioned by the rules in that system to be truth-preserving. But what's really at stake here? Are we "simply talking about different possible social practices and disagreeing about whether they deserve the honorifics 'language' or 'meaningful'" (Warren 2015c: 19)? If so, this isn't a disagreement that's of any interest, since the distinction between those systems that do count as languages and those that don't isn't doing any theoretical work. Our real concern here is just whether there can be systems of rules such that speakers aren't justified in inferring in accordance with those rules despite the fact that those rules are trivially truth-preserving. This, then, is the question that should be our focus.
The point here is that, if inferentialists are to avoid the conclusion that we can add tonk to our language and thereby come to be epistemically justified in reasoning according to the tonk rules, they must motivate one of the two following theses: (i) that a standard set of rules for an expression might fail to give that expression any inferential role whatsoever or (ii) that some rules that successfully give an expression an inferential role might nevertheless be such that, despite its being trivial that those rules (insofar as they're rules of a genuine language) are truth-preserving, those rules can't be adopted in such a way that they epistemically justify the inferences they sanction. We'll consider these theses in turn.

3 Rules and roles: Followability conflicts

As far as I can tell, the only way for our rules to fail to determine any inferential roles at all for our expressions is for those rules to be somehow unfollowable. If the rules are followable, after all, then the role that a given word plays in inference will just be given by its place in the web of inferences that proceed according to the rules. The question, then, is whether there are sets of rules that are jointly unfollowable, and the answer is that there are. Any rules that give contradictory directions are thereby jointly unfollowable, and there can certainly be rules that give contradictory directions. Suppose, for instance, that one of the rules of a chesslike game directs white to play first and another directs black to play first—those rules can't both be followed, and so the game is defective. And similarly for language games: if the rules of inference are in conflict in the sense of giving contradictory directions, then they're jointly unfollowable and so can't specify inferential roles for the words contained in them. Both Nuel Belnap (1962) and Paul Horwich (1998a) claim that this sort of thing is what's happening in the case of tonk.
In Belnap's case, the relevant claim is that our understanding of inference isn't exhausted by the first-order deductive rules I've been discussing; we also have rules governing the notion of deducibility itself. In other words,

we are not defining our connectives ab initio, but rather in terms of an antecedently given context of deducibility, concerning which we have some definite notions. …If we note that we already have some assumptions about the context of deducibility within which we are operating, it becomes apparent that by a too careless use of definitions, it is possible to create a situation in which we are forced to say things inconsistent with those assumptions. (Belnap 1962: 131)

The idea is that our language includes structural rules that tell us how to understand deducibility itself, and these rules will require us not to accept certain claims that, were we to add tonk to our language, we'd also be required to accept. And since it's not possible to both accept and fail to accept the same claim at the same time, the addition of the tonk rules to our language would make our rules of inference jointly unfollowable.

The structural rules in question are highly general—they tell us about deducibility relations that obtain universally, independently of any sentence's internal form. We can think of them as some of the meaning-conferring rules for our deducibility sign. Here are some examples of the sort of rule Belnap has in mind, given as Gentzen-style sequent rules:

    A1, . . . , An ⊢ C                    A1, . . . , Am ⊢ B    C1, . . . , Cn, B ⊢ D
  ──────────────────── Weakening         ──────────────────────────────────── Transitivity
   A1, . . . , An, B ⊢ C                     A1, . . . , Am, C1, . . . , Cn ⊢ D

The crucial idea, for Belnap (1962: 132), is that the full set of structural rules will "express all and only the universally valid statements and rules expressible in the given notation".
That is, if some deducibility relation obtains between sentences of arbitrary form, that fact will be provable via the structural rules, and so we're not to accept any claim about deducibility relations between sentences of arbitrary form unless that claim is provable via the structural rules. Note that this requirement doesn't follow from the structural rules alone—what Belnap is proposing is that, on top of the structural rules themselves, there's another meaning-conferring rule for our deducibility sign, one that tells us to refrain from accepting deducibility claims that the structural rules don't prove. And here's one such claim: "A ⊢ B". Belnap's view, then, is that it just follows from our notion of deducibility that it's not the case that any arbitrary sentence is deducible from any other arbitrary sentence.

If all that's right, then our notion of deducibility requires us not to accept "A ⊢ B". But if we added tonk to our language, we'd be required to accept "A ⊢ B", for the simple reason that the rules governing tonk would indeed render every sentence of the language provable (and so derivable from any other sentence). And it's obviously impossible to meet both of those requirements at the same time, which means that the meaning-conferring rules for tonk and the meaning-conferring rules for our deducibility sign are jointly unfollowable.

Horwich's view is similar, but in his case the relevant claim is a bit simpler:

It is plausible that amongst the facts of use that constitute the meaning of "not" is that one not tend simultaneously to assert "p" and "not p". Thus the "tonk" rules cannot be followed by a community that has a term meaning negation. (1998a: 139)

The basic idea here can be stated as follows: the set of meaning-conferring rules for not includes (or implies) a requirement that we refrain from accepting both a sentence and its negation at the same time.
And if that's right, then it's impossible to follow both the meaning-conferring rules for not and the meaning-conferring rules for tonk at the same time, since the rules for tonk would require us to accept every sentence of the language.

Evaluating Belnap's and Horwich's proposals

If either Belnap or Horwich is right about what the rules of our language require, then we have an explanation of why the rules for tonk, if added to our language, would fail to determine an inferential role for that connective. So: do the rules of our actual language really require what Belnap and Horwich say they require? This is, at least in part, an empirical question. It's worth noting, though, that, if the rules of our language are as I've described them—that is, if every rule of our language takes the form of a natural deduction rule—then it's not possible to get either Belnap's or Horwich's view off the ground. Natural deduction rules, after all, tell us only what we're required to accept; they can't tell us anything about what we're required not to accept. Since the rules described by Belnap and Horwich both tell us what not to accept, those rules can be no part of a natural deduction calculus. So if the rules of our language really are representable in a natural deduction calculus, then neither Belnap nor Horwich can be correct.5

5 Note, further, that each of the structural rules laid out by Belnap is trivial given our picture on which a rule of inference is something that directs us to accept a certain sentence when we already accept some other sentences. Weakening, for instance, says that our concept of deducibility is one on which, when a sentence is deducible from some set of sentences, it's also deducible from a superset of that set, and this is trivial on our picture: if the rules tell us to accept a sentence S in all cases in which we accept some other sentences, those rules certainly tell us to accept S when we accept those other sentences and some additional sentences. (Similarly for Transitivity: if our rules tell us to accept a sentence S1 whenever we accept the sentences in Γ and to accept S2 whenever we accept both S1 and the sentences in ∆, then, when we accept the sentences in both Γ and ∆, the rules tell us to accept S1, in which case the rules tell us to accept S2, since we now accept both S1 and the sentences in ∆.) What's striking here is that, though these structural rules follow trivially from our picture of how inference rules work, Belnap's further requirement that we not accept A ⊢ B does not follow from that picture. This suggests that the additional requirement doesn't have the same status as the structural rules.

Let's examine, then, whether it's plausible that the rules of our language are so representable. We can focus our discussion by concentrating first on Horwich's view and the rules for negation. Horwich's claim is that it's part of what constitutes the meaning of our negation sign that we refrain from accepting a sentence and its negation at the same time. But consider a language in which the rules governing the negation sign are just the natural deduction rules for negation:

  [A]   [A]               [¬A]   [¬A]
   ⋮     ⋮                  ⋮      ⋮
   B    ¬B                  B     ¬B
  ────────── ¬-I          ────────── ¬-E
      ¬A                       A

As I've said, it doesn't follow from these rules that we shouldn't accept a sentence and its negation at the same time. One way to see that this is so is to consider what happens if we add tonk to a language whose rules for negation are ¬-I and ¬-E. In that case, the rules for tonk require us to accept every sentence of the language, which means, a fortiori, that they require us to accept the conclusion of any ¬-I inference or ¬-E inference. But that's just what ¬-I and ¬-E themselves require us to do. So if we do what the rules for tonk require, we'll trivially satisfy the requirements set out by the rules for negation—despite the fact that, for each sentence in the language, we'll be accepting both it and its negation at the same time.6 In such a language, then, negation isn't governed by the rule Horwich describes.

6 The standard negation rules' failure to exclude this sort of unintended interpretation is discussed at some length by Carnap (1943: §§16–19)—he demonstrates, among other things, that the principle of noncontradiction (metalinguistically formulated) isn't entailed by the standard deductive rules of propositional logic.

What's true in the neighborhood, though, is that, since the principle of explosion is derivable from ¬-E, the rules of such a language require thinkers either to refrain from accepting both a sentence and its negation or to accept every sentence of the language. And there are practical reasons not to accept every sentence of one's language, since the language will be useless if it doesn't allow thinkers to draw distinctions. So, in circumstances where users of the language have a choice, they will, presumably, be moved by these practical considerations. As a result, these speakers, as long as they're not required to accept every sentence of the language, will tend to behave in accordance with the rule Horwich describes regardless of whether that rule is any part of what constitutes the meaning of the negation sign.7 But what does all this tell us about our actual language?
We know that, if our rules for negation are just ¬-I and ¬-E, then the addition of tonk wouldn’t render the rules of our language jointly unfollowable in the way Horwich claims, and so we know that, if Horwich wants to establish that the rules for tonk would fail to determine an inferential role for that connective in our language, then he needs to show that our rules for negation aren’t just ¬-I and ¬-E but include a requirement that we refrain from accepting any sentence along with its negation. And we also know that he hasn’t shown this. It’s true enough that we do in fact tend to refrain from accepting any sentence along with its negation, but that fact is easy to explain without appeal to the rule Horwich describes: in circumstances in which we’re not just required to accept every sentence of the language, after all, ¬-I and ¬-E by themselves will, given our practical interest in maintaining the ability to use our language to draw distinctions, tend to produce behavior that meets the requirement in question regardless of whether our rules for negation really include that requirement. This observation—together with the fact that, when we try to formalize our own inferential practice, the negation rules we converge on are just ¬-I and ¬-E8—gives us at least some evidence that our inferential practice is governed by rules very much like ¬-I and ¬-E.

Even given all that, it’s of course possible—perhaps even plausible—that, in our actual language, negation’s set of meaning-conferring rules does include the one Horwich describes. As I said, this is an empirical question. What the considerations here show is just that there’s a possible language very much like ours that doesn’t include any such rule.

Let’s suppose, though, that Horwich is right that our actual rules for negation include the one he describes, with the result that the addition of tonk to our language would indeed render the rules of the language jointly unfollowable. Even in this case, Horwich hasn’t yet established everything that needs to be established, for all he’s shown is that we can’t have a language with both the negation rules and the tonk rules. That is, what he’s shown is just that we can’t add tonk to our language without making accommodating adjustments elsewhere, and so we still need an explanation of why we can’t make those accommodating adjustments.9 The question of how to proceed when a proposed addition to the language would cause a conflict is quite general. And one answer to this question is simple: there are extremely strong practical reasons not to radically change the rules for words already in our language, and so our best option will usually be to avoid adopting a new word whose accommodation would require this sort of radical change. But practical reasons aren’t what we need here.

7 Notice that this sort of reasoning can also be used to show that the rule described by Belnap is superfluous. If we already have practical reason to refrain from accepting every sentence of the language, no additional rule of the language is required to keep us from doing so. As long as our rules don’t require us to accept every sentence of the language, they’ll tend to generate behavior consistent with the rule Belnap describes.

8 Or are, at least, equivalent to ¬-I and ¬-E. Standard presentations of classical logic include double negation elimination (DNE) rather than ¬-E, but if we have ¬-I, then it’s trivial to derive ¬-E from DNE and vice versa. (For ease of presentation I’ve assumed that natural-language negation is classical.) It seems to me that ¬-E is a more fundamental rule for our understanding of negation than is DNE, though there’s no need to argue for that claim here.
It’s already clear enough, after all, that there are strong practical reasons for avoiding a connective that would render every sentence of our language provable, and at any rate the fact that we have practical reason to avoid radical changes doesn’t tell us anything about what our language would be like if we did make those changes. In particular, it doesn’t tell us whether the rules of the resulting language would determine inferential roles for the words of that language. And that’s what we need to know.

In any case, the change to our existing language that’s required isn’t even all that radical in the case of tonk, where our rules for negation are purportedly the source of the problem. Suppose again that our actual rules for negation do include the requirement that we refrain from accepting any sentence along with its negation. Then what I’ve shown above is that this requirement is eliminable: given that we have practical reason to avoid accepting every sentence of our language, we can throw out our actual rules for negation and replace them with ¬-I and ¬-E, and this procedure will leave our linguistic practice largely undisturbed. Furthermore, these new rules will play nice with the tonk rules in a way that the old ones didn’t. That is, after we’ve made the proposed adjustment, it will no longer be the case that our negation rules and the tonk rules are jointly unfollowable.

9 It would be a different story if the tonk rules were in conflict with one another. In that case, of course, no accommodating adjustment would be possible. An expression whose rules were in conflict with one another in this way would indeed be one whose rules failed to determine an inferential role for that expression, no matter the language in which it was embedded. But most of the problematic expressions discussed in the literature, including tonk, don’t have this characteristic.
So we might wonder: why can’t we proceed by first making this accommodating adjustment to our rules for negation and only then adopting the rules for tonk? As long as we think the only problem with the tonk rules is that they’re in conflict with other rules of our language, we have no resources with which to answer this question, for to proceed in the proposed way would resolve the conflict that, by hypothesis, is the only thing disallowing the adoption of tonk. So if we want to say that this sort of move isn’t available to us—indeed, if we want to be able to claim that tonk and negation aren’t in some sense on equal footing when it comes to admissibility—then we must deny that this sort of followability conflict is by itself the source of tonk’s inadmissibility into our language. So Horwich’s view here isn’t tenable.

It’s clear enough how analogous reasoning can be applied to Belnap’s view. Belnap, recall, takes there to be a followability conflict between the rules for tonk and the rules for our deducibility sign. The question in his case, then, is why we can’t make an adjustment to the deducibility rules in order to accommodate tonk—again, an adjustment of this kind would resolve the conflict that is purportedly all that’s keeping us from adding tonk to our language.

And the point generalizes further. It applies in any case where adding a new word to our language would render the rules of the language jointly unfollowable. For there will always be some accommodating adjustment we can make that will resolve this followability conflict. In some cases, to be sure, the only way to resolve the conflict will be to make an adjustment that’s quite radical. But no matter how radical the required adjustment is, we’ll always be faced with the question of why we’re not permitted to make it. And consideration of the followability conflict itself just can’t help us answer that question, since the adjustment, by hypothesis, would resolve that conflict.
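The formal point in play throughout this section—that rules of the natural-deduction form only ever require acceptance, so that a thinker who accepts every sentence of the language thereby satisfies them trivially—can be made vivid with a small computational sketch. The model, the toy language, and the rule encoding here are illustrative assumptions of my own, not anything drawn from Belnap or Horwich:

```python
# Model a rule of inference as a requirement of the form "if you accept all
# the premises, accept the conclusion", and check whether a given set of
# accepted sentences satisfies every such requirement.

def satisfies(accepted, rules):
    """True iff, for every rule whose premises are all accepted,
    the rule's conclusion is accepted as well."""
    return all(
        conclusion in accepted
        for premises, conclusion in rules
        if all(p in accepted for p in premises)
    )

# A hypothetical four-sentence toy language containing the tonk rules.
language = {"A", "not-A", "B", "A tonk B"}
rules = [
    ({"A"}, "A tonk B"),   # tonk-I
    ({"A tonk B"}, "B"),   # tonk-E
]

# Accepting every sentence trivially satisfies any rule of this form:
# each rule's conclusion is already accepted, so nothing is violated.
assert satisfies(set(language), rules)

# By contrast, a state that accepts A but not "A tonk B" violates tonk-I.
assert not satisfies({"A"}, rules)
```

Note that the empty acceptance state also satisfies the rules vacuously: requirements of this form constrain only what must be accepted once certain other sentences are accepted, never what must not be accepted.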
A second followability argument

Horwich does provide another followability argument for denying that tonk can be a part of any genuine language, an argument that appeals to the role of belief in motivating action:

One of the characteristic features of belief states is that they enter into deliberations resulting in action. …In so far as only one of several possible actions, in a given situation, is actually performed, it must be…that the agent possessed the beliefs that would lead, given his desires, to that action, and did not possess any of the combinations of belief that would have dictated some other action. Therefore it is necessary that regularities in the use of certain expressions specify circumstances in which certain things are not to be believed. (1998a: 139fn)

He clarifies this line of argument later:

Since it is essential to belief states that they play a distinctive role in our deliberations—in our deciding to do one thing rather than another—[the propensity to believe everything] would be inconsistent with the very notion of belief. (2005: 153)

The idea seems to be that the tonk rules, which would require us to accept every sentence of our language, are unfollowable in the sense that to reason in accordance with them would be to cease having beliefs at all.10 Paul Boghossian (2003a: 244) agrees that the tonk rules are unfollowable in this way: “To follow those rules one would have to be prepared to infer anything from everything, and that is no longer recognizable as belief or inference”.

10 Incidentally, Horwich seems to have gradually come to take this claim—not the negation point I’ve been discussing—to be what’s really significant in the case of tonk. While the claim appears only in a footnote in his 1998a, it and the negation point share space in the main text in his 2005, and by his 2008, the negation point has dropped away completely.
As I understand this proposal, it rests on the following claim about the nature of belief: it’s of the essence of belief that beliefs play a role in rationally motivating action, so that (given the connection between language and belief) it’s just not possible for a language to have rules that would render speakers unable to use beliefs to help them choose among possible courses of action. And since the tonk rules would require us to have the same attitude toward every sentence of the language, those rules would indeed render us unable to use our beliefs to choose among possible courses of action, and so it’s impossible for us to admit the tonk rules into our language. So: can this proposal be made to work?

The first thing to note here is that belief, as usually understood, is an attitude toward propositions rather than sentences and so isn’t really the attitude we should be discussing. We should first consider the essential nature of the analogous attitude toward sentences, which I’ve been calling acceptance. The question at hand, then, is whether adoption of the tonk rules would make it impossible to accept our sentences, not whether adoption of those rules would make it impossible to believe propositions expressed by those sentences. What our rules require, after all, is only acceptance of sentences. So, since what’s in question at this point is just whether the rules are followable, the nature of acceptance, not belief, is what’s relevant. (This isn’t a problem for Horwich himself—he takes acceptance of a sentence to be equivalent to belief in the proposition it expresses. Still, it’s worth clarifying.)

What are we to make, then, of the claim that it’s essential to our acceptance of sentences that this acceptance plays a role in rationally motivating action? I want to start by noting that, on certain independently motivated accounts of acceptance, that claim is straightforwardly false.
Take, for instance, the behaviorist account on which acceptance of a sentence is to be understood in terms of a being’s dispositions to respond in certain ways to interlocutors’ assertions of that sentence, along the lines W. V. O. Quine (1960b) describes in his discussion of the field linguist’s attempt to understand the speaker of an unknown language based on patterns of assent and dissent. It seems clear enough that, on such a Quinean account, the tonk rules turn out to be followable. Any being, after all, can follow those rules just by accepting every sentence of the language, and it’s easy enough to imagine a being disposed to respond to every assertion in a manner characteristic of assent. On this sort of account, acceptance need not play any role in rationally motivating action.

To be clear, my claim isn’t that this particular account is correct. But we certainly need some account of acceptance, and, as Horwich well knows,11 that account will need to be given in nonsemantic terms. The question is whether we should include in our account essential connections between acceptance and nonlinguistic behavior,12 and the point is just that it’s not obvious that we should: accounts have been given that don’t include such connections. The claim that it’s not simply a contingent fact that the sentences we accept play a role in rationally motivating our nonlinguistic behavior—i.e., the claim that this connection between acceptance and action is essential to acceptance—requires argument, and none, so far as I can see, has yet been given.

Horwich’s view, of course, is that an adequate account of acceptance will include such essential connections. As I’ve been suggesting, I’m inclined to disagree. But it’s not clear how best to settle the question.
Thought experiments might help—it seems to be a consequence of Horwich’s view, for instance, that it’s not possible for beings physically incapable of nonlinguistic behavior to accept any sentences at all, and that seems problematic. Ultimately, though, I suspect that our disagreement here is merely terminological. It’s not in question, after all, that it’s possible in principle for a being’s linguistic behavior to be such that the being appears, at least superficially, to accept all the sentences of the tonk-language. (The Quinean speaker mentioned above is one example of such a being.) The question is this: do we really want to say that such a being counts as accepting those sentences, or do we want to conceive of acceptance as something more robust? And it’s here where Horwich and I are inclined to give different answers. This difference, though, doesn’t seem particularly substantive, especially since Horwich and I are in full agreement that the tonk-language is so obviously useless that it’s hard to think of a realistic scenario in which a being would ever adopt it.

That said, I think there’s reason to prefer my answer over Horwich’s. To see what that reason is, note first that one effect of adding tonk to our language is to turn every sentence into one we’re required to accept regardless of what the convention-independent world is like. It follows, as we’ve seen, that, insofar as the tonk-language counts as a genuine language at all, it’s a language all of whose sentences are (something like) logical truths.

11 See, e.g., his 1998a: 94–96 and his 2005: 37.

12 The distinction between linguistic behavior and nonlinguistic behavior is, admittedly, somewhat rough and ready. The basic idea, though, is simple enough: we can distinguish behavior that’s still in some sense inside the language game from game-external behavior we engage in to manipulate the world for our ends, and the behavior Horwich is concerned with is the latter.
And even if it’s true that our attitudes toward empirical sentences have a substantial role to play in rationally motivating action, it’s plausible that our attitudes toward logical truths do not. But this doesn’t prevent us from accepting logical truths.

To put the point in terms of belief rather than acceptance: In our actual language, many beliefs do play a role in motivating action. But logical truths seem to be such that beliefs in them can’t rationally motivate action in the way empirical beliefs can. And this is unproblematic. Plausibly, it’s just a fact about the nature of belief that beliefs in logical truths don’t play the sort of motivating role played by empirical beliefs. And we can think of the tonk-language as one in which every sentence is a logical truth, in which case it’s perfectly consistent with the nature of belief that the beliefs of a speaker of that language will play no role in motivating action. If that’s right, then Horwich’s proposed explanation rests on a false premise and so must be rejected.

Though the above considerations give us, I take it, good reason not to accept Horwich’s proposal, they rest on the plausibility of certain claims about the nature of acceptance and belief and so aren’t entirely conclusive. But even if we set aside those considerations altogether and suppose that Horwich’s proposed explanation is correct for the specific case of tonk, that proposal just can’t be correct as a general explanation of the inadmissibility of problematic expressions. Even if we suppose that the tonk rules, by requiring us to accept every sentence of the language, do make it impossible for us to have any beliefs at all, it still turns out, as Boghossian (2003a: 244) explains, that there are other problematic words about which it’s clear that “no such extreme claim can be made”.
In particular, no such claim is available to explain inadmissibility in cases where the proposed addition would license some objectionable inferences but wouldn’t trivialize the entire language. Suppose, for example, that the slur Boche works roughly as Dummett (1973) claims:

\[
\frac{x \text{ is a German}}{x \text{ is a Boche}}\ \textit{Boche}\text{-I}
\qquad
\frac{x \text{ is a Boche}}{x \text{ is cruel}}\ \textit{Boche}\text{-E}
\]

Then we can, just by introducing Boche into our language, become required to accept ⌜x is cruel⌝ for every x for which we accept ⌜x is a German⌝. This seems problematic—indeed, it seems problematic in roughly the same way that the consequences of adopting tonk seem problematic. But the Boche rules don’t require us to accept every sentence of our language, and so it’s just not the case that inferring according to those rules would prevent thinkers from having beliefs that played a role in motivating action. So Horwich’s proposed explanation is, at best, incomplete: even if we suppose for the sake of argument that it’s correct for the case of tonk, other cases of inadmissibility will need a different explanation.

The need to revise Tolerance

Before we move on, it’s worth noting that the considerations brought out by Horwich and Belnap, though they don’t explain tonk’s inadmissibility, do show that Tolerance is, strictly speaking, false—it fails to account for the possibility that a set of rules might be jointly unfollowable and so might fail to give the words that appear in those rules any genuine inferential roles at all. The principle, though, will be true if we restrict it to cases where there’s no followability conflict. So we need to revise as follows:

Revised Tolerance.
For any language L and any word w not in L, if R is a standard set of rules for w relative to L such that the rules in R and the rules in L are jointly followable, then there’s a language LR, consisting of L with the rules in R (along with any new vocabulary that appears in those rules, including w itself) added, such that the rules in R in which w appears are meaning-conferring for w in LR.

Note that, in replacing Tolerance with Revised Tolerance, we haven’t in any way compromised our ability to explain how we manage always to pick out sets of rules of inference that can genuinely be added to our language. Our explanation, recall, was simply that any set of conventional rules of inference can genuinely be added to our language. And Revised Tolerance is consistent with this explanation. As we’ve seen, after all, the conventions of a natural language are going to have to be implicit, and implicit conventions plausibly are dispositional—what it is for a given set of implicit conventions to be in place is for the behavior of the members of the linguistic community to be governed by a particular sort of complex of dispositions. It’s not even possible, then, for a set of rules that we’ve adopted by implicit convention to fail to be jointly followable: for a set of rules to be our rules, our behavior must be governed by the disposition to follow them, and our behavior just can’t be governed by dispositions to follow rules that are literally impossible to follow.13 So it remains the case, even given our replacement of Tolerance with Revised Tolerance, that any set of rules of inference, if it’s a set on which we can genuinely settle by convention, can genuinely be added to our language.

At this point it seems hard to avoid the conclusion that, if we want a general explanation of the inadmissibility of problematic expressions, the failure of sets of rules to determine genuine inferential roles isn’t the right place to look.
So we’ll have to look more directly at what the circumstances are in which the rules that constitute a given inferential role do or do not confer epistemic justification on the inferences they sanction.

4 Proof-theoretic constraints on justification

Revised Tolerance and The Link together entail that any set of rules of inference is such that, if we reason in accordance with those rules as a matter of convention, we’re thereby epistemically justified in so reasoning. That is, rules we obey as a matter of convention justify the reasoning they sanction simply because they are the rules we obey as a matter of convention—we might describe such rules as self-justifying. So inferentialists must, if they want to say that we can’t become justified in inferring anything we like from anything else merely by reasoning in accordance with the tonk rules, deny that all the rules we obey as a matter of convention are self-justifying. But if such rules aren’t all self-justifying, then an explanation is needed of why some are justified and others aren’t.14 So the question is what sort of explanation is available to the inferentialist here.

As we’ve already seen, no truth-theoretic notions are available to inferentialists looking for an explanation of what justifies our rules—it’s trivial, after all, that all of our rules will be truth-preserving. That means we must look elsewhere, and the only place left is in the rules themselves. As Peter Milne (1994: 50) puts the point:

Prior’s example shows that some constraint must be placed on the permissible introduction and elimination rule(s) governing a logical constant. …Naturally, if the proof-theoretic approach is not to stray into properly semantic territory these constraints must have their basis in inferential considerations.

13 But see Carrie Jenkins and Daniel Nolan 2012 for an opposing view.
That is, consideration of the proof system itself must provide us with the resources to explain why certain sets of conventional rules are justified and others aren’t. (Though Milne’s concern here is with the logical constants in particular, the point applies more broadly.) So we need to consider how that might work. As far as I can tell, there are two options here: inferentialists must claim either that certain of our rules are justified by other rules or that certain of our rules are justified because of some structural features of the particular proof systems in which they’re contained. So we’ll consider these theses in turn.

Reduction procedures

First up is the claim that our rules, when they’re justified, are justified by other rules, and there’s an immediate problem with this thesis: if we’re to avoid regress and circularity, it can’t be the case that all of our (justified) rules are justified by other rules. There must also be some set Ψ of rules that are justified in some other way, with the rules in Ψ then serving as the justificatory foundation for the rest of the rules. The obvious question, then, is how the rules in Ψ—I’ll call them Ψ rules—are justified.

For authors working in the tradition of proof-theoretic semantics, the usual way of proceeding is to claim that the Ψ rules, but not the other conventional rules, are self-justifying.

14 Equivalently: an explanation is needed of why speakers are justified in inferring in accordance with certain of these rules but not others. It will be convenient here to speak in terms of a rule itself being justified rather than repeating ad nauseam the expression “rule according to which thinkers are justified in inferring”.
According to Dummett (1991: 245–246), for instance,

If…there is to be a general proof-theoretic procedure for justifying logical laws, uncontaminated by any ideas foreign to proof theory, there must be some logical laws that can be stipulated outright initially, without the need for justification, to serve as a base for the proof-theoretic justification of other laws.

(Dummett here, like Milne above, is concerned with the rules of logic in particular, whereas our concern is with all the I- and E-rules in our language. This narrower concern is common among authors working in the proof-theoretic tradition for a few reasons, not the least of which is that inferentialism is often thought to be more plausible as an approach to the logical constants than it is as an approach to nonlogical expressions. But our discussion here is about inferentialism as applied to the whole of language, which means we need to consider the justificatory status of all of our rules, not just the ones governing the logical constants. So we’ll be ignoring the tendency of authors in the proof-theoretic tradition to restrict their discussions to logic.)

Now, we haven’t yet specified which rules are in Ψ. But let’s suppose it’s right that these rules, whatever they are, are self-justifying. Then the claim is that these foundational rules serve to justify the rest of our rules. It’s not obvious, though, how that justification is supposed to work. It’s clear enough how some rules are justified in virtue of others: any rule that’s derived from others owes its justification at least in part to the justification of the rules from which it’s derived. But it’s also clear that this can’t be how the rules in question are justified—they’re basic rules, not derived ones. So if their justification has its source in other rules, it must be in some different way.
What’s usually suggested here is that the Ψ rules give sense to the expressions of our language, and other rules are justified only insofar as they respect the senses afforded by the Ψ rules. This idea is evident in the following suggestive (if somewhat cryptic) remark of Gerhard Gentzen’s, which is arguably the origin of all proof-theoretic semantics:

The introductions represent, as it were, the “definitions” of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions.… In eliminating a symbol, the formula, whose terminal symbol we are dealing with, may be used only “in the sense afforded it by the introduction of that symbol.” (1934/1964: 295)

For Gentzen, the rules that give sense—i.e., the Ψ rules—are the I-rules, and the E-rules are justified because they respect the sense given by the I-rules. But what’s important for the moment is just that there’s some subset of rules that serve as the justificatory foundation for the rest.

It’s obvious, given the scare quotes and figurative language, that Gentzen’s remark stands in need of some clarification—after all, the E-rules aren’t consequences of the I-rules in the usual sense. And Gentzen does try to clarify in a brief discussion of the rules for the material conditional. Those rules, for reference, are:

\[
\frac{\begin{array}{c}[A] \\ \vdots \\ B\end{array}}{A \supset B}\ \supset\text{-I}
\qquad
\frac{A \qquad A \supset B}{B}\ \supset\text{-E (MP)}
\]

For Gentzen (1934/1964: 295), since ⊃-I allows us to introduce a sentence of the form ⌜A ⊃ B⌝ just in case we already have a derivation of B from A, the sense of the conditional is given by the fact that “what A ⊃ B attests is just the existence of a derivation of B from A”. And if that’s the sense of the conditional, then we can know, given that ⌜A ⊃ B⌝ is in our premise set, that there’s a way to derive B from A. So, if A is also in our premise set, we can prove B. And this inference—from premises ⌜A ⊃ B⌝ and A to conclusion B—is exactly what’s sanctioned by ⊃-E.
That, for Gentzen, is how the justification of ⊃-E has its source in ⊃-I.

Dag Prawitz (1974, 2006) develops this idea. In his terminology, some inferences are canonical for a given expression. In particular, inferences sanctioned by whichever of a given expression’s rules are in Ψ—the expression’s I-rules, according to Prawitz and Gentzen’s view—count as canonical. Any canonical inference, then, is justified, and noncanonical inferences are justified just in case (setting aside some irrelevant complications) proofs involving them can be converted into proofs that involve only canonical inferences.

We can sketch out an example of how this works by again considering the material conditional. Suppose that one of the steps of a proof is a ⊃-E inference and that every step before that one is justified. What we need to determine, then, is just whether the ⊃-E inference is itself justified. The two premises of that inference are, of course, some A and ⌜A ⊃ B⌝. So what we have is something of the form

\[
\frac{\begin{array}{c}D \\ A\end{array} \qquad \begin{array}{c}E \\ A \supset B\end{array}}{B}
\]

where D is a proof of A and E is a proof of ⌜A ⊃ B⌝. Now, given that all the inferences in E are justified, we know it can be converted into a proof in which every inference is canonical, which means it can be converted into one in which the conclusion ⌜A ⊃ B⌝ is reached via a ⊃-I inference (since, on Prawitz’s view, only introductions are canonical). So what we have is

\[
\frac{\begin{array}{c}D \\ A\end{array} \qquad \dfrac{\begin{array}{c}[A] \\ \vdots \\ B\end{array}}{A \supset B}}{B}
\]

But now it’s apparent that, in order to prove ⌜A ⊃ B⌝ in the first place, we derive B from A. And since we already have a proof D of A, we can easily construct a proof of B that doesn’t require ⊃-E by appending to D our derivation of B from A, as follows:

\[
\begin{array}{c}D \\ A \\ \vdots \\ B\end{array}
\]

What we’ve shown, then, is that the ⊃-E inference under consideration isn’t needed after all: our proof of B can be converted into one that involves only canonical inferences. In effect, we’ve shown that our ⊃-E inference is justified by showing that it’s eliminable.
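This reduction step can also be sketched computationally. The encoding below is a toy illustration of my own—the tuple representation, the “given” black-box proofs, and the hypothetical one-premise rule A-to-B are all assumptions made for the example, not part of Prawitz’s formal apparatus:

```python
# Proofs as nested tuples (illustrative encoding only):
#   ("assume", A)        -- an open assumption of the sentence A
#   ("given", A)         -- a black-box closed proof of A (like D above)
#   ("rule", name, sub)  -- a one-premise primitive rule applied to sub
#   ("imp_i", A, body)   -- ⊃-I: discharge [A], conclude A ⊃ B
#   ("imp_e", pa, pimp)  -- ⊃-E: from proofs of A and A ⊃ B, conclude B

def substitute(proof, a, replacement):
    """Replace each open assumption of a with a closed proof of a.
    (Simplification: re-discharged assumptions of a are not tracked.)"""
    if proof == ("assume", a):
        return replacement
    tag = proof[0]
    if tag == "rule":
        return ("rule", proof[1], substitute(proof[2], a, replacement))
    if tag == "imp_i":
        return ("imp_i", proof[1], substitute(proof[2], a, replacement))
    if tag == "imp_e":
        return ("imp_e", substitute(proof[1], a, replacement),
                         substitute(proof[2], a, replacement))
    return proof  # "given" proofs and other assumptions are untouched

def reduce_detour(proof):
    """A ⊃-E whose major premise ends in ⊃-I is a detour: eliminate it by
    plugging the proof of A into the derivation of B from [A]."""
    if proof[0] == "imp_e" and proof[2][0] == "imp_i":
        proof_of_a = proof[1]
        _, a, body = proof[2]
        return substitute(body, a, proof_of_a)
    return proof

# Example: D is a given proof of A, and B is derived from [A] by the
# hypothetical primitive rule "A-to-B".
D = ("given", "A")
body = ("rule", "A-to-B", ("assume", "A"))   # derivation of B from [A]
detour = ("imp_e", D, ("imp_i", "A", body))  # proves B via ⊃-I then ⊃-E
assert reduce_detour(detour) == ("rule", "A-to-B", ("given", "A"))
# The reduced proof derives B directly from D: the ⊃-E step is eliminated.
```

The assertion at the end mirrors the prose argument: after reduction, the proof of B no longer contains any ⊃-E inference.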
This process of eliminating a proof’s noncanonical inferences is known as reduction, and the way to show that a rule sanctioning noncanonical inferences—i.e., a non-Ψ rule—is justified is to demonstrate that there’s a procedure that will allow us to reduce any proof involving those noncanonical inferences to one that doesn’t involve those inferences.15

15 The characterization given here of Prawitz’s approach to proof-theoretic justification is only the barest sketch—many complications, both technical and philosophical, have been left out of the discussion, not because they aren’t important but because they aren’t relevant to my concerns about the approach. For a thorough review of Prawitz’s work, see Heinrich Wansing 2015.

It’s clear enough why the tonk rules, on this account, aren’t justified. Tonk-I is, of course, self-justifying, but tonk-E isn’t justified at all: it’s simple to show that there’s no reduction procedure that will allow us to eliminate tonk-E inferences from our proofs. Suppose we have a proof of B via a tonk-E inference, where all the inferences leading up to that inference are justified. Then we have something of the form

\[
\begin{array}{c}D \\ A \\ \hline A\ \textit{tonk}\ B \\ \hline B\end{array}
\]

where D is a proof of some sentence A. But A may be a sentence of any form whatsoever, and so there’s no guarantee that we’ll prove B in the process of proving A. So tonk-E inferences aren’t in general eliminable, which means we’re not in general justified in inferring according to tonk-E.

This approach, then, has some promise. For one thing, it gives us the intuitively correct answer about tonk. And for another, it goes some way toward giving us a satisfying account of the justification of our rules: after all, demonstrating that any proof involving inferences sanctioned by non-Ψ rules can be reduced to one involving only inferences sanctioned by Ψ rules certainly is sufficient to show that the non-Ψ rules are epistemically kosher if the Ψ rules are. But what about the Ψ rules themselves?
We supposed above that they’re self-justifying. Now we need to examine that supposition.

The basic story, on the Gentzen–Prawitz approach, is that I-rules by themselves determine meaning—the E-rules don’t have a role to play here. That’s why the I-rules are in Ψ and the E-rules aren’t. That is, notwithstanding the presuppositions we’ve been working under, the only rules that can really be meaning-conferring are the I-rules. This approach, then, purports to give us a reason to reject Revised Tolerance: if the only meaning-conferring rules are I-rules, there just isn’t any way to add to our language a word tonk whose meaning-conferring rules are tonk-I and tonk-E, for the simple reason that tonk-E isn’t an I-rule for tonk.

As Florian Steinberger (2011: 622) points out, though, “Advocates of this ‘I-rules first’ view rarely supply arguments for their position”. And some argument is needed for the claim that I-rules alone determine meaning.

Gentzen does have something to say here. On his view, a (for instance) conditional sentence “attests” the existence of a derivation from its antecedent to its consequent, and the most natural interpretation of that claim is that ⌜A ⊃ B⌝ just says that there’s a derivation from A to B. Or, more generally, what any given sentence S says is that there’s a proof that, if we append to it an inference sanctioned by some I-rule, will give us S as a conclusion. But if that’s right, we’re led to the conclusion that what each sentence in our language is about is the existence of a proof of some particular form.

This conclusion is problematic, to say the least. For one thing, it forces us to accept a view that’s bizarre and possibly circular—can it really be the case that we’re only ever talking about the existence of proofs? But there’s an even more immediate problem.
If what a sentence is about is just the existence of some proof of a particular form, then it’s not clear why there should be any restriction at all on which rules get to be in our proof system. To put the point slightly differently: The question of whether some proof of a particular form exists has an answer only relative to a system of rules, and any system of rules whatsoever will generate answers to questions about what proofs there are in that system. So it’s mysterious why, on this view, we should disallow some systems of rules. What exactly is wrong with sentences about the existence of proofs in, for instance, a system that includes the tonk rules? A view on which sentences are merely about the existence of proofs just doesn’t have the resources to answer this question.

Advocates of the Gentzen–Prawitz approach, then, should reject the view that what sentences are about is the existence of proofs. But it’s unclear what else they might mean when they say that a sentence “attests” the existence of a proof.

Perhaps they mean only that the correct theory of meaning is verificationist: to understand a sentence is to know how to demonstrate its truth. (Verificationism does seem to be an underlying assumption of this approach.) But if that’s what they mean, two problems arise immediately. The first is bad enough—the constraints endorsed by Gentzen and Prawitz turn out to be hostage to verificationism, which is a highly contentious view. But the second is even worse—even if verificationism is correct, it doesn’t actually follow that privileging I-rules in the way Gentzen and Prawitz do is appropriate. After all, proof is a kind of verification, and as we’ve just seen, what counts as a proof (and so what counts as a verification) just depends on what our rules of inference are. So, as a general matter, it’s difficult to see how verificationism itself can be the basis of any kind of constraint.
More concretely: Even according to verificationism, E-rules can turn out to be self-justifying as long as we allow that every way of proving a sentence S in a language is relevant to giving sense to S in that language—the tonk-language, for instance, will sanction proofs of S that go via tonk-E, which means tonk-E will get to be in Ψ and so will be self-justifying. To rule this out, advocates of the Gentzen–Prawitz approach need to explain why sense is given only by the canonical means of demonstrating a sentence’s truth. And verificationism itself is of no help whatsoever here. Unless some further reason is given for insisting that only canonical rules—i.e., only I-rules—can be in Ψ, privileging these rules in the way Gentzen and Prawitz do is unmotivated.

If this is right, then we should allow that both I-rules and E-rules can be involved in giving sense to our expressions, which means both I-rules and E-rules can be in Ψ. But this could mean either of two things. We might say that an expression’s I-rules and E-rules both get to be in Ψ. Or we might say, alternatively, that we have a choice—we can let either the I-rules or the E-rules for a given expression be meaning-conferring and so be in Ψ, but whichever choice we make, the other set of rules will have to be justified by the ones in Ψ.

If we say that both sets of rules for a given expression get to be in Ψ, though, our explanation of the problem with tonk is no longer satisfactory: if tonk-I and tonk-E are both in Ψ, then, by hypothesis, they’re both self-justifying. So, since our goal here is to explain why the tonk rules aren’t justified, this approach fails to serve our purposes. What about the other alternative? Dummett sometimes appears to suggest an approach like this.
He discusses two ways of giving the meanings of sentences—“in terms of how we establish them as true” and “in terms of what is involved in accepting them as true”—and then explains that because either fully determines the meaning of a sentence,

    these two features of the use of a sentence cannot be assigned independently: given either, the other should follow. (1983/1993: 142)

We can think of the I-rules for a word as telling us how to establish the truth of certain sentences containing that word, and we can think of the E-rules for a word as telling us what’s involved in accepting the truth of certain sentences containing that word. So what Dummett is telling us here, in effect, is that the I-rules for a word are by themselves sufficient to determine the meaning of that word, and the same goes for the E-rules. Neil Tennant (2005: 627–628) makes this idea explicit:

    Any introduction rule, taken on its own, succeeds in conferring on its featured connective a precise logical sense. That sense in turn dictates what the corresponding elimination rule must be. Mutatis mutandis, any elimination rule, taken on its own, succeeds in conferring on its featured connective a precise logical sense. That sense in turn dictates what the corresponding introduction rule must be.

If something like this is right, then we can understand as follows why we might not be justified in reasoning in accordance with certain of our conventional rules of inference. When we try to add a word to our language whose rules aren’t harmonious, we fail to do so. Instead, we end up adding two words that are merely homophones: one whose meaning is that determined by the I-rules and another whose meaning is that determined by the E-rules. So any argument that uses one of these I-rules followed by one of these E-rules contains an equivocation. Suppose, for instance, that I add the tonk rules to my language.
Then, since tonk-I is sufficient to determine a meaning for tonk, I’ve got a legitimate word tonk1 whose meaning is that determined by tonk-I. And similarly, I’ve got a legitimate word tonk2 whose meaning is that determined by tonk-E. But tonk-I and tonk-E aren’t harmonious, which means tonk1 and tonk2 can’t have the same meaning. And that helps us to explain why the tonk rules, if we try to use them as two rules governing a single word, aren’t justified: When I apply tonk-I to some sentence A, what I end up with is some sentence ⌜A tonk1 B⌝. But the word that tonk-E allows me to eliminate is tonk2, not tonk1. So if I apply tonk-E to ⌜A tonk1 B⌝, I’m making the same sort of mistake I’d make if I inferred “My money is next to a river” from “My money is in a bank”. And it’s clear that patterns of reasoning that rely on mistakes of this sort aren’t justified.

The problem with this diagnosis, though, is that we’ve been given no reason to accept the claim on which it relies: that I-rules and E-rules (when they’re not harmonious) can’t both be meaning-conferring for a single word. We’ve already seen that the tonk rules, for instance, can be added to the language in such a way that they give tonk a single inferential role. So, if our constraint here is to be principled and not arbitrary, we need to be able to explain why it is that, if some I- and E-rules aren’t in harmony, the inferential role given by those rules will inevitably fail to be associated with any single meaning. And no reason has been given to think that inferentialists can provide any such explanation.
The more general point is that, if we want the tonk rules to turn out not to be justified, then we can’t accept both the thesis that there are fundamental rules that are self-justifying and the thesis that a rule is fundamental just in case it’s involved in giving sense to our expressions; since (as far as we’ve been able to tell) both tonk-I and tonk-E are involved in giving (a single) sense to tonk, those two theses together entail that both of the tonk rules are justified. So there are two options: provide a different criterion for fundamentality or abandon the thesis that any of our rules are self-justifying. But as far as I’m aware, no other principled criterion for fundamentality has ever been given. In fact, it’s hard to imagine what such a criterion could even appeal to if not a rule’s role in giving sense to our expressions, and so it appears that no such criterion is forthcoming. So inferentialists should give up the claim that there’s a set Ψ of self-justifying rules that serve as the justificatory foundation for the rest of our conventional rules of inference.

The only option left, then, if inferentialists are to avoid the conclusion that all of our conventional rules of inference are meaning-conferring and so self-justifying, is the thesis that rules are somehow justified by structural features of the proof system in which they’re contained. So that’s the thesis we need to consider next.

Harmony and conservative extension

The standard move here, for authors working in the tradition of proof-theoretic semantics, is to agree that there’s not any privileged set of rules that require no justification whatsoever—what justifies a word’s I- and E-rules, they insist, is the way those rules fit together. Milne’s (1994: 56) suggestion, for instance, is that, since “both its introduction and elimination rules are needed to confer meaning on a logical constant”, those rules “should be well matched in the way that justification demands”.
It’s important to notice that assertions of this kind are normative—they’re claims about the way a language ought to be. So we can ask about the source of this normativity. And this is a question that needs answering: since harmony, on the sort of view in question, is supposed to be the source of the epistemic justification of our rules, it had better not be the case that the harmony constraint is arbitrary. There needs to be some principled reason for disallowing rules that fail to fit together in the proper way, whatever way that is. What, then, can authors in this tradition say about the source of the harmony constraint?

It may be helpful here to step back and think about why, intuitively, certain words are unacceptable. We know that the trivializing connective tonk leads to disaster, but what’s the problem with words like flurg and Boche? The answer seems to be that their rules make proof too easy: we can, merely by introducing those rules into our language, prove sentences that would have been very difficult (in the case of flurg) or impossible (in the case of Boche) to prove beforehand. In particular, we can prove such sentences that were already in our language before we added the new vocabulary. For instance, we can use the Boche rules to prove (a formalization of) “All Germans are cruel”, as follows:

    [x is a German]  (1)
    x is a Boche                         by Boche-I
    x is cruel                           by Boche-E
    x is a German ⊃ x is cruel           by ⊃-I: (1)
    ∀x(x is a German ⊃ x is cruel)       by ∀-I

This result is, at best, very odd. Before we added Boche to our language, the sentence “All Germans are cruel” wasn’t provable in four steps from the rules of our language—it was a (false) empirical claim. There seems to be something wrong with vocabulary that changes the language in this way. We can formulate this idea more precisely.
Start with the following definition: Let L be some language, and let L′ be a language consisting of L plus some new vocabulary, such that any sentence of L that can be derived from premises in L via the rules of L′ can also be derived from those premises via the rules of L alone. Then L′ is a conservative extension of L. The basic idea, then, is that the way to show that new vocabulary is legitimate is to show that it conservatively extends our language.

The thought that constraints on admissibility have something to do with conservative extension is certainly popular among authors in the proof-theoretic tradition. According to Dummett (1991: 290), for instance, “the point of harmony” is to ensure that “for every logical constant, its addition to the fragment of the language containing only the other logical constants should produce a conservative extension of that fragment”. Conservative extension is also important for Steinberger (2011: 619)—he characterizes harmony as a way of meeting a requirement he calls the “principle of innocence”, which says that “it should not be possible, solely by engaging in deductive logical reasoning, to discover hitherto unknown (atomic) truths that we would have been incapable of discovering independently of logic”. Or, in other words: our logical vocabulary ought to conservatively extend the nonlogical portion of our language. And Robert Brandom (2000: 68–69), too, takes it to be the case that “it must be a criterion of adequacy for introducing logical vocabulary that no new inferences involving only the old vocabulary be made appropriate thereby”.16

Note that nothing I’ve said to this point can tell us why harmony is the source of the epistemic justification of our rules—that question still needs answering.
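For a finite toy system, the definition can be checked mechanically. The sketch below is my illustration (the rule encoding and the `conservative` helper are invented for the example, with the Boche rules as the test case): it closes a premise set under a list of rules and asks whether any old-vocabulary sentence becomes newly derivable.

```python
def closure(premises, rules):
    """Forward-chain: repeatedly apply rules, given as
    (premise_tuple, conclusion) pairs, until nothing new is derivable."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for prems, concl in rules:
            if concl not in derived and all(p in derived for p in prems):
                derived.add(concl)
                changed = True
    return derived

def conservative(premises, old_rules, new_rules, old_sentences):
    """L' conservatively extends L iff the new rules yield no
    old-vocabulary conclusions beyond those L already yields."""
    before = closure(premises, old_rules) & old_sentences
    after = closure(premises, old_rules + new_rules) & old_sentences
    return after <= before

# The Boche rules from the text, instantiated for a single individual f:
boche_rules = [(("German(f)",), "Boche(f)"),   # Boche-I
               (("Boche(f)",), "Cruel(f)")]    # Boche-E
old_vocab = {"German(f)", "Cruel(f)"}  # sentences of the old language

# "Cruel(f)" becomes derivable from "German(f)" only via the new rules:
print(conservative({"German(f)"}, [], boche_rules, old_vocab))  # False
```

The check returns False for the Boche extension because an old-vocabulary sentence (“f is cruel”) becomes derivable from old-vocabulary premises only once the new rules are in place.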
But if it’s right that proposed harmony constraints are to be understood as attempts to ensure that extensions of our language are conservative, then we’ve at least made some progress: we’ve clarified that the actual question that needs answering is why conservativeness is the source of the epistemic justification of our rules. So what can be said about that question?

16. In addition, Hartry Field (1980) introduces a variant notion of conservativeness according to which (setting aside some complications) new rules that allow us to prove sentences of the old language that weren’t provable before can still conservatively extend the language as long as those newly provable sentences would not be provable were the domains of their quantifiers restricted to objects other than those introduced by the new rules. Crispin Wright (1999: 15–16), who deploys a version of Field’s notion in a discussion of the putative analyticity of Hume’s Principle (construed as a contextual definition of number), provides a more intuitive characterization: an extension is conservative when it “results in no new theorems about the old ontology”. Consider: Hume’s Principle, when added to second-order logic, allows us to prove that the natural numbers exist, which means it allows us to prove that there are infinitely many objects. And “There are infinitely many objects” isn’t provable in second-order logic alone. But that doesn’t mean Hume’s Principle fails to conservatively extend (in Field’s sense) second-order logic, for it still may be the case that the restricted-quantifier sentence “There are infinitely many non-number objects” is not provable in second-order logic with Hume’s Principle added.

The obvious place to start here is to consider what authors who endorse conservativeness as a constraint on admissibility tend to say in defense of that constraint.
Belnap (1962) was the first to propose such a constraint, and his reason for doing so was to avoid certain followability conflicts and thereby to ensure that the rules governing new words successfully determine inferential roles for those words. But as we saw in §3, this isn’t a good reason: because of the possibility of making accommodating adjustments, followability conflicts can’t ground constraints on admissibility in the way Belnap requires. So we should move on and consider other reasons one might have for ruling out nonconservative extensions.

One thing sometimes said by authors who endorse conservativeness as a constraint is that nonconservative extensions of a language are to be avoided because they would change the meanings of expressions already in the language. Steinberger (2011: 619–620), for instance, takes his conservativeness requirement to be equivalent to the maxim that “our principles of logic ought not to tinker with the meanings of non-logical expressions”. The general worry seems to be that, unless additions to the language are conservative, the meanings of expressions will be subject to a problematic sort of instability: it will be possible to change an expression’s meaning just by adding words to our language.

So: would the admissibility of nonconservative vocabulary entail this sort of instability? On some views of meaning, it wouldn’t. According to Cesare Cozzo’s (1994) theory of meaning based on “immediate argumental role”, for instance, the meaning of an expression is just given by the meaning-conferring rules for the words that appear in that expression. And we can always introduce new words into our language without changing which rules are meaning-conferring for our old words.17 So meanings, on Cozzo’s view, remain stable when new vocabulary is added—regardless of whether that new vocabulary extends the language conservatively.
17. It’s true that, when a new word is introduced into the language, words of the old vocabulary will inevitably appear in the rules governing that new word. But if we say—as seems reasonable—that the new word presupposes the old words, then the newly introduced rules won’t count as meaning-conferring for the old words. (See the discussion of Cozzo in Chapter 3 above.)

Even on a view like Cozzo’s, though, it’s clear that there’s something important that doesn’t remain stable when nonconservative vocabulary is added: truth conditions (or, more particularly, the relationship between observation and the truth values of our sentences). As we’ve seen, for instance, the addition of Boche to our language would transform “All Germans are cruel” from an empirical falsehood into (something like) a logical truth, and the addition of tonk to our language would transform every sentence of the language into (something like) a logical truth. So we can ask: is there a problem with this sort of instability?

There will, of course, usually be practical reasons to avoid expressions that change the truth conditions of sentences already in the language. (One such reason is to ensure that speakers who have adopted new expressions will be able to communicate easily with those who haven’t. Another, as Warren (2015c: 22) notes, is to ensure that new expressions will “preserve[] our old reasons for accepting our old sentences”, so that we won’t need to “start over epistemically”.18) But again, practical reasons aren’t what we need—we already know that we have practical reasons to avoid tonk and Boche. Our purpose here is to find a strategy for explaining why conservativeness is a source of epistemic justification, and so the question we need to answer isn’t whether instability of truth conditions is practically bad but whether it results in rules that are epistemically unjustified. And we’ve been given no reason to think it does.
Admittedly, it will be the case that a sentence S whose truth conditions are altered by a nonconservative extension has a different inferential role in the new language than it had in the old language: acceptance of S sanctions (or is sanctioned by) acceptance of different sentences in the new language than in the old. But that just means the new language is different from the old one. It doesn’t provide us with any motivation to abandon Revised Tolerance, since we haven’t been given any reason to conclude that the new language isn’t really a possible language, nor does it provide us with any motivation to abandon The Link, since we haven’t been given any reason to conclude that the rules of this new language aren’t justified. Stability, then, just doesn’t seem to have anything to do with the epistemic justification of our rules, which means that, if conservativeness is the source of that justification, it must be for some reason unrelated to stability. We need to keep exploring.

18. Warren, it should be noted, isn’t directly concerned with truth conditions—he’s trying to show that there are practical reasons to avoid expressions that change our language in such a way that we can’t translate homophonically from the old language into the new one. But it’s evident that Warren’s constraint is closely related to the one under discussion here.

The next defense of conservativeness I want to consider, then, is Brandom’s. For him, the conservativeness requirement is motivated in part by his expressive account of logic, according to which the logical constants’ distinctive role in our language is to allow explicit object-language expression of our inferential commitments—logic’s job, according to Brandom (2000: 30), is to “help us make explicit (and hence available for criticism and transformation) the inferential commitments that govern the use of all our vocabulary, and hence articulate the contents of all our concepts”.
The conditional, for instance, is to be understood on this sort of view as a piece of vocabulary that allows us to express, without ascending to a metalanguage, commitment to a particular material inference: we can use a conditional locution to “say, as part of the content of a claim (something that can serve as a premise and conclusion in inference), that a certain inference is acceptable” (2000: 60). Brandom (2000: 68–69) argues by appeal to this account of logical vocabulary that a conservativeness constraint is needed not only to avoid horrible consequences but also because otherwise logical vocabulary cannot perform its expressive function.

    Unless the introduction and elimination rules are inferentially conservative, the introduction of the new vocabulary licenses new material inferences…. So if logical vocabulary is to play its distinctive expressive role of making explicit the original material inferences…it must be a criterion of adequacy for introducing logical vocabulary that no new inferences involving only the old vocabulary be made appropriate thereby.

The point here is simply that, if the role of logical vocabulary is to allow us to explicitly express our commitment to particular material inferences, it had better not be the case that a new logical constant licenses material inferences other than those. If it did, it just wouldn’t be doing its job as a logical constant.

Brandom’s approach, though, isn’t suitable for our purposes, for what he’s provided is a necessary condition for logicality, not a necessary condition for epistemic justification. Even on the supposition that his expressive account of logical vocabulary is correct, all that can be established is that tonk isn’t a piece of logical vocabulary. The further conclusion we’re looking for—that tonk’s I- and E-rules aren’t justified—doesn’t follow from that thesis.
To get the desired conclusion, we’d need a stronger constraint according to which conservativeness is necessary for the justification even of nonlogical rules, and Brandom hasn’t given us any way to motivate such a constraint. In fact, Brandom explicitly rejects “the suggestion…that inferential conservatism is a necessary condition of a ‘harmonious’ concept” (2000: 73), claiming to the contrary that, “outside of logic”, it’s “no bad thing” for an expression to nonconservatively extend the language (2000: 71). So Brandom’s account doesn’t rule out the possibility that tonk-I and tonk-E are justified inference rules that confer meaning on a legitimate (though nonlogical) piece of vocabulary.

To be clear: what I’ve said here doesn’t constitute an objection to Brandom’s account.19 The point is just that the account, whatever its merits, doesn’t have the right sort of structure to explain why harmony—construed in terms of conservative extension—is the source of the epistemic justification of our rules (and thereby to give inferentialists a principled way of rejecting either Revised Tolerance or The Link). And since our goal in the present discussion is to explore inferentialists’ prospects for giving such an explanation, we should set Brandom’s approach aside and move on.

Which brings us to the last strategy we’ll consider here: that developed by Dummett. Though Dummett’s approach does in the end connect to our examination of conservative extension, it does so in a way that’s not particularly straightforward, and so we’ll need to discuss the approach at some length.

19. That’s not to say I take his approach to be completely free of problems. Take, for instance, his reason for allowing nonlogical vocabulary to nonconservatively extend the language: the nonconservativeness of an expression, for Brandom (2000: 71), “just shows that it has a substantive content”, the idea being that the introduction of new substantive commitments into our language is a normal and unproblematic part of “conceptual progress in science”. This idea, surprisingly, has been endorsed both by Steinberger (2011) and by Schechter and Enoch, who claim that “possessing [nonconservative concepts] allows us to develop increasingly powerful sets of beliefs about the world” (2006: 694). But we know from the work of Frank Ramsey (1931) and Carnap (1958/1975) that it’s always possible to introduce new nonlogical expressions in a way that keeps their meanings separate from any substantive theoretical commitments. It’s not clear to me why Brandom doesn’t consider this.

Conservative extension and holism

As I noted in §1, Dummett (1975/1978b: 218) discusses but doesn’t endorse the tempting idea that an inferentialist theory of meaning might allow us to account for the justification of any meaning-constituting inferences we like on the grounds that “we surely have the right to make our statements mean whatever we choose that they shall mean”. The reason Dummett gives for rejecting this idea is that it “can, ultimately, be supported only by the adoption of a holistic view of language” according to which the meaning of any statement “simply consists in the place which it occupies in the complicated network which constitutes the totality of our linguistic practices” (1975/1978b: 218). To clarify: for Dummett, the claim that there aren’t admissibility constraints is compatible only with a holistic conception of language, and so, since holism must be rejected, that claim must be rejected as well.

Our first question, then, is what this purportedly unacceptable holism amounts to. This question isn’t trivial, for Dummett is often less than clear about what exactly he takes holism to be—Susan Haack (1982) points out, for instance, that, in different places in his 1973/1978a, he characterizes holism in three nonequivalent ways.
But he’s fairly consistent (see, e.g., his 1973/1978a: 302, his 1975/1978b: 219, and his 1991: 221–225) in his insistence that a holistic conception of language is one on which we’re committed to the view that a speaker must understand an entire language in order to understand any sentence of that language. So I’ll suppose that the sort of holism Dummett is concerned to reject is at least roughly equivalent to this view. Call this sort of holistic view understanding holism.

Now, it’s clear enough why one might find understanding holism unacceptable: given a conception of language on which one can’t understand any sentence of a language without understanding the entire language, it’s hard to see how to account for some central facts about how language functions, such as the fact that languages can be learned incrementally and the fact that speakers who don’t have exactly the same vocabulary manage to successfully communicate with each other.

This isn’t to say that a theorist friendly to understanding holism couldn’t meet this challenge—one might try to explain the second fact, for instance, by allowing that the same sentence means different things to speakers with different vocabularies and then insisting that similarity rather than identity of meaning is what underlies successful communication. But we need not wade any further into these issues here. We can just assume for the sake of argument that understanding holism really is unacceptable and move on to our next question: would we be committed to that view if we claimed there were no constraints on admissibility?

On certain views of meaning, the answer is that we wouldn’t. Take, for instance, the Cozzo-style view described above, on which one can understand an expression just by knowing the meaning-conferring rules for the words in that expression.
On such a conception, as long as we know the meaning-conferring rules for all the words in a given sentence, we can understand that sentence—we need not know the meaning-conferring rules for all the words in the language.20 And this is true regardless of whether there are constraints on those meaning-conferring rules’ admissibility. So, if the absence of admissibility constraints really commits us to understanding holism, it must be because there’s independent reason to reject any view of meaning like Cozzo’s.

And Dummett does take there to be such reason. In fact, he seems to find it incredible that anyone might think otherwise: “No one”, he insists, “could think that the grasp of the meaning of an arbitrary sentence consisted solely in a knowledge of the ways in which it might figure in an inference” (2000: 251–252). He’s committed instead to a conception of meaning on which to understand a (declarative) sentence is to know in what circumstances an assertive utterance of that sentence is correct (i.e., in what circumstances that sentence is true):

    The content of an assertion is taken as determined by the condition for it to be correct, and this in turn is identified with the condition for the sentence to be true: just as we know what bet has been made when we know when the bettor wins and when he loses, so we know what has been asserted when we know in what case the assertion is correct. (1991: 166)

Dummett, in other words, is committed to the view that understanding is knowledge of truth conditions.

20. As Cozzo (1994: 103) explains, his view is one on which, “in order to understand an expression E…it is not necessary to understand all the words of” E’s language, “but only the words presupposed by the words occurring in E”.
And that means he’s committed to the view that understanding holism is equivalent to what we might call truth condition holism, according to which a speaker must understand an entire language in order to know in what circumstances any sentence of that language is true. (Note that this is a substantive view: on a Cozzo-style conception of language, for instance, understanding holism and truth condition holism come apart.)

Dummett’s commitment to the equivalence of these two sorts of holism is, I take it, the basis of his claim that the absence of admissibility constraints would lead to understanding holism, since it’s straightforward to show that the absence of admissibility constraints would indeed commit us to truth condition holism. To see how this works, consider (for instance) the sentence “All Germans are cruel”. That sentence, in our language, is true in just those circumstances in which every person of German nationality exhibits the property of cruelty. Suppose, though, that there are no constraints on admissibility, so that we can unproblematically add any word we like to our language, and suppose that we choose to add Boche. Then “All Germans are cruel” turns out to be provable, no matter what the circumstances. Now, since the addition of Boche is (we’re supposing) unproblematic, it turns out that, given any adequate theory of truth for our new language, the sentence “All Germans are cruel” is true in that new language no matter what the world is like. And similarly, if we add tonk to the language, then every sentence of the language turns out to be provable, and so every sentence of the language—regardless of what its truth conditions were before tonk was added—is true no matter the circumstances. The point is just that the circumstances in which a sentence of the language is true are going to be sensitive to the presence or absence in the language of disruptive words like tonk and Boche.
So, if such words are admissible, it won’t be possible to know a particular sentence’s truth conditions without knowing the entire language, for knowing the entire language will be the only way to know whether the language contains any such disruptive words. The absence of admissibility constraints, then, does commit us to truth condition holism, which means that, if Dummett is right to reject that view, then he’s also right to insist on admissibility constraints. Remember, though, that his reason for rejecting truth condition holism is just that he takes it to be equivalent to understanding holism, which (we’re supposing) he’s right to reject. And his reason for taking those two sorts of holism to be equivalent is that he’s committed to an understanding of language on which one understands a (declarative) sentence just when one knows in what circumstances it’s true, which means that his insistence on admissibility constraints is reasonable only if this understanding of language is well motivated. So we need to examine his reasons for understanding language in this way. Though Dummett does appear to have an argument for his understanding of language, he never presents it in any form that makes its structure transparent, and so it’s not always clear what’s supposed to follow from what. But as I interpret him, his reasoning can be sketched as follows. The primary use to which language is put is the transmission of information from speaker to listener, which is accomplished via assertion: the “practice of speaking the language” involves “many linguistic modes of which assertion is the most central” (1991: 81). When the speaker utters a sentence assertively, the listener grasps the relevant information—i.e., understands what’s been asserted—just when she knows in what circumstances the assertion counts as correct (i.e., true).
Now, since the primary use of language is assertion, it’s of central importance, for understanding a language, that one know in what circumstances assertive utterances of sentences of the language are correct, which is to say that it’s of central importance, for understanding a language, that one know in what circumstances (declarative) sentences of the language are true. And since “the meaning of an expression is the content of that knowledge possessed by the speakers which constitutes their understanding of it” (1991: 83), it’s of central importance, for knowing the meanings of the expressions of a language, that one know in what circumstances (declarative) sentences of the language are true: truth is “the central notion” of any adequate account of meaning (1991: 163). We can conclude, then, that a speaker understands a (declarative) sentence just when she knows in what circumstances it’s true. So: is this a good argument? Consider first the claim that the notion of truth will be central in any adequate account of meaning, since the primary use of language is the transmission of information via assertion. This claim is certainly not universally accepted, as Dummett knows—see, e.g., his discussion of Wittgenstein’s conception on which linguistic meaning is to be explained “in some quite different way which [does] not make use of the concept of truth at all” (1991: 163). But we can take the claim for granted in this context since inferentialists are explicitly interested in that part of the language used for making truth-apt claims. But even if we accept that any adequate account of meaning will have truth as its central notion, it doesn’t follow that the meanings of all the declarative sentences of the language must be truth-conditional.
In particular, the centrality of the notion of truth doesn’t by itself give us any reason to reject what John Burgess (1984/2008: 267–268) calls a “dualist” view of language, on which certain sentences of a language—the “primary” sentences—are to be understood truth-conditionally, while other “secondary” sentences are to be understood “merely as intra-linguistic instruments for deducing primary sentences”. As long as we take seriously the idea that the primary sentences are the ones we’re really concerned about, with the secondary sentences merely serving to allow us to move among primary sentences, it’s clear that this view of language is still one that has truth as its central notion. The question we need to answer, then, is whether there’s independent reason to reject such a view. I want to focus here on a particular version of the view in question, one modeled on Quine’s celebrated picture of total science as a “field of force whose boundary conditions are experience” (1951: 39). On the sort of view I have in mind, the primary sentences are observation sentences, and other declarative sentences are secondary. And because our inferentialist framework covers language entry and departure, we can make sense of this view within that framework: for a sentence S to be an observation sentence is for the rules of the language to require a speaker both to accept S whenever she’s having an experience of a certain sort and to take herself to be having an experience of that sort whenever she accepts S. (What it is, exactly, to take oneself to be having an experience of a particular sort will be addressed below.) For a concrete example, consider the sentence “I’m having a red experience”—plausibly, to understand that sentence is to be disposed, as a matter of convention, to accept that sentence when one is having a red experience and to take oneself to be having a red experience when one comes (perhaps by deduction) to accept that sentence.
On any adequate theory of truth, then, an observation sentence S will count as true in just those circumstances in which the speaker is having an experience with character c, where experiences with character c are the experiences that figure in the rules governing S. So there can be no substantial gap between understanding S and being able to pick out the circumstances in which it’s true. To understand S, then, is to know its truth conditions, which is why S counts as primary. Dummett (1975/1978b: 218–219) is of course aware of the possibility of such a view:

Frequently…a holistic view is modified to the extent of admitting a class of observation statements which can be regarded as more or less directly registering our immediate experience, and hence as each carrying a determinate individual content. …To these peripheral sentences, meanings may be ascribed in a more or less straightforward manner, in terms of the observational stimuli which prompt assent to and dissent from them. No comparable model of meaning is available for the sentences which lie further towards the interior of the structure: an understanding of them consists solely in a grasp of their place in the structure as a whole and their interaction with its other constituent sentences.

So what reason does he have for rejecting this view? As Burgess (1984/2008: 273) interprets him, his only real reason, in the end, is that he “values representationality highly for its own sake” and so is hostile to any view on which some sentences’ meanings are to be understood merely in terms of their use in inference. This seems accurate—Dummett, in scattered places in his 1991, alludes to the “risk” that a holistic view might collapse into “formalism” or “instrumentalism” as though such a result would obviously be unpalatable.
The only motivation he offers, as far as I can tell, is the claim that the appeal of a view of meaning like the one I’ve been describing is grounded in an undue “pessimism” about the feasibility of devising a better (i.e., representational) account, since, on the sort of view I’ve been describing, “we are inexorably condemned to speak…without knowing exactly what we mean, because the very conception of knowing what we mean, in this sense, is…a mere illusion” (1991: 241–242). But that claim rests on the presupposition that a conception on which we know what we mean by knowing a sentence’s truth conditions is to be preferred to a view on which we know what we mean just by knowing how a sentence is to be used. That is, the claim presupposes that a representational understanding of language is to be preferred to an instrumentalist understanding and so can’t do anything to motivate that preference. If Dummett can’t independently motivate his anti-instrumentalism—and it doesn’t seem like he can—then what he’s offering is no more than an aesthetic inclination. So we should conclude that, even given a conception of language on which any adequate account of meaning will have truth as its central notion, we need not accept the thesis that a speaker understands a (declarative) sentence just when she knows the circumstances in which it’s true. And that thesis, recall, is the basis of Dummett’s insistence on admissibility constraints. So Dummett hasn’t established that such constraints are really required, which means he hasn’t established that an absence of admissibility constraints would commit us to an unacceptable holism. He has, though, laid the groundwork for a different, better argument for admissibility constraints. Consider: The meanings of our words, and so of our sentences, are in a certain sense arbitrary, or conventional.
We can make choices about what our words and sentences will mean and so can, in a way, decide what the inferential relationships between them will be. And this is true, to a degree, even if there are constraints on admissibility. But there’s no such arbitrariness in the relationships between experiences. It isn’t possible, for instance, to be having both a red-all-over experience and a green-all-over experience, or to be having a scarlet experience without thereby having a red experience. In short, there’s something like a logic of experience, and it’s not up to us to decide what that logic is to be. Furthermore, this is a feature of experience of which speakers are aware, at least in the minimal sense that we can discriminate between experiences of different sorts. We’re able, for example, to tell the difference between having a red-all-over experience and having a green-all-over experience. And that means that, if a speaker is having a red-all-over experience, then she’s aware that the sense experience she’s having is not a green-all-over experience. (This formulation may be misleading, so I want to clarify that, when I say that the speaker here is aware that the experience she’s having isn’t a green-all-over experience, I don’t mean to be attributing to her any sort of propositional attitude. I’m attributing to her only an ability to discriminate the experiential state she’s in from green-all-over experiential states, so that she can be disposed to do different things when she’s having a red-all-over experience than she does when she’s having a green-all-over experience.) When a speaker is in this sort of state of awareness of a particular kind of experience, we can say that she’s taking herself to be having that kind of experience.
These takings, then, are themselves bound by the logic of experience: it’s not possible, for instance, for a speaker who takes herself to be having a red-all-over experience to also take herself to be having a green-all-over experience, since such a speaker is going to be aware that the sense experience she’s having is not a green-all-over experience. Now, if language departure rules issue in these sorts of takings, then, in the case of any language that includes observation sentences, the acceptability of our rules will be constrained by the logic of experience. Dummett (1975/1978b: 220–221) explains:

One aspect of the use of observation statements lies in the propensities we have acquired to assent to and dissent from them under certain types of stimuli; another lies in the possibility of deducing them by means of non-observational statements, including highly theoretical ones. If the linguistic system as a whole is to be coherent, there must be harmony between these two aspects: it must not be possible to deduce observation statements from which the perceptual stimuli require dissent.

The idea is just that, if the rules of the language require a speaker in a particular experiential state to reason her way to taking herself to be having an experience incompatible with that state—if, say, they require her, on the basis of having a red-all-over experience, to take herself to be having a green-all-over experience—then those rules are unacceptable.
Here’s another (subtler) example: if the rules require a speaker, on the basis of having a metallic taste experience, to take herself to be having a red-all-over experience, then there are possible experiential states—i.e., ones in which she’s having a metallic taste experience while not having a red-all-over visual experience—such that, if she’s in one of those states, the rules require her to take herself to be having an experience incompatible with the state she’s in, and so those rules, by our constraint, are unacceptable. (This latter claim may seem questionable. In fact, it’s unproblematic, for reasons I’ll explain below.) At this point, we’ve finally circled back around to the idea of conservative extension, for our constraint here, as Dummett (1975/1978b: 221) notes, is really “a demand that, in a certain sense, the language as a whole be a conservative extension of that fragment of the language containing only observation statements”. More precisely: Let A be an observation sentence whose corresponding experiences are those with character a, and let B be an observation sentence whose corresponding experiences are those with character b. Then our requirement is that the rules of the language must be such that, if it’s possible to deduce B from A, then the logic of experience dictates that it’s not possible to take oneself to be having an experience with character a without also taking oneself to be having an experience with character b. Suppose, for instance, that in some language “I’m having a red experience” and “I’m having a scarlet experience” are observation sentences connected to the same experiences to which those sentences are connected in our actual language. Then it’s acceptable for the rules of that language to require us to infer “I’m having a red experience” from “I’m having a scarlet experience”, but if those rules require us to infer “I’m having a scarlet experience” from “I’m having a red experience”, they’re unacceptable.
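The requirement just stated can be modeled as a simple check. (Again a Python sketch under illustrative assumptions of my own: experiential characters are strings, the logic of experience is a hand-coded entailment relation, and the language’s deductions are pairs of observation sentences.)

```python
# Sketch of the conservativeness requirement over observation sentences.
# 'ENTAILS' encodes the (fixed, non-conventional) logic of experience;
# 'deducible' encodes the (conventional) rules of the language.

# Logic of experience: having an experience with the first character
# necessarily involves having one with the second.
ENTAILS = {("scarlet", "red")}

def experience_entails(a, b):
    return a == b or (a, b) in ENTAILS

def conservative(deducible, character):
    """Check that every deduction A |- B between observation sentences
    is matched by an entailment between their experiential characters."""
    return all(experience_entails(character[a_sent], character[b_sent])
               for (a_sent, b_sent) in deducible)

character = {"I'm having a scarlet experience": "scarlet",
             "I'm having a red experience": "red"}

# Acceptable: infer the 'red' sentence from the 'scarlet' sentence.
ok_rules = {("I'm having a scarlet experience", "I'm having a red experience")}
# Unacceptable: infer the 'scarlet' sentence from the 'red' sentence.
bad_rules = {("I'm having a red experience", "I'm having a scarlet experience")}

print(conservative(ok_rules, character))   # True
print(conservative(bad_rules, character))  # False
```

The design mirrors the text: the entailment relation is not up to us, while the deduction relation is, and conservativeness just says the latter must never outrun the former.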
It’s simple to explain why this constraint rules out (for instance) tonk. If the tonk rules were added to our language, we’d be required to accept all the language’s observation sentences and so would be required to take ourselves to be having all the corresponding experiences. And since those experiences are of course not all compatible with each other, it’s not possible for any speaker, regardless of her experiential state, to take herself to be having all those experiences. So adding the tonk rules to our language would render it unacceptable by our constraint. Now, recall that our purpose, in discussing conservativeness constraints, has been to explore inferentialist-friendly strategies for explaining why harmony, construed in terms of conservative extension, is the source of the epistemic justification of our rules. So: have we found a workable strategy here? Or, to put the question another way: is our reason for requiring the rules of our language to meet the conservativeness constraint that rules not meeting the constraint would be epistemically unjustified? This way of understanding the motivation for the conservativeness constraint is tempting, to be sure, for it’s natural to understand conservativeness, in this context, as providing a guarantee of soundness. An observation sentence counts as true just when the speaker is having an experience of the right sort, and so we can think of conservativeness as a way of ensuring that speakers won’t be able to deduce observation sentences without having the corresponding experiences—i.e., won’t be able to deduce false observation sentences. This seems to be Burgess’s (1984/2008: 272) understanding: he characterizes conservativeness as a way of guaranteeing that our rules will preserve correctness “in all instances where the notion of correctness is applicable, that is, in all instances where the premises and conclusion are all primary sentences”.
And Dummett, too, seems to be thinking of conservativeness in terms of its ability to provide epistemic justification; just before he presents the conservativeness constraint, he asks the following question:

With what right do we feel an assurance that the observation statements deduced with the help of complex theories, mathematical, scientific and otherwise, embedded in the interior of the total linguistic structure, are true, when these observation statements are interpreted in terms of their stimulus meanings? (1975/1978b: 220)

Nevertheless, I don’t think this is the right way to think about our constraint. The question of whether our rules are epistemically justified, after all, is just the question of whether speakers are justified in inferring according to those rules. But that question is irrelevant in this context: it’s impossible for speakers to infer according to rules that fail to meet our conservativeness constraint. To put the point slightly differently, a guarantee of soundness can be a source of justification because, given soundness, we can be certain that our rules of inference won’t lead us astray. But here, there’s no actual danger of being led astray. After all, we’re aware of our experiential states, and so, since our takings are bound by the logic of experience, a speaker can’t possibly be aware that she’s in a particular experiential state and at the same time take herself to be having an experience incompatible with that state—regardless of what the rules direct her to do. A speaker’s justification for taking herself to be having a particular sort of experience just isn’t in question here, which means that the motivation for our conservativeness constraint must lie elsewhere. But where? Given everything I’ve said to this point, the answer is clear: if our rules didn’t meet that constraint, they’d require speakers to take themselves to be having incompatible sense experiences, and this is something we can’t possibly do.
In other words, rules not meeting our conservativeness constraint are unfollowable. And we saw in §3 that unfollowable rules fail to determine genuine inferential roles for the expressions of the language. That’s why rules that don’t meet our conservativeness constraint are unacceptable: it can’t possibly be a convention in our linguistic community to follow rules that are unfollowable. Again, the conventions in question are implicit, which means, plausibly, that what it is for a linguistic convention to be in place is for the linguistic behavior of the members of the community to be governed by some particular sort of complex of dispositions. And as we’ve seen, there’s no sense in the claim that we’re disposed to follow rules that are literally impossible to follow—to make such a claim can only be to misdescribe the situation. (The fact that conventions are dispositional can also be used to demonstrate the truth of my claim that the rules of our language can’t require a speaker, on the basis of having a metallic taste experience, to take herself to be having a red-all-over visual experience. There’s no sense, after all, in the claim that a speaker is disposed to follow a rule requiring her, in every case in which she’s having a metallic taste experience, to take herself to be having a red-all-over visual experience—any case in which she has a metallic taste experience while not having a red-all-over visual experience will be a case in which she doesn’t follow that rule.) If all that’s right, then our constraint has nothing whatsoever to do with epistemic justification. And that means it can’t give us a way for inferentialists to reject The Link. Instead, it gives us an explanation of why certain sets of rules can’t genuinely be added to our language, and so we might be tempted to think it gives us a way to reject Revised Tolerance.
Recall, though, that we also saw in §3 that followability conflicts can be resolved by means of accommodating adjustments, which means that tonk’s inadmissibility doesn’t follow from the fact that the addition of the tonk rules to our language would cause a followability conflict—all that follows is that we can admit tonk only if we make accommodating adjustments elsewhere. And it’s possible to make the needed adjustments, though those adjustments will in this case be quite radical. What’s needed is a set of rules that, even if the tonk rules were added, wouldn’t require us to take ourselves to be having incompatible experiences. So, since the tonk rules would allow us to prove every sentence of the language, we’d need to remove from the language any rules that tell us to take ourselves to be having a particular sort of experience on the basis of accepting a particular sentence. That is, in order to admit tonk into our language, we’d need to remove every language departure rule connecting sentence acceptance to experience. So the language would become one that we couldn’t use for prediction of experience at all. The resulting language would be useless, of course, but that’s not what’s at issue here. The point is just that it’s a possible language, which means that we still have no way of absolutely disallowing tonk and similar expressions. Dummett’s approach, in the end, is like all the others we’ve discussed: it doesn’t provide us with grounds to reject either The Link or Revised Tolerance.

Verdict

In this section we’ve been looking at the work of theorists sympathetic to inferentialism to see whether any of them can explain, in proof-theoretic terms, why some of our conventional rules of inference are justified and others aren’t (and so can offer a satisfactory story on which either Revised Tolerance or The Link is false). But despite our efforts, we’ve been unable to find a workable strategy here.
We’ve been given no reason, then, to think that inferentialists can provide proof-theoretic constraints on justification—indeed, the fact that inferentialists have been unable to provide a workable strategy is evidence that no such strategy is available at all. So, if no demonstration to the contrary is on offer, we should conclude that, from an inferentialist perspective, there can be no proof-theoretic grounds for abandoning the claim that our conventional rules of inference are self-justifying. This conclusion isn’t particularly surprising as long as we keep in mind that there’s a close relationship between the value of epistemic justification and the desirability of believing the truth. (Disagreement about the nature of justification is widespread, of course, but I take it that this much, at least, is relatively uncontroversial.) The reason we consider epistemic justification to be valuable at all is that we take there to be something good about having true beliefs and avoiding false ones. An adequate theory of epistemic justification, then, will have to respect this fact. And that suggests that the very idea of proof-theoretic constraints on the epistemic justification of meaning-conferring rules must be confused. After all, on our inferentialist approach, the rules by themselves, if they successfully confer meaning, just settle certain questions about truth in such a way that those rules are guaranteed always to lead us from truth to truth (either because truth is a metalinguistic notion that’s explicitly understood in terms of provability in the language or because truth is a language-internal notion whose rules can deliver (nondefective instances of) the T-schema)—given this, any story on which some sorts of meaning-conferring rules are justified on proof-theoretic grounds and others aren’t is bound to be completely arbitrary. And in that case there can be no principled proof-theoretic reason to abandon The Link.
What this suggests is that, if there are any proof-theoretic constraints on justification, it can only be because certain sets of rules fail to be genuinely meaning-conferring at all—i.e., because Revised Tolerance is false. But as we’ve seen, no genuine inferentialist-friendly motivation has ever been given for the claim that a set of rules might not give an expression a genuine place in the language despite giving it an inferential role. If all this is right, then inferentialists are indeed committed to both The Link and Revised Tolerance and so are committed to the thesis that tonk and similar words can legitimately be introduced into our language in such a way that the rules governing those words are justified. So I want to move on to a brief examination of the implications of that conclusion for the project of vindicating our apparently a priori beliefs.

5 Implications for our vindicatory project

As I suggested in §1, there’s good and bad news here. The good news is that Revised Tolerance and The Link are the two principles we need in order to complete our vindicatory project: taken together, they entail that, whatever conventional rules we settle on, inferences in accordance with those rules will be both reliable and justified. And the bad news is that this very implication threatens to reduce inferentialism to absurdity. As the case of tonk shows, some sets of rules are trivializing. And our two principles entail that speakers can, by adding a word governed by such a set of rules to their language, transform that language into one in which every sentence is true—and so can come to be epistemically justified in inferring any sentence of the new language from any other. And this result, at least initially, seems absurd; it would be easy to conclude here that the only viable option is to abandon inferentialism and so also to abandon altogether the conventionalist strategy for vindicating our apparently a priori beliefs.
But again, I don’t think that conclusion is correct. There is a way for inferentialism to remain viable even in the face of expressions like tonk, and we can start to see why that is by reflecting on what it would be to take seriously an approach to language based in Carnapian tolerance.

A Carnapian approach

Our Revised Tolerance, recall, is just a version of Carnap’s own principle of tolerance, according to which “we have in every respect complete liberty with regard to the forms of language”, so that “both the form of construction for sentences and the rules of transformation (the latter are usually designated as ‘postulates’ and ‘rules of inference’) may be chosen quite arbitrarily” (1934/1937: xv). And as for The Link, it seems clear that Carnap is also committed to (something like) this principle—one of the central motivations of his program, after all, is the promise of an empiricist-friendly account of our a priori knowledge of, e.g., logic and mathematics, and so he certainly needs some principle connecting good epistemic standing with inferences sanctioned by meaning-conferring rules. The inferentialist view we’re considering, then, is quite Carnapian in spirit. Now, Carnap’s approach is infamously permissive, and I suspect that, for many theorists, a view’s reliance on an approach of this sort will be taken as sufficient reason to reject that view outright. (To take just one example, the starting point of a recent paper of Schechter’s (in press) is that it’s intuitively obvious that some rules of inference, if accepted as basic, would sanction inferential steps “too large” to count as justified.) It’s worth emphasizing, though, that the permissiveness of this approach doesn’t prevent us from insisting that some languages are better than others.
Revised Tolerance and The Link tell us that tonk can genuinely be added to our language and that inferences sanctioned by the rules of this new tonk-language are epistemically justified, but it certainly doesn’t follow that this new language is worth adopting. Its rules, after all, require us to accept every sentence of the language, which means the language can’t allow us to draw any distinctions and so is (as I suggested above) completely useless for any practical purpose. So, given that languages more useful than the tonk-language are available, we have strong practical reason not to adopt it. It’s clear enough that practical considerations governing language choice are going to take on special importance in the context of a Carnapian approach: as we’ve seen, after all, other grounds for choosing among languages are absent on such an approach. Carnap’s own recognition of this fact is evident—he repeatedly suggests in the Logical Syntax (1934/1937) that whether to adopt a given rule is a choice to be made on the grounds of “expedience”. And he’s even more explicit later, in “Empiricism, Semantics, and Ontology”, where he explains that, when deciding whether to introduce into our language a new framework (i.e., a “system of new ways of speaking, subject to new rules”, that allows us to “speak…about a new kind of entities” (1950/1956: 206)),

we have to consider an important question; but it is a practical, not a theoretical question; it is the question of whether or not to accept the new linguistic forms. The acceptance…can only be judged as being more or less expedient, fruitful, conducive to the aim for which the language is intended. Judgments of this kind supply the motivation for the decision of accepting or rejecting the kind of entities. (1950/1956: 214)
On a Carnapian approach, then, while we can’t say that there’s no such language as the tonk-language or that inferences sanctioned by the tonk-language’s rules are unjustified, we shouldn’t be particularly worried—we can, after all, explain by appeal to practical considerations why adopting the tonk-language would be a terrible idea. This by itself, I recognize, isn’t particularly satisfying—we’ve already seen that there are in most circumstances strong practical reasons not to add tonk to our language. What’s more interesting, though, is that, if we direct our attention to certain facts about followability and language departure, we can motivate, again by appeal only to practical considerations, approximations of intuitively correct constraints on admissibility. That is, we can give a quite general explanation, even from an inferentialist perspective, of why we take sets of rules that don’t meet certain conditions to be obviously unacceptable. And to provide such an explanation, I take it, is to show that inferentialism isn’t reduced to absurdity by the tonk problem—i.e., to show that, even given Revised Tolerance and The Link, an inferentialist approach has the resources to provide satisfying answers to questions about the norms of language choice. So I want to close by giving a sketch of this explanation.

Practical considerations and language departure

Our starting point here is the fact that no language can be of any practical use unless it has some language departure rules. If there aren’t connections to the extralinguistic world—if the rules don’t sanction any moves from accepting particular sentences to doing particular things, such as performing particular actions or predicting experiences of particular sorts—then participating in the language game can be no more than a way to pass the time.21 So, insofar as we want to choose a language with a chance of being useful, we should eliminate from contention any language without departure rules.
Furthermore, for any language that does have rules directing thinkers to move from accepting sentences to doing something else, there’s a worry that thinkers may be directed to do incompatible things, in which case the rules will turn out to be unfollowable. (We saw above how this might work in the case of rules directing thinkers to take themselves to be having particular experiences, but the point is general. If, for instance, departure rules direct thinkers to perform particular actions, it might turn out that the rules of the language require a thinker in a particular circumstance both to wave her left hand and to keep perfectly still.) And from these two facts—that departure rules are pragmatically nonoptional and that they can be a source of followability conflicts—it follows that there are certain conditions such that any language with a chance of being useful is one whose rules fulfill those conditions. That is, though there aren’t constraints on admissibility for languages whose rules are epistemically justified, there do turn out to be constraints on admissibility for useful languages.

21. Considerations like these seem to be behind Horwich’s denial that states that play no role in motivating action can really count as belief states. The parallel is no accident: though I disagree for the reasons I gave in §3, I think there’s something clearly right about the spirit of Horwich’s view here.

In fact, we can be more specific: Any language useful for empirical prediction—i.e., any scientifically useful language—will include language departure rules sanctioning moves between accepting particular sentences and taking oneself to be having particular experiences. So, in order for the rules of a scientific language to be followable, those rules must respect the logic of sense experience, for reasons described in §4, in my discussion of Dummett’s approach.
That is, every scientifically useful language is one whose rules conservatively extend the logic of experience. My suggestion, then, is that this sort of consideration of practical reasons is the right way for inferentialists to ground constraints on admissibility. After all, this proposal, though it bears more than a passing resemblance to many of the other proposals we've discussed, manages to motivate admissibility constraints without running afoul of Revised Tolerance and The Link and so is compatible with inferentialism. There's surely a great deal more to be said here, but an account in this general vicinity, I take it, provides a promising way forward—as far as I can tell, the only promising way forward—for inferentialists faced with questions about the admissibility of words like tonk.

6 Concluding remarks

In this text we've made quite a bit of progress toward a vindication of our apparently a priori logical, mathematical, and Frege-analytic beliefs (i.e., a satisfying, naturalistically respectable account both of these beliefs' good epistemic standing and of their near-perfect reliability). In Chapter 1 we saw that a conventionalist approach to our vindicatory project is worth considering for the simple reason that no other approach to vindicating the beliefs in question is at all promising. So, in Chapter 2, we began to explore conventionalism's prospects, particularly in the face of the objection from worldly fact, which has almost universally been taken to be decisive. We saw there that this objection does show conventionalists to be committed to the claim that certain facts about the world (such as the fact that all vixens are foxes) obtain by convention, but we also saw that, contrary to popular opinion, this claim, though certainly unorthodox, isn't obviously untenable—it turns out not to have the absurd implications that are usually taken to be the basis for its dismissal.
In Chapter 3, then, we set about developing a respectable metasemantic theory on which this claim is indeed true, on which certain facts about the world do indeed obtain by convention; we saw there that a particular sort of inferentialist theory—i.e., a theory that combines a particular sort of inferential role semantics with a particular sort of deflationism about the theoretical role of truth and related notions—just is a theory on which the facts in question obtain by convention. And finally, we saw in the present chapter that an inferentialist theory of this sort can indeed do the vindicatory work we hoped conventionalism might do—and that this fact turns out to be compatible with the claim that there's something unacceptable about tonk and similar expressions.

So we've arrived at a naturalistically respectable conventionalist vindication of our apparently a priori beliefs that, for all we've said to this point, remains viable. But there is, of course, more work to be done—it certainly hasn't been conclusively demonstrated that the sort of approach developed here is entirely free of problems. Indeed, in developing the approach, I've relied without argument on a few assumptions that (to say the least) aren't entirely uncontroversial. These need to be justified if our vindicatory project is to be complete. Some of my future research, then, will be directed toward the following concerns.

First, I've assumed throughout the text here that, Quinean arguments notwithstanding, the notion of implicit convention can be made naturalistically respectable—that is, that we can find some principled way of drawing a distinction between those rules of inference we obey as a matter of convention and those we obey for other reasons, e.g., merely because we take it to be obvious that they're truth-preserving. Some explanation of how this distinction can be drawn will need to be given.
(As I explained in Chapter 2, this worry isn't what accounts for conventionalism's poor reputation—many opponents of conventionalism are themselves committed to a notion of implicit convention. Nevertheless, a full conventionalist theory will have to include an account of implicit convention.) And second, in making my case for the tenability of conventionalism, I've largely just assumed that the sort of deflationary approach to truth introduced in Chapter 3 is itself tenable. This will need to be shown. (There are lots of worries to be dealt with here—objections to deflationism are legion—but the most pressing, to my mind, is the one a version of which has sometimes been taken to be fatal for Carnap's own program: very roughly, that Gödel's incompleteness theorems demonstrate that logical truth outstrips proof and so can't fully be accounted for just by appeal to our rules of inference.) Only when these further tasks have been carried out will a conventionalist approach to vindicating our apparently a priori beliefs have truly been shown to be workable.

References

Audi, P. 2012. Grounding: Toward a theory of the in-virtue-of relation. Journal of Philosophy, 109, 685–711.
Ayer, A. J. 1936a. Language, truth and logic. Victor Gollancz Ltd.
Ayer, A. J. 1936b. Truth by convention. Analysis, 4, 17–22.
Ayer, A. J. 1946. Language, truth and logic (second ed.). Victor Gollancz Ltd.
Belnap, N. D. 1962. Tonk, plonk and plink. Analysis, 22, 130–134.
Benacerraf, P. 1973. Mathematical truth. Journal of Philosophy, 70, 661–679.
Ben-Menahem, Y. 2006. Conventionalism. Cambridge University Press.
Bennett, K. 2009. Composition, colocation, and metaontology. In D. Chalmers, D. Manley, and R. Wasserman (Eds.), Metametaphysics: New essays on the foundations of ontology (pp. 38–76). Oxford University Press.
Blackburn, S. 1986. Morals and modals. In G. Macdonald and C. Wright (Eds.), Fact, science and morality: Essays on A. J. Ayer's Language, truth and logic (pp. 119–141).
Basil Blackwell.
Block, N. 1998. Semantics, conceptual role. In E. Craig (Ed.), Routledge encyclopedia of philosophy (pp. 652–657). Routledge.
Boghossian, P. A. 1996. Analyticity reconsidered. Noûs, 30, 360–391.
Boghossian, P. A. 2003a. Blind reasoning. Proceedings of the Aristotelian Society, Supplementary Volume, 77, 225–248.
Boghossian, P. A. 2003b. Epistemic analyticity: A defense. Grazer Philosophische Studien, 66, 15–35.
Boghossian, P. A. 2014. Philosophy without intuitions? A reply to Cappelen. Analytic Philosophy, 5, 368–381.
Boghossian, P. A. 2016. Intuitions and the understanding. In M. A. Fernández Vargas (Ed.), Performance epistemology: Foundations and applications (pp. 137–151). Oxford University Press.
Boghossian, P. A. 2017. Further thoughts about analyticity: 20 years later. In B. Hale, C. Wright, and A. Miller (Eds.), A companion to the philosophy of language (second ed., pp. 611–618). Wiley Blackwell.
Bolzano, B. 1972. Theory of science: Attempt at a detailed and in the main novel exposition of logic with constant attention to earlier authors (R. George, Trans.). University of California Press. (Original work published 1837)
BonJour, L. 1998. In defense of pure reason: A rationalist account of a priori justification. Cambridge University Press.
Boolos, G. 1997. Is Hume's Principle analytic? In R. G. Heck Jr. (Ed.), Language, thought, and logic: Essays in honour of Michael Dummett (pp. 245–262). Oxford University Press.
Brandom, R. B. 1994. Making it explicit: Reasoning, representing, and discursive commitment. Harvard University Press.
Brandom, R. B. 2000. Articulating reasons: An introduction to inferentialism. Harvard University Press.
Brandom, R. B. 2002. Explanatory vs. expressive deflationism about truth. In R. Schantz (Ed.), What is truth? (pp. 103–119). Walter de Gruyter.
Burgess, J. P. 2008. Dummett's case for intuitionism. In Mathematics, models, and modality (pp. 256–276). Cambridge University Press.
(Reprinted from History and Philosophy of Logic, 1984, 5, 177–194)
Carnap, R. 1937. Logical syntax of language (A. Smeaton, Trans.). Kegan Paul, Trench, Trubner & Co. (Original work published 1934)
Carnap, R. 1943. Formalization of logic. Harvard University Press.
Carnap, R. 1956. Empiricism, semantics, and ontology. In Meaning and necessity: A study in semantics and modal logic (2nd ed., pp. 205–221). University of Chicago Press. (Reprinted from Revue Internationale de Philosophie, 1950, 4, 20–40)
Carnap, R. 1963. Intellectual autobiography. In P. A. Schilpp (Ed.), The philosophy of Rudolf Carnap (pp. 3–84). Open Court.
Carnap, R. 1975. Observation language and theoretical language (H. G. Bohnert, Trans.). In J. Hintikka (Ed.), Rudolf Carnap, logical empiricist: Materials and perspectives (pp. 75–85). Springer. (Original work published 1958)
Carroll, L. 1895. What the tortoise said to Achilles. Mind, 4, 278–280.
Chalmers, D. J. 2009. Ontological anti-realism. In D. Chalmers, D. Manley, and R. Wasserman (Eds.), Metametaphysics: New essays on the foundations of ontology (pp. 77–129). Oxford University Press.
Chalmers, D. J. 2011. Revisability and conceptual change in "Two dogmas of empiricism". Journal of Philosophy, 108, 387–415.
Chalmers, D. J. 2012. Constructing the world. Oxford University Press.
Clarke-Doane, J. 2016. What is the Benacerraf Problem? In F. Pataut (Ed.), Truth, objects, infinity: New perspectives on the philosophy of Paul Benacerraf (pp. 17–43). Springer.
Correia, F., and Schnieder, B. 2012. Grounding: An opinionated introduction. In F. Correia and B. Schnieder (Eds.), Metaphysical grounding: Understanding the structure of reality (pp. 1–36). Cambridge University Press.
Cozzo, C. 1994. Meaning and argument: A theory of meaning centred on immediate argumental role. Almqvist & Wiksell International.
Dummett, M. 1973. Frege: Philosophy of language. Harper & Row.
Dummett, M. 1978a. The justification of deduction. In Truth and other enigmas (pp.
290–318). Harvard University Press. (Reprinted from Proceedings of the British Academy, 1973, 59, 201–231)
Dummett, M. 1978b. The philosophical basis of intuitionistic logic. In Truth and other enigmas (pp. 215–247). Harvard University Press. (Reprinted from Logic colloquium '73, pp. 5–40, by H. E. Rose and J. C. Shepherdson, Eds., 1975, North-Holland)
Dummett, M. 1991. The logical basis of metaphysics. Harvard University Press.
Dummett, M. 1993. Language and truth. In The seas of language (pp. 117–146). Oxford University Press. (Reprinted from Approaches to language, pp. 95–125, by R. Harris, Ed., 1983, Pergamon)
Dummett, M. 2000. Elements of intuitionism (2nd ed.). Oxford University Press.
Dummett, M. 2002. The two faces of the concept of truth. In R. Schantz (Ed.), What is truth? (pp. 249–262). Walter de Gruyter.
Dummett, M. 2004. Truth and the past. Columbia University Press.
Einheuser, I. 2006. Counterconventional conditionals. Philosophical Studies, 127, 459–482.
Eklund, M. 2010. Rejectionism about truth. In C. D. Wright and N. J. L. L. Pedersen (Eds.), New waves in truth (pp. 30–44). Palgrave Macmillan.
Elder, C. L. 2006. Conventionalism and realism-imitating counterfactuals. Philosophical Quarterly, 56, 1–15.
Enoch, D., and Schechter, J. 2008. How are basic belief-forming methods justified? Philosophy and Phenomenological Research, 76, 547–579.
Ewing, A. C. 1940. The linguistic theory of a priori propositions. Proceedings of the Aristotelian Society, 40, 207–244.
Field, H. H. 1980. Science without numbers: A defence of nominalism. Princeton University Press.
Field, H. H. 1984. Review of Frege's conception of numbers as objects. Canadian Journal of Philosophy, 14, 637–662.
Field, H. H. 1989a. Introduction: Fictionalism, epistemology and modality. In Realism, mathematics and modality (pp. 1–52). Basil Blackwell.
Field, H. H. 1989b. Realism, mathematics and modality. In Realism, mathematics and modality (pp. 227–281). Basil Blackwell.
Field, H. H. 1994.
Deflationist views of meaning and content. Mind, 103, 249–285.
Field, H. H. 2000. Apriority as an evaluative notion. In P. A. Boghossian and C. Peacocke (Eds.), New essays on the a priori (pp. 117–149). Oxford University Press.
Fine, K. 2001. The question of realism. Philosophers' Imprint, 1, 1–30.
Fine, K. 2012a. Guide to ground. In F. Correia and B. Schnieder (Eds.), Metaphysical grounding: Understanding the structure of reality (pp. 37–80). Cambridge University Press.
Fine, K. 2012b. The pure logic of ground. Review of Symbolic Logic, 5, 1–25.
Fodor, J. 2004. Having concepts: A brief refutation of the twentieth century. Mind & Language, 19, 29–47.
García-Carpintero, M., and Pérez Otero, M. 2009. The conventional and the analytic. Philosophy and Phenomenological Research, 78, 239–274.
Gentzen, G. 1964. Investigations into logical deduction (M. E. Szabo, Trans.). American Philosophical Quarterly, 1, 288–306. (Original work published 1934)
Glock, H. 2003. The linguistic doctrine revisited. Grazer Philosophische Studien, 66, 143–170.
Glock, H. 2008. Necessity and language: In defence of conventionalism. Philosophical Investigations, 31, 24–47.
Glüer, K. 2003. Analyticity and implicit definition. Grazer Philosophische Studien, 66, 37–60.
Gödel, K. 1983. What is Cantor's continuum problem? In P. Benacerraf and H. Putnam (Eds.), Philosophy of mathematics: Selected readings (second ed., pp. 470–485). Cambridge University Press. (Reprinted from Philosophy of mathematics: Selected readings, pp. 258–273, by P. Benacerraf and H. Putnam, Eds., 1964, Prentice-Hall)
Goldfarb, W. 1996. The philosophy of mathematics in early positivism. In R. N. Giere and A. W. Richardson (Eds.), Origins of logical empiricism (pp. 213–230). University of Minnesota Press.
Goldman, A. I. 1979. What is justified belief? In G. S. Pappas (Ed.), Justification and knowledge: New studies in epistemology (pp. 1–23). D. Reidel.
Grover, D. L., Camp, J. L., Jr., and Belnap, N. D., Jr. 1975.
A prosentential theory of truth. Philosophical Studies, 27, 73–125.
Haack, S. 1982. Dummett's justification of deduction. Mind, 91, 216–239.
Hale, B. 2002. The source of necessity. Philosophical Perspectives, 16, 299–319.
Hale, B., and Wright, C. 2000. Implicit definition and the a priori. In P. Boghossian and C. Peacocke (Eds.), New essays on the a priori (pp. 192–205). Oxford University Press.
Harman, G. 1986. Change in view: Principles of reasoning. MIT Press.
Harman, G. 1996. Analyticity regained? Noûs, 30, 392–400.
Harman, G. 1999. (Nonsolipsistic) conceptual role semantics. In Reasoning, meaning, and mind (pp. 206–231). Oxford University Press. (Reprinted from New directions in semantics, pp. 55–81, by E. Lepore, Ed., 1987, Academic Press)
Hill, C. S. 2002. Thought and world: An austere portrayal of truth, reference, and semantic correspondence. Cambridge University Press.
Hill, C. S. 2013. Concepts, teleology, and rational revision. In A. Casullo and J. C. Thurow (Eds.), The a priori in philosophy (pp. 134–157). Oxford University Press.
Hill, C. S. 2014. A substitutional theory of truth, reference, and semantic correspondence. In Meaning, mind, and knowledge (pp. 51–65). Oxford University Press.
Horwich, P. 1997. Implicit definition, analytic truth, and apriori knowledge. Noûs, 31, 423–440.
Horwich, P. 1998a. Meaning. Oxford University Press.
Horwich, P. 1998b. Truth (second ed.). Oxford University Press.
Horwich, P. 2005. Reflections on meaning. Oxford University Press.
Horwich, P. 2008. Ungrounded reason. Journal of Philosophy, 105, 453–471.
Horwich, P. 2013. Naturalism, deflationism and the relative priority of language and metaphysics. In Expressivism, pragmatism and representationalism (pp. 112–127). Cambridge University Press.
Jenkins, C. S. 2008. Boghossian and epistemic analyticity. Croatian Journal of Philosophy, 8, 113–127.
Jenkins, C. S., and Nolan, D. 2012. Disposition impossible. Noûs, 46, 732–753.
Kant, I. 2001.
Prolegomena to any future metaphysics that will be able to come forward as science (second ed.; J. W. Ellington, Trans.). Hackett Publishing. (Original work published 1783)
Kneale, W. C. 1947. Are necessary truths true by convention? Proceedings of the Aristotelian Society, Supplementary Volume, 21, 118–133.
Kukla, R., and Winsberg, E. 2015. Deflationism, pragmatism, and metaphysics. In S. Gross, N. Tebben, and M. Williams (Eds.), Meaning without representation: Essays on truth, expression, normativity, and naturalism (pp. 25–46). Oxford University Press.
Lewis, C. I. 1946. An analysis of knowledge and valuation. Open Court.
Lewis, D. 1969. Convention: A philosophical study. Harvard University Press.
Lewis, D. 1986. On the plurality of worlds. Basil Blackwell.
Lewy, C. 1976. Meaning and modality. Cambridge University Press.
Livingstone-Banks, J. 2016. The contingency problem for neo-conventionalism. Erkenntnis, 82, 653–671.
Locke, J. 1690. An essay concerning humane understanding. Thomas Basset.
Lynch, M. P. 2001. A functionalist theory of truth. In M. P. Lynch (Ed.), The nature of truth: Classic and contemporary perspectives (pp. 723–749). MIT Press.
MacFarlane, J. 2010. Pragmatism and inferentialism. In B. Weiss and J. Wanderer (Eds.), Reading Brandom: On Making it explicit (pp. 81–95). Routledge.
Makinson, D. C. 1965. The paradox of the preface. Analysis, 25, 205–207.
Malcolm, N. 1940. Are necessary propositions really verbal? Mind, 49, 189–203.
Milne, P. 1994. Classical harmony: Rules of inference and the meaning of the logical constants. Synthese, 100, 49–94.
Misak, C. 2007. Pragmatism and deflationism. In C. Misak (Ed.), New pragmatists (pp. 68–90). Oxford University Press.
Murzi, J., and Steinberger, F. 2017. Inferentialism. In B. Hale, C. Wright, and A. Miller (Eds.), A companion to the philosophy of language (second ed., pp. 197–224). Wiley Blackwell.
Nolan, D. 1997. Impossible worlds: A modest approach.
Notre Dame Journal of Formal Logic, 38, 535–572.
Nozick, R. 1981. Philosophical explanations. Harvard University Press.
Pap, A. 1958. Semantics and necessary truth: An inquiry into the foundations of analytic philosophy. Yale University Press.
Peacocke, C. 1987. Understanding logical constants: A realist's account. Proceedings of the British Academy, 73, 153–200.
Peacocke, C. 1992. A study of concepts. MIT Press.
Peregrin, J. 2014. Inferentialism: Why rules matter. Palgrave Macmillan.
Poincaré, H. 1905. Science and hypothesis (W. J. Greenstreet, Trans.). Walter Scott. (Original work published 1902)
Prawitz, D. 1974. On the idea of a general proof theory. Synthese, 27, 63–77.
Prawitz, D. 2006. Meaning approached via proofs. Synthese, 148, 507–524.
Price, H. 2003. Truth as convenient friction. Journal of Philosophy, 100, 167–190.
Price, H. 2013. Prospects for global expressivism. In Expressivism, pragmatism and representationalism (pp. 147–194). Cambridge University Press.
Prior, A. N. 1960. The runabout inference-ticket. Analysis, 21, 38–39.
Prior, A. N. 1964. Conjunction and contonktion revisited. Analysis, 24, 191–195.
Putnam, H. 1978. There is at least one a priori truth. Erkenntnis, 13, 153–170.
Quine, W. V. O. 1936. Truth by convention. In O. H. Lee (Ed.), Philosophical essays for Alfred North Whitehead (pp. 90–124). Longmans, Green and Co.
Quine, W. V. O. 1951. Two dogmas of empiricism. Philosophical Review, 60, 20–43.
Quine, W. V. O. 1960a. Carnap and logical truth. Synthese, 12, 350–374.
Quine, W. V. O. 1960b. Word and object. MIT Press.
Quine, W. V. O. 1968. Ontological relativity. Journal of Philosophy, 65, 185–212.
Quine, W. V. O. 1970. Philosophy of logic. Prentice-Hall.
Ramsey, F. P. 1927. Facts and propositions. Proceedings of the Aristotelian Society, Supplementary Volume, 7, 153–170.
Ramsey, F. P. 1931. Theories. In R. B. Braithwaite (Ed.), The foundations of mathematics and other logical essays (pp. 212–236). Routledge & Kegan Paul Ltd.
Restall, G. 2005. Multiple conclusions. In P. Hájek, L. Valdés-Villanueva, and D. Westerståhl (Eds.), Logic, methodology and philosophy of science: Proceedings of the twelfth international conference (pp. 189–205). College Publications.
Rosen, G. 2010. Metaphysical dependence: Grounding and reduction. In B. Hale and A. Hoffmann (Eds.), Modality: Metaphysics, logic, and epistemology (pp. 109–135). Oxford University Press.
Rumfitt, I. 2000. "Yes" and "No". Mind, 109, 781–823.
Russell, G. 2008. Truth in virtue of meaning. Oxford University Press.
Schaffer, J. 2009. On what grounds what. In D. Chalmers, D. Manley, and R. Wasserman (Eds.), Metametaphysics: New essays on the foundations of ontology (pp. 347–383). Oxford University Press.
Schaffer, J. 2012. Grounding, transitivity, and contrastivity. In F. Correia and B. Schnieder (Eds.), Metaphysical grounding: Understanding the structure of reality (pp. 122–138). Cambridge University Press.
Schechter, J. 2010. The reliability challenge and the epistemology of logic. Philosophical Perspectives, 24, 437–464.
Schechter, J. 2013a. Could evolution explain our reliability about logic? In T. S. Gendler and J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 4, pp. 214–239). Oxford University Press.
Schechter, J. 2013b. Rational self-doubt and the failure of closure. Philosophical Studies, 163, 429–452.
Schechter, J. in press. Small steps and great leaps in thought: The epistemology of basic deductive rules. In M. Balcerak Jackson and B. Balcerak Jackson (Eds.), Reasoning: Essays on theoretical and practical thinking. Oxford University Press.
Schechter, J., and Enoch, D. 2006. Meaning and justification: The case of Modus Ponens. Noûs, 40, 687–715.
Schiffer, S. 2003. The things we mean. Oxford University Press.
Sellars, W. 1954. Some reflections on language games. Philosophy of Science, 21, 204–288.
Sidelle, A. 1989. Necessity, essence, and individuation: A defense of conventionalism. Cornell University Press.
Sidelle, A. 2009. Conventionalism and the contingency of conventions. Noûs, 43, 224–241.
Sidelle, A. 2010. Modality and objects. Philosophical Quarterly, 60, 109–125.
Sider, T. 2011. Writing the book of the world. Oxford University Press.
Smiley, T. 1996. Rejection. Analysis, 56, 1–9.
Sober, E. 2000. Quine's two dogmas. Proceedings of the Aristotelian Society, Supplementary Volume, 74, 237–280.
Steinberger, F. 2011. What harmony could and could not be. Australasian Journal of Philosophy, 89, 617–639.
Stevenson, J. T. 1961. Roundabout the runabout inference-ticket. Analysis, 21, 124–128.
Stroud, B. 1984. The significance of philosophical scepticism. Oxford University Press.
Tarski, A. 1944. The semantic conception of truth and the foundations of semantics. Philosophy and Phenomenological Research, 23, 155–196.
Tennant, N. 2005. Rule-circularity and the justification of deduction. Philosophical Quarterly, 55, 625–648.
Thomasson, A. L. 2015. Ontology made easy. Oxford University Press.
Topey, B. 2017. Quinean holism, analyticity, and diachronic rational norms. Synthese. Advance online publication. doi: 10.1007/s11229-017-1366-3
van Inwagen, P. 2016. The neo-Carnapians. Synthese. Advance online publication. doi: 10.1007/s11229-016-1110-4
Wagner, S. 1981. Tonk. Notre Dame Journal of Formal Logic, 22, 289–300.
Wansing, H. 2015. Prawitz, proofs, and meaning. In H. Wansing (Ed.), Dag Prawitz on proofs and meaning (pp. 1–32). Springer.
Warren, J. 2015a. Conventionalism, consistency, and consistency sentences. Synthese, 192, 1351–1371.
Warren, J. 2015b. The possibility of truth by convention. Philosophical Quarterly, 65, 84–93.
Warren, J. 2015c. Talking with Tonkers. Philosophers' Imprint, 15, 1–24.
Warren, J. 2017a. Epistemology versus non-causal realism. Synthese, 194, 1643–1662.
Warren, J. 2017b. Revisiting Quine on truth by convention. Journal of Philosophical Logic, 46, 119–139.
Williams, M. 2002. On some critics of deflationism.
In R. Schantz (Ed.), What is truth? (pp. 146–158). Walter de Gruyter.
Williams, M. 2006. Realism: What's left? In P. Greenough and M. P. Lynch (Eds.), Truth and realism (pp. 77–99). Oxford University Press.
Williamson, T. 2003. Understanding and inference. Proceedings of the Aristotelian Society, Supplementary Volume, 77, 249–293.
Williamson, T. 2007. The philosophy of philosophy. Blackwell.
Wisdom, J. 1938. Metaphysics and verification (I.). Mind, 47, 452–498.
Wittgenstein, L. 1922. Tractatus logico-philosophicus. Kegan Paul, Trench, Trubner & Co.
Wittgenstein, L. 1958. Philosophical investigations (2nd ed.; G. E. M. Anscombe, Trans.). Blackwell.
Wright, C. 1983. Frege's conception of numbers as objects. Aberdeen University Press.
Wright, C. 1985. In defence of the Conventional Wisdom. In I. Hacking (Ed.), Exercises in analysis: Essays by students of Casimir Lewy (pp. 171–197). Cambridge University Press.
Wright, C. 1992. Truth and objectivity. Harvard University Press.
Wright, C. 1999. Is Hume's Principle analytic? Notre Dame Journal of Formal Logic, 40, 6–30.
Wright, C. 2001. Minimalism, deflationism, pragmatism, pluralism. In M. P. Lynch (Ed.), The nature of truth: Classic and contemporary perspectives (pp. 751–787). MIT Press.
Wright, C. 2004. Intuition, entitlement and the epistemology of logical laws. Dialectica, 58, 155–175.
Wyatt, J. 2016. The many (yet few) faces of deflationism. Philosophical Quarterly, 66, 362–382.
Yablo, S. 1992. Review of Necessity, essence, and individuation: A defense of conventionalism. Philosophical Review, 101, 878–881.