In a range of contexts, one comes across processes resembling inference, but where input propositions are not in general included among outputs, and the operation is not in any way reversible. Examples arise in contexts of conditional obligations, goals, ideals, preferences, actions, and beliefs. Our purpose is to develop a theory of such input/output operations. Four are singled out: simple-minded, basic (making intelligent use of disjunctive inputs), simple-minded reusable (in which outputs may be recycled as inputs), and basic reusable. They are defined semantically and characterised by derivation rules, as well as in terms of relabeling procedures and modal operators. Their behaviour is studied on both semantic and syntactic levels.
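The simplest of the four operations can be illustrated concretely. Below is a minimal, purely illustrative sketch (in Python, over a toy four-atom language with hypothetical generators; the names and setup are assumptions, not taken from the paper) of simple-minded output, out1(G, A) = Cn(G(Cn(A))), computed semantically by representing each formula as its set of worlds:

```python
from itertools import product

# Toy propositional language over four atoms (an assumed example): a
# formula is represented by its extension, i.e., the set of world
# indices at which it is true.
atoms = ["a", "b", "x", "y"]
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]

def ext(f):
    return frozenset(i for i, w in enumerate(worlds) if f(w))

# Hypothetical generators G: "if a then x" and "if b then y",
# each a (body, head) pair of extensions.
G = [(ext(lambda w: w["a"]), ext(lambda w: w["x"])),
     (ext(lambda w: w["b"]), ext(lambda w: w["y"]))]

def content(exts):
    # Semantic stand-in for Cn: the worlds verifying every formula given.
    m = frozenset(range(len(worlds)))
    for e in exts:
        m &= e
    return m

def out1(G, inputs):
    # Simple-minded output: out1(G, A) = Cn(G(Cn(A))).
    cnA = content(inputs)                          # content of Cn(A)
    heads = [h for (body, h) in G if cnA <= body]  # G(Cn(A)): triggered heads
    return content(heads)                          # content of the output set

A = [ext(lambda w: w["a"])]  # input: the single formula a
out = out1(G, A)
```

Here the output is exactly the content of x, and in particular does not contain the input a, illustrating the sense in which inputs are not in general included among outputs. Basic and reusable variants require more than this one-pass computation.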
Belief merging aims at combining several pieces of information coming from different sources. In this paper we review the work on the merging of propositional belief bases. We discuss the relationships between merging, revision, update and confluence, as well as some links between belief merging and social choice theory. Finally we mention the main generalizations of these works to other logical frameworks.
In this paper we investigate a semantics for first-order logic originally proposed by R. van Rooij to account for the idea that vague predicates are tolerant, that is, for the principle that if x is P, then y should be P whenever y is similar enough to x. The semantics, which makes use of indifference relations to model similarity, rests on the interaction of three notions of truth: the classical notion, and two dual notions simultaneously defined in terms of it, which we call tolerant truth and strict truth. We characterize the space of consequence relations definable in terms of those and discuss the kind of solution this gives to the sorites paradox. We discuss some applications of the framework to the pragmatics and psycholinguistics of vague predicates, in particular regarding judgments about borderline cases.
The model of self-referential truth presented in this paper, named Revision-theoretic supervaluation, aims to incorporate the philosophical insights of Gupta and Belnap's Revision Theory of Truth into the formal framework of Kripkean fixed-point semantics. In Kripke-style theories the final set of grounded true sentences can be reached from below along a strictly increasing sequence of sets of grounded true sentences: in this sense, each stage of the construction can be viewed as an improvement on the previous ones. I want to do something similar, replacing the Kripkean sets of grounded true sentences with revision-theoretic sets of stably true sentences. This can be done by defining a monotone operator through a variant of van Fraassen's supervaluation scheme which is based on ω-length iterations of the Tarskian operator. Clearly, all virtues of Kripke-style theories are preserved, and we can also prove that the resulting set of grounded true sentences shares some nice features with the sets of stably true sentences provided by the usual ways of formalising revision. The hope is that a clearer philosophical content can be associated with this way of doing revision; ideally, a content directly linked with the insights underlying finite revision processes.
Logical geometry systematically studies Aristotelian diagrams, such as the classical square of oppositions and its extensions. These investigations rely heavily on the use of bitstrings, which are compact combinatorial representations of formulas that allow us to quickly determine their Aristotelian relations. However, because of their general nature, bitstrings can be applied to a wide variety of topics in philosophical logic beyond those of logical geometry. Hence, the main aim of this paper is to present a systematic technique for assigning bitstrings to arbitrary finite fragments of formulas in arbitrary logical systems, and to study the logical and combinatorial properties of this technique. It is based on the partition of logical space that is induced by a given fragment, and sheds new light on a number of interesting issues, such as the logic-dependence of the Aristotelian relations and the subtle interplay between the Aristotelian and Boolean structure of logical fragments. Finally, the bitstring technique also allows us to systematically analyze fragments from contemporary logical systems, such as public announcement logic, which could not be done before.
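The core of the technique described here can be sketched in a few lines. The following is an illustrative toy example (the fragment {p, ¬p, p ∧ q} and all names are my own assumptions, not from the paper): the fragment induces a partition of logical space by grouping worlds that satisfy exactly the same formulas of the fragment, and each formula's bitstring then records which cells of that partition it covers.

```python
from itertools import product

# Toy fragment of propositional formulas over atoms p, q, represented
# extensionally as sets of valuations (an assumed example).
atoms = ["p", "q"]
worlds = list(product([False, True], repeat=len(atoms)))

def extension(f):
    return frozenset(w for w in worlds if f(dict(zip(atoms, w))))

fragment = {
    "p":     extension(lambda v: v["p"]),
    "~p":    extension(lambda v: not v["p"]),
    "p & q": extension(lambda v: v["p"] and v["q"]),
}

# Partition induced by the fragment: two worlds share a cell iff they
# satisfy exactly the same formulas of the fragment.
def cell_signature(w):
    return tuple(w in e for e in fragment.values())

cells_by_sig = {}
for w in worlds:
    cells_by_sig.setdefault(cell_signature(w), set()).add(w)
cells = [frozenset(c) for c in cells_by_sig.values()]

# Bitstring of a formula: one bit per cell, 1 iff the cell lies wholly
# inside the formula's extension (cells never straddle a fragment formula).
def bitstring(e):
    return "".join("1" if c <= e else "0" for c in cells)

bits = {name: bitstring(e) for name, e in fragment.items()}
```

The Aristotelian relations can then be read off the bitstrings; for instance, the contradictory pair p and ~p receive bitwise-complementary bitstrings.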
Logicians have often suggested that the use of Euler-type diagrams has influenced the idea of the quantification of the predicate. This is mainly due to the fact that Euler-type diagrams display more information than is required in traditional syllogistics. The paper supports this argument and extends it by a further step: Euler-type diagrams not only illustrate the quantification of the predicate, but also solve problems of traditional proof theory which prevented an overall quantification of the predicate. Thus, Euler-type diagrams can be called the natural basis of syllogistic reasoning and can even go beyond it. In the paper, these arguments are presented in connection with the book Nucleus Logicae Weisaniae by Johann Christian Lange from 1712.
The paper introduces the formula structure of poly-sequents, allowing the expression of poly-positions: positions with any number of stances, of which bilateralism and trilateralism are special cases. The paper also puts forward the view that s-coherence (strong coherence) of such poly-positions can be defined inferentially, without appealing to their validity under interpretations of the object language.
This article introduces, studies, and applies a new system of logic which is called ‘HYPE’. In HYPE, formulas are evaluated at states that may exhibit truth value gaps (partiality) and truth value gluts (overdeterminedness). Simple and natural semantic rules for negation and the conditional operator are formulated based on an incompatibility relation and a partial fusion operation on states. The semantics is worked out in formal and philosophical detail, and a sound and complete axiomatization is provided both for the propositional and the predicate logic of the system. The propositional logic of HYPE is shown to contain first-degree entailment, to have the Finite Model Property, to be decidable, to have the Disjunction Property, and to extend intuitionistic propositional logic conservatively when intuitionistic negation is defined appropriately by HYPE’s logical connectives. Furthermore, HYPE’s first-order logic is a conservative extension of intuitionistic logic with the Constant Domain Axiom, when intuitionistic negation is again defined appropriately. The system allows for simple model constructions and intuitive Euler-Venn-like diagrams, and its logical structure matches structures well-known from ordinary mathematics, such as from optimization theory, combinatorics, and graph theory. HYPE may also be used as a general logical framework in which different systems of logic can be studied, compared, and combined. In particular, HYPE is found to relate in interesting ways to classical logic and various systems of relevance and paraconsistent logic, many-valued logic, and truthmaker semantics. On the philosophical side, if used as a logic for theories of type-free truth, HYPE is shown to address semantic paradoxes such as the Liar Paradox by extending non-classical fixed-point interpretations of truth by a conditional as well-behaved as that of intuitionistic logic. Finally, HYPE may be used as a background system for modal operators that create hyperintensional contexts, though the details of this application need to be left to follow-up work.
In their recent article “A Hierarchy of Classical and Paraconsistent Logics”, Eduardo Barrio, Federico Pailos and Damien Szmuc (BPS hereafter) present novel and striking results about meta-inferential validity in various three-valued logics. In the process, they have thrown open the door to a hitherto unrecognized domain of non-classical logics with surprising intrinsic properties, as well as subtle and interesting relations to various familiar logics, including classical logic. One such result is that, for each natural number n, there is a logic which agrees with classical logic on tautologies, inferences, meta-inferences, meta-meta-inferences, and so on up to n-meta-inferences, but disagrees with classical logic on (n+1)-meta-inferences. They suggest that this shows that classical logic can only be characterized by defining its valid inferences at all orders. In this article, I invoke some simple symmetric generalizations of BPS’s results to show that the problem is worse than they suggest, since in fact there are logics that agree with classical logic on inferential validity at all orders but still intuitively differ from it. I then discuss the relevance of these results for truth theory and the classification problem.
Free Choice is the principle that possibly p or q implies and is implied by possibly p and possibly q. A variety of recent attempts to validate Free Choice rely on a nonclassical semantics for disjunction, where the meaning of p or q is not a set of possible worlds. This paper begins with a battery of impossibility results, showing that some kind of nonclassical semantics for disjunction is required in order to validate Free Choice. The paper then provides a positive account of Free Choice, by identifying a family of dynamic semantics for disjunction that can validate the inference. On all such theories, the meaning of p or q has two parts. First, p or q requires that our information is consistent with each of p and q. Second, p or q narrows down our information by eliminating some worlds. It turns out that this second component of or is well behaved: there is a strongest such meaning that p or q can express, consistent with validating Free Choice. The strongest such meaning is the classical one, on which p or q eliminates any world where both p and q are false. In this way, the classical meaning of disjunction turns out to be intimately related to the validity of Free Choice.
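The two-part meaning just described can be sketched concretely. The following is my own illustrative toy model, not the paper's (the paper identifies a family of dynamic semantics, of which this is only one crude instance, and all names here are assumptions): an update with p or q first tests that the context is consistent with each disjunct, and then eliminates worlds, in the strongest admissible version exactly the worlds where both disjuncts are false.

```python
# Contexts are lists of worlds; a world here is a pair giving the truth
# values of p and q. Setup and names are illustrative assumptions.
def possible(context, prop):
    return any(prop(w) for w in context)

def update_or(context, p, q):
    # Test component: the context must be consistent with each disjunct;
    # otherwise the update is undefined (signalled here by None).
    if not (possible(context, p) and possible(context, q)):
        return None
    # Eliminative component, in its strongest admissible (classical)
    # form: drop the worlds where both disjuncts are false.
    return [w for w in context if p(w) or q(w)]

worlds = [(a, b) for a in (False, True) for b in (False, True)]
p = lambda w: w[0]
q = lambda w: w[1]
context = update_or(worlds, p, q)
```

After a successful update both p and q remain live possibilities, which is how the test component validates Free Choice, while the eliminative component coincides with the classical meaning of disjunction.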
In this paper, we develop a formalisation of the main ideas of Van de Poel's work on responsibility. Using the basic concepts through which the meanings of responsibility are defined, we construct a logic in which one can express sentences like “individual i is accountable for φ”, “individual i is blameworthy for φ” and “individual i has the obligation to see to it that φ”. This formalisation clarifies the definitions of responsibility given by Van de Poel and highlights their differences and similarities. It also helps to assess the consistency of the account of responsibility, not only by showing that the definitions are not inconsistent, but also by providing a formal demonstration of the relations between three main meanings of responsibility (accountability, blameworthiness, and obligation). The formal account can be used to derive new properties of the concepts. With the help of the formalisation, we detect occurrences of the problem of many hands (PMH) by defining a logical framework for reasoning about collective and individual responsibility. This logic extends Coalition Epistemic Dynamic Logic (CEDL) by adding notions of group knowledge, agent ability, and knowing-how to its semantics, and generalises the definitions of individual responsibility to groups of agents.
Fitelson and McCarthy (2014) have proposed an accuracy measure for confidence orders which favors probability measures and Dempster-Shafer belief functions as accounts of degrees of belief and excludes ranking functions. Their accuracy measure only penalizes mistakes in confidence comparisons. We propose an alternative accuracy measure that also rewards correct confidence comparisons. Thus we conform to both of William James’ maxims: “Believe truth! Shun error!” We combine the two maxims, penalties and rewards, into one criterion that we call prioritized accuracy optimization (PAO). That is, PAO punishes wrong comparisons (preferring the false to the true) and rewards right comparisons (preferring the true to the false), and it requires prioritizing being right and avoiding being wrong in a specific way. Thus PAO is both a scoring rule and a decision rule. It turns out that precisely those confidence orders representable by two-sided ranking functions satisfy PAO. The point is not to argue that PAO is the better accuracy goal. The point is only that ranking theory can also be supported by accuracy considerations. Thus, those considerations cannot by themselves decide between rational formats for degrees of belief, but are part and parcel of an overall normative assessment of those formats.
Conditional logics have traditionally been intended to formalize various intuitively correct modes of reasoning involving (counterfactual) conditional expressions in natural language. Although conditional logics have by now been thoroughly studied in a classical context, they have yet to be systematically examined in an intuitionistic context, despite compelling philosophical and technical reasons to do so. This paper addresses this gap by thoroughly examining the basic intuitionistic conditional logic ICK, the intuitionistic counterpart of Chellas' important classical system CK. I give ICK both worlds semantics and algebraic semantics, and prove that these are equivalent. I give a Gödel-type embedding of ICK into CK (augmented with an S4 box connective) and a Glivenko-type embedding of CK into ICK. I axiomatize ICK and prove soundness, completeness, and decidability results. Finally, I discuss extending ICK.
We identify a pervasive contrast between implicit and explicit stances in logical analysis and system design. Implicit systems change received meanings of logical constants and sometimes also the notion of consequence, while explicit systems conservatively extend classical systems with new vocabulary. We illustrate the contrast for intuitionistic and epistemic logic, then take it further to information dynamics, default reasoning, and other areas, to show its wide scope. This gives a working understanding of the contrast, though we stop short of a formal definition, and acknowledge limitations and borderline cases. Throughout we show how awareness of the two stances suggests new logical systems and new issues about translations between implicit and explicit systems, linking up with foundational concerns about identity of logical systems. But we also show how a practical facility with these complementary working styles has philosophical consequences, as it throws doubt on strong philosophical claims made by just taking one design stance and ignoring alternative ones. We will illustrate the latter benefit for the case of logical pluralism and hyper-intensional semantics.
Girard introduced phase semantics as a complete set-theoretic semantics of linear logic, and Okada modified phase-semantic completeness proofs to obtain normal-form theorems. On the basis of these works, Okada and Takemura reformulated Girard's phase semantics so that it became phase semantics for proof-terms, i.e., lambda-terms. They formulated phase semantics for proof-terms of Laird's dual affine/intuitionistic lambda-calculus and proved the normal-form theorem for Laird's calculus via a completeness theorem. Their semantics was obtained by an application of computability predicates. In this paper, we first formulate phase semantics for proof-terms of second-order intuitionistic propositional logic by modifying Tait-Girard's saturated sets method. Next, we prove the completeness theorem with respect to this semantics, which implies a strong normalization theorem.
There are various well-known paradoxes of modal recombination. This paper offers a solution to a variety of such paradoxes in the form of a new conception of metaphysical modality. On the proposed conception, metaphysical modality exhibits a type of indefinite extensibility. Indeed, for any objective modality there will always be some further, broader objective modality; in other terms, modal space will always be open to expansion.
The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who, much like ordinary human beings, are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.
Serious actualism is the prima facie plausible thesis that things couldn't have been related while being nothing. The thesis plays an important role in a number of arguments in metaphysics, e.g., in Plantinga's argument (Plantinga, Philosophical Studies, 44, 1-20, 1983) for the claim that propositions do not ontologically depend on the things that they are about and in Williamson's argument (Williamson, 2002) for the claim that he, Williamson, is necessarily something. Salmon (Philosophical Perspectives, 1, 49-108, 1987) has put forward what is arguably the most pressing challenge to serious actualism. Salmon's objection is based on a scenario intended to elicit the judgment that merely possible entities may nonetheless be actually referred to, and so may actually have properties. It is shown that predicativism, the thesis that names are true of their bearers, provides the resources for replying to Salmon's objection. In addition, an argument for serious actualism based on Stephanou (Philosophical Review, 116(2), 219-250, 2007) is offered. Finally, it is shown that once serious actualism is conjoined with some minimal assumptions, it implies property necessitism, the thesis that necessarily all properties are necessarily something, as well as a strong comprehension principle for higher-order modal logic according to which for every condition there necessarily is the property of being a thing satisfying that condition.
Several writers have assumed that when in Outline of a Theory of Truth I wrote that the orthodox approach - that is, Tarski's account of the truth definition - admits descending chains, I was relying on a simple compactness theorem argument, and that non-standard models must result. However, I was actually relying on a paper on ‘pseudo-well-orderings’ by Harrison (Transactions of the American Mathematical Society, 131, 527-543, 1968). The descending hierarchy of languages I define is a standard model. Yablo's Paradox later emerged as a key to interpreting the result.