We introduce an atomic formula $\vec{y} \perp_{\vec{x}} \vec{z}$, intuitively saying that the variables $\vec{y}$ are independent of the variables $\vec{z}$ whenever the variables $\vec{x}$ are kept constant. We contrast this with dependence logic $\mathcal{D}$, based on the atomic formula $=(\vec{x},\vec{y})$, in fact equivalent to $\vec{y} \perp_{\vec{x}} \vec{y}$, which says that the variables $\vec{y}$ are totally determined by the variables $\vec{x}$. We show that $\vec{y} \perp_{\vec{x}} \vec{z}$ gives rise to a natural logic capable of formalizing basic intuitions about independence and dependence. We also show that $\vec{y} \perp_{\vec{x}} \vec{z}$ can be used to give partially ordered quantifiers and IF-logic an alternative interpretation that avoids some of the shortcomings, related to so-called signaling, of interpretations based on $=(\vec{x},\vec{y})$.
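The team-semantics reading of these atoms can be made concrete on finite teams (sets of assignments). The following Python sketch is our own illustrative code, assuming the standard semantics: a team satisfies $\vec{y} \perp_{\vec{x}} \vec{z}$ when any two assignments agreeing on $\vec{x}$ can be merged, within the team, on their $\vec{y}$- and $\vec{z}$-values.

```python
from itertools import product

def satisfies_independence(team, x, y, z):
    """Conditioned independence atom  y ⊥_x z  in team semantics:
    for any two assignments s, s2 in the team that agree on x, some
    assignment t in the team agrees with s on x and y and with s2 on z.
    A team is a list of assignments (dicts); x, y, z are variable lists."""
    val = lambda s, vs: tuple(s[v] for v in vs)
    for s, s2 in product(team, repeat=2):
        if val(s, x) != val(s2, x):
            continue
        if not any(val(t, x) == val(s, x) and
                   val(t, y) == val(s, y) and
                   val(t, z) == val(s2, z) for t in team):
            return False
    return True

def satisfies_dependence(team, x, y):
    """Dependence atom =(x, y): assignments agreeing on x agree on y."""
    val = lambda s, vs: tuple(s[v] for v in vs)
    return all(val(s, y) == val(s2, y)
               for s, s2 in product(team, repeat=2)
               if val(s, x) == val(s2, x))
```

On the team {(x=0, y=0), (x=1, y=1)} both $=(\vec{x},\vec{y})$ and $\vec{y} \perp_{\vec{x}} \vec{y}$ hold, while on {(x=0, y=0), (x=0, y=1)} both fail, illustrating the claimed equivalence on small examples.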

In the current paper, we re-examine how abstract argumentation can be formulated in terms of labellings, and how the resulting theory can be applied in the field of modal logic. In particular, we are able to express the (complete) extensions of an argumentation framework as models of a set of modal logic formulas that represents the argumentation framework. Using this approach, it becomes possible to define the grounded extension in terms of modal logic entailment.
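The labelling view of complete extensions can be illustrated by brute-force enumeration. The sketch below is our own illustrative code, assuming the standard three-valued labellings: an argument is labelled in exactly when all its attackers are out, and out exactly when some attacker is in; the grounded labelling is then the complete labelling with the fewest in-labelled arguments.

```python
from itertools import product

def complete_labellings(args, attacks):
    """Enumerate complete labellings of an argumentation framework.
    args: list of arguments; attacks: set of (attacker, target) pairs.
    A labelling is complete iff an argument is 'in' exactly when all its
    attackers are 'out', and 'out' exactly when some attacker is 'in'.
    Brute force over all labellings; fine for small frameworks."""
    attackers = {a: [b for (b, t) in attacks if t == a] for a in args}
    labellings = []
    for combo in product(('in', 'out', 'undec'), repeat=len(args)):
        lab = dict(zip(args, combo))
        if all((lab[a] == 'in') == all(lab[b] == 'out' for b in attackers[a])
               and (lab[a] == 'out') == any(lab[b] == 'in' for b in attackers[a])
               for a in args):
            labellings.append(lab)
    return labellings

def grounded_labelling(labellings):
    """The grounded extension corresponds to the complete labelling with
    the fewest 'in' arguments (equivalently, the most 'undec')."""
    return min(labellings, key=lambda l: sum(v == 'in' for v in l.values()))
```

For two mutually attacking arguments there are three complete labellings (either argument in, or both undecided), and the grounded labelling leaves both undecided.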

We take a logical approach to threshold models, used to study the diffusion of opinions, new technologies, infections, or behaviors in social networks. Threshold models consist of a network graph of agents connected by a social relationship and a threshold value which regulates the diffusion process. Agents adopt a new behavior/product/opinion when the proportion of their neighbors who have already adopted it meets the threshold. Under this diffusion policy, threshold models develop dynamically towards a guaranteed fixed point. We construct a minimal dynamic propositional logic to describe the threshold dynamics and show that the logic is sound and complete. We then extend this framework with an epistemic dimension and investigate how information about more distant neighbors’ behavior allows agents to anticipate changes in behavior of their closer neighbors. Overall, our logical formalism captures the interplay between the epistemic and social dimensions in social networks.
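The diffusion policy described above is easy to make precise. The following Python sketch (our own illustrative code, not from the paper) iterates the threshold update on a network given as an adjacency map; since adoption is monotone, it reaches the guaranteed fixed point in at most as many rounds as there are agents.

```python
def diffuse(neighbors, seed, theta):
    """Run the threshold-model dynamics to its fixed point.
    neighbors maps each agent to its set of neighbors; seed is the
    initial set of adopters; theta is the threshold.  A non-adopter
    adopts as soon as the proportion of its neighbors that have
    adopted reaches theta.  Adoption is monotone, so the loop ends."""
    adopters = set(seed)
    while True:
        new = {a for a in neighbors
               if a not in adopters and neighbors[a]
               and len(neighbors[a] & adopters) / len(neighbors[a]) >= theta}
        if not new:
            return adopters  # guaranteed fixed point
        adopters |= new
```

On a line network 1-2-3-4 with seed {1}, a threshold of 0.5 cascades to all agents, while a threshold of 0.6 stops the diffusion immediately.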

Occam’s razor directs us to adopt the simplest hypothesis consistent with the evidence. Learning theory provides a precise definition of the inductive simplicity of a hypothesis for a given learning problem. This definition specifies a learning method that implements an inductive version of Occam’s razor. As a case study, we apply Occam’s inductive razor to causal learning. We consider two causal learning problems: learning a causal graph structure that represents global causal connections among a set of domain variables, and learning context-sensitive causal relationships that hold not globally, but only relative to a context. For causal graph learning, Occam’s inductive razor directs us to adopt the model that explains the observed correlations with a minimum number of direct causal connections. For expanding a causal graph structure to include context-sensitive relationships, Occam’s inductive razor directs us to adopt the expansion that explains the observed correlations with a minimum number of free parameters. This is equivalent to explaining the correlations with a minimum number of probabilistic logical rules. The paper provides a gentle introduction to the learning-theoretic definition of inductive simplicity and the application of Occam’s razor to causal learning.

Utilizing an idea that first appears in Gerhard Gentzen’s unpublished manuscripts, we generate, from a ground sequent (that is, a logical axiom), an exhaustive repertoire of all the possible inference rules that are related to the left implication rule of the sequent calculus. We discuss the similarities and differences of these derived rules, as well as their interaction with the right implication rule under cut and the structural axiom. We further consider the question of analyticity of cuts in calculi that use one of the new rules instead of the standard left implication rule.
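For orientation, the standard left implication rule of a two-sided sequent calculus, and the ground sequent for implication, can be written as follows (a sketch in generic LK-style notation; the paper's own calculus may differ in structural details):

```latex
% Standard left implication rule (L->):
\frac{\Gamma \Rightarrow \Delta, A \qquad \Gamma, B \Rightarrow \Delta}
     {\Gamma, A \rightarrow B \Rightarrow \Delta}

% Ground sequent (logical axiom) for implication:
A,\; A \rightarrow B \Rightarrow B
```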

We propose a new perspective on logics of computation by combining instantial neighborhood logic $\mathsf{INL}$ with bisimulation safe operations adapted from $\mathsf{PDL}$. $\mathsf{INL}$ is a recent modal logic, based on an extended neighborhood semantics which permits quantification over individual neighborhoods plus their contents. This system has a natural interpretation as a logic of computation in open systems. Motivated by this interpretation, we show that a number of familiar program constructors can be adapted to instantial neighborhood semantics to preserve invariance for instantial neighborhood bisimulations, the appropriate bisimulation concept for $\mathsf{INL}$. We also prove that our extended logic $\mathsf{IPDL}$ is a conservative extension of dual-free game logic, and its semantics generalizes the monotone neighborhood semantics of game logic. Finally, we provide a sound and complete system of axioms for $\mathsf{IPDL}$, and establish its finite model property and decidability.

We prove that the positive fragment of first-order intuitionistic logic in the language with two individual variables and a single monadic predicate letter, without functional symbols, constants, and equality, is undecidable. This holds true regardless of whether we consider semantics with expanding or constant domains. We then generalise this result to the intervals $[\mathbf{QBL}, \mathbf{QKC}]$ and $[\mathbf{QBL}, \mathbf{QFL}]$, where QKC is the logic of the weak law of the excluded middle and QBL and QFL are first-order counterparts of Visser’s basic and formal logics, respectively. We also show that, for most “natural” first-order modal logics, the two-variable fragment with a single monadic predicate letter, without functional symbols, constants, and equality, is undecidable, regardless of whether we consider semantics with expanding or constant domains. These include all sublogics of QKTB, QGL, and QGrz—among them, QK, QT, QKB, QD, QK4, and QS4.

In the previous paper with a similar title (see Shtakser in Stud Log 106(2):311–344, 2018), we presented a family of propositional epistemic logics whose languages are extended by two ingredients: (a) quantification over modal (epistemic) operators or over agents of knowledge and (b) predicate symbols that take modal (epistemic) operators (or agents) as arguments. We denoted this family by $\mathcal{PEL}_{(QK)}$. The family $\mathcal{PEL}_{(QK)}$ is defined on the basis of a decidable higher-order generalization of the loosely guarded fragment (HO-LGF) of first-order logic, and since HO-LGF is decidable, we obtain the decidability of the logics of $\mathcal{PEL}_{(QK)}$. In this paper we construct an alternative family of decidable propositional epistemic logics whose languages include ingredients (a) and (b). We denote this family by $\mathcal{PEL}^{alt}_{(QK)}$. Here we use another decidable fragment of first-order logic: the two-variable fragment of first-order logic with two equivalence relations (FO$^2$+2E), whose decidability was proved in Kieroński and Otto (J Symb Log 77(3):729–765, 2012). The families $\mathcal{PEL}^{alt}_{(QK)}$ and $\mathcal{PEL}_{(QK)}$ differ in expressive power. In particular, we exhibit classes of epistemic sentences, considered in works on first-order modal logic, that demonstrate this difference.

In this second installment to Gruszczyński and Pietruszczak (Stud Log, 2018. https://doi.org/10.1007/s11225-018-9786-8 ) we carry out an analysis of spaces of points of Grzegorczyk structures. At the outset we introduce the notions of a concentric and an $\omega$-concentric topological space, and we recollect some facts proven in the first part which are important for the sequel. Theorem 2.9 is a strengthening of Theorem 5.13, as we obtain a stronger conclusion while weakening the Tychonoff separation axiom to mere regularity. This leads to a stronger version of Theorem 6.10 (in the form of Corollary 2.10). Further, we show that Grzegorczyk points are maximal contracting filters in the sense of De Vries (Compact spaces and compactifications, Van Gorcum and Comp. N.V., 1962), but that the converse inclusion is not necessarily true. We also compare the notions of a Grzegorczyk point and an ultrafilter, and establish several properties of topological spaces based on Grzegorczyk structures. The main results of the paper are representation and completion theorems for G-structures. We prove both set-theoretical and topological representation theorems for various classes of G-structures. We also present a topological object duality theorem for the class of complete G-structures and the class of concentric spaces, both restricted to structures satisfying the countable chain condition. We conclude the paper by proving the equivalence of the original Grzegorczyk axiom with the one we accept as axiom (G).

We study the learning power of iterated belief revision methods. Successful learning is understood as convergence to correct, i.e., true, beliefs. We focus on the issue of universality: whether or not a particular belief revision method is able to learn everything that in principle is learnable. We provide a general framework for interpreting belief revision policies as learning methods. We focus on three popular cases: conditioning, lexicographic revision, and minimal revision. Our main result is that conditioning and lexicographic revision can drive a universal learning mechanism, provided that the observations include only and all true data, and provided that a non-standard, i.e., non-well-founded prior plausibility relation is allowed. We show that a standard, i.e., well-founded belief revision setting is in general too narrow to guarantee universality of any learning method based on belief revision. We also show that minimal revision is not universal. Finally, we consider situations in which observational errors (false observations) may occur. Given a fairness condition, which says that only finitely many errors occur, and that every error is eventually corrected, we show that lexicographic revision is still universal in this setting, while the other two methods are not.
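The three revision policies can be compared concretely on finite plausibility orders, represented as lists of layers from most to least plausible. The sketch below is our own illustrative code, assuming the standard definitions of the three methods; the function names are ours.

```python
def conditioning(layers, E):
    """Conditioning: simply discard all worlds outside E."""
    return [[w for w in L if w in E] for L in layers
            if any(w in E for w in L)]

def lexicographic(layers, E):
    """Lexicographic revision: every E-world becomes more plausible than
    every non-E-world; relative order within each group is preserved."""
    ins = [[w for w in L if w in E] for L in layers]
    outs = [[w for w in L if w not in E] for L in layers]
    return [L for L in ins + outs if L]

def minimal(layers, E):
    """Minimal (conservative) revision: only the most plausible E-worlds
    are promoted to the top; the rest of the order is left untouched.
    Assumes some layer contains a world from E."""
    top = next([w for w in L if w in E]
               for L in layers if any(w in E for w in L))
    rest = [[w for w in L if w not in top] for L in layers]
    return [top] + [L for L in rest if L]
```

On the order [['a'], ['b'], ['c']] with new information E = {'b', 'c'}, conditioning discards 'a' entirely, lexicographic revision demotes it below all E-worlds, and minimal revision promotes only 'b'.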

(I) Synchronic norms of theory choice, a traditional concern in scientific methodology, restrict the theories one can choose in light of given information. (II) Diachronic norms of theory change, as studied in belief revision, restrict how one should change one’s current beliefs in light of new information. (III) Learning norms concern how best to arrive at true beliefs. In this paper, we undertake to forge some rigorous logical relations between the three topics. Concerning (III), we explicate inductive truth conduciveness in terms of optimally direct convergence to the truth, where optimal directness is explicated in terms of reversals and cycles of opinion prior to convergence. Concerning (I), we explicate Ockham’s razor and related principles of choice in terms of the information topology of the empirical problem context and show that the principles are necessary for reversal or cycle optimal convergence to the truth. Concerning (II), we weaken the standard principles of AGM belief revision theory in intuitive ways that are also necessary (and in some cases, sufficient) for reversal or cycle optimal convergence. Then we show that some of our weakened principles of change entail corresponding principles of choice, completing the triangle of relations between (I), (II), and (III).

Traditionally, belief change is modelled as the construction of a belief set that satisfies a success condition. The success condition is usually that a specified sentence should be believed (revision) or not believed (contraction). Furthermore, most models of belief change employ a select-and-intersect strategy. This means that a selection is made among primary objects that satisfy the success condition, and the intersection of the selected objects is taken as outcome of the operation. However, the select-and-intersect method is difficult to justify, in particular since the primary objects (usually possible worlds or remainders) are not themselves plausible outcome candidates. Some of the most controversial features of belief change theory, such as recovery and the impossibility of Ramsey test conditionals, are closely connected with the select-and-intersect method. It is proposed that a selection mechanism should instead operate directly on the potential outcomes, and select only one of them. In this way many of the problems that are associated with the select-and-intersect method can be avoided. This model is simpler than previous models in the important Ockhamist sense of doing away with intermediate, cognitively inaccessible objects. However, the role of simplicity as a choice criterion in the direct selection among potential outcomes is left as an open issue.

This paper describes an approach for reasoning in a dynamic domain with nondeterministic actions in which an agent’s (categorical) beliefs correspond to the simplest, or most plausible, course of events consistent with the agent’s observations and beliefs. The account is based on an epistemic extension of the situation calculus, a first-order theory of reasoning about action that accommodates sensing actions. In particular, the account is based on a qualitative theory of nondeterminism. Our position is that for commonsense reasoning, the world is most usefully regarded as deterministic, and that nondeterminism is an epistemic phenomenon, arising from an agent’s limited awareness and perception. The account offers several advantages: an agent has a set of categorical (as opposed to probabilistic) beliefs, yet can deal with equally-likely outcomes (such as in flipping a fair coin) or with outcomes of differing plausibility (such as an action that on rare occasions may fail). The agent maintains as its set of contingent beliefs the most plausible, or simplest, picture of the world, consistent with its beliefs and actions it believes it executed; yet it may modify these in light of later information.

Parametric logic is a framework that generalises classical first-order logic. A generalised notion of logical consequence—a form of preferential entailment based on a closed world assumption—is defined as a function of some parameters. A concept of possible knowledge base—the counterpart to the consistent theories of first-order logic—is introduced. The notion of compactness is weakened, and the degree of weakening is quantified by a nonnull ordinal—the larger the ordinal, the more significant the weakening. For every possible knowledge base T, a hierarchy of sentences that are generalised logical consequences of T is built. The first layer of the hierarchies corresponds to sentences that can be obtained by a deductive inference, characterised by the compactness property. The second layer of the hierarchies corresponds to sentences that can be obtained by an inductive inference, characterised by the property of weak compactness quantified by 1. Weaker forms of compactness—quantified by larger nonnull ordinals—determine higher layers in the hierarchies, corresponding to more complex inferences. The naturalness of the hierarchies built over the possible knowledge bases is attested by fundamental connections with notions from learning theory—classification in the limit, with or without a bounded number of mind changes—and from topology—in reference to the Borel and the difference hierarchies. In this paper, we introduce the key model-theoretic aspects of parametric logic, justify the concept of the knowledge base, define the hierarchies of generalised logical consequences, and illustrate their relevance to nonmonotonic reasoning. More specifically, we show that the degree of nonmonotonicity required to infer a sentence can be characterised by the least nonnull ordinal that quantifies the weakening of compactness used to locate the inferred sentence in the hierarchies.

We establish a duality between the category of involutive bisemilattices and the category of semilattice inverse systems of Stone spaces, using Stone duality on one side and the representation of involutive bisemilattices as Płonka sums of Boolean algebras on the other. Furthermore, we show that the dual space of an involutive bisemilattice can be viewed as a GR space with involution, a generalization of the spaces introduced by Gierz and Romanowska, equipped with an involution as an additional operation.

We follow the ideas given by Chen and Grätzer to represent Stone algebras and adapt them for the case of Stonean residuated lattices. Given a Stonean residuated lattice, we consider the triple formed by its Boolean skeleton, its algebra of dense elements and a connecting map. We define a category whose objects are these triples and suitably defined morphisms, and prove that we have a categorical equivalence between this category and that of Stonean residuated lattices. We compare our results with other works and show some applications of the equivalence.