Can there be grounding without necessitation? Can a fact obtain wholly in virtue of metaphysically more fundamental facts, even though there are possible worlds at which the latter facts obtain but not the former? It is an orthodoxy in recent literature about the nature of grounding, and in first-order philosophical disputes about what grounds what, that the answer is no. I will argue that the correct answer is yes. I present two novel arguments against grounding necessitarianism, and show that grounding contingentism is fully compatible with the various explanatory roles that grounding is widely thought to play.
Philosophers have suggested that, in order to understand the particular visual state we are in during picture perception, we should focus on experimental results from vision neuroscience—in particular, on the most rigorous account of the functioning of the visual system that we have from vision neuroscience, namely, the ‘Two Visual Systems Model’. According to the initial version of this model, our visual system can be dissociated, from an anatomo-functional point of view, into two streams: a ventral stream subserving visual recognition, and a dorsal stream subserving the visual guidance of action. Following this model, philosophers have suggested that, since the two streams have different functions, they represent different properties of a picture. However, the original view proposed by the ‘Two Visual Systems Model’, on which there is a strong anatomo-functional dissociation between the two streams, has recently been questioned on both philosophical and experimental grounds. Indeed, several new pieces of evidence suggest that many visual representations in our visual system, related to different tasks, are the result of a deep functional interaction between the streams. In the light of the renewed status of the ‘Two Visual Systems Model’, our best philosophical model of picture perception should also be revised, so as to take into account a view of the process of picture perception informed by the new evidence about such interaction. To date, however, no account fulfilling this role has been offered. The aim of the present paper is precisely to offer such an account. It does so by suggesting that the peculiar visual state we are in during picture perception is subserved by interstream interaction. This proposal allows us to rely on a rigorous philosophical account of picture perception that is, however, also based on the most recent results from neuroscience.
Unless the explanation offered in this paper is endorsed, all the recent evidence from vision neuroscience will remain unexplained under our best empirically informed philosophical theory of picture perception.
The Integrated Information Theory (IIT) is a leading scientific theory of consciousness, which implies a kind of panpsychism. In this paper, I consider whether IIT is compatible with a particular kind of panpsychism, known as Russellian panpsychism, which purports to avoid the main problems of both physicalism and dualism. I will first show that if IIT were compatible with Russellian panpsychism, it would contribute to solving Russellian panpsychism’s combination problem, which threatens to show that the view does not avoid the main problems of physicalism and dualism after all. I then show that the theories are not compatible as they currently stand, in view of what I call the coarse-graining problem. After I explain the coarse-graining problem, I will offer two possible solutions, each involving a small modification of IIT. Given either of these modifications, IIT and Russellian panpsychism may be fully compatible after all, and jointly enable significant progress on the mind–body problem.
I argue that a criterion of theoretical equivalence due to Glymour (Noûs 11(3):227–251, 1977) does not capture an important sense in which two theories may be equivalent. I then motivate and state an alternative criterion that does capture the sense of equivalence I have in mind. The principal claim of the paper is that relative to this second criterion, the answer to the question posed in the title is “yes”, at least on one natural understanding of Newtonian gravitation.
Words are indispensable linguistic tools for beings like us. However, little philosophical work has been done on what words really are. In this paper, I develop a new ontology for words. I argue that (a) words are abstract artifacts that are created to fulfill various kinds of purposes, and (b) words are abstract in the sense that they are not located in space, but they have a beginning and may have an end in time, given that certain conditions are met. What follows from this two-fold argument is that words, from an ontological point of view, are more like musical works, fictional characters, or computer programs than numbers or sets.
The well-known formal semantics of conditionals due to Stalnaker (in: Rescher (ed) Studies in logical theory, Blackwell, Oxford, 1968), Lewis (Counterfactuals, Blackwell, Oxford, 1973a), and Gärdenfors (in: Niiniluoto, Tuomela (eds) The logic and epistemology of scientific change, North-Holland, Amsterdam, 1978; Knowledge in flux, MIT Press, Cambridge, 1988) all fail to distinguish between trivially and nontrivially true indicative conditionals. This problem has been addressed by Rott (Erkenntnis 25(3):345–370, 1986) in terms of a strengthened Ramsey Test. In this paper, we refine Rott’s strengthened Ramsey Test and the corresponding analysis of explanatory relations. We show that our final analysis captures the presumed asymmetry between explanans and explanandum much better than Rott’s original analysis.
I show why old and new claims on the role of counterfactual reasoning for the EPR argument and Bell’s theorem are unjustified: once the logical relation between locality and counterfactual reasoning is clarified, the use of the latter does no harm and the nonlocality result can well follow from the EPR premises. To show why, after emphasizing the role of incompleteness arguments that Einstein developed before the EPR paper, I critically review more recent claims that equate the use of counterfactual reasoning with the assumption of a strong form of realism and argue that such claims are untenable.
It is often thought that metaphysical grounding underwrites a distinctive sort of metaphysical explanation. However, it would be a mistake to think that all metaphysical explanations are underwritten by metaphysical grounding. In service of this claim, I offer a novel kind of metaphysical explanation called metaphysical explanation by constraint, examples of which have been neglected in the literature. I argue that metaphysical explanations by constraint are not well understood as grounding explanations.
According to active inference (which subsumes the framework of predictive processing), action is enabled by a top-down modulation of sensory signals. Computational models of this mechanism complement ideomotor theories of action representation. Such theories postulate common neural representations for action and perception, without specifying how action is enabled by such representations. In active inference, motor commands are replaced by proprioceptive predictions. In order to initiate action through such predictions, sensory prediction errors have to be attenuated. This paper argues that such top-down modulation involves systematic (but paradoxically beneficial) misrepresentations. More specifically, the paper first argues for the following conditional claim. If active inference provides an accurate computational description of how action is enabled in the brain, then action is enabled by systematic misrepresentations. Furthermore, it is argued that an inference to the best explanation provides reason for believing the antecedent is true: Firstly, active inference provides a crucial extension to ideomotor theories. Secondly, active inference explains otherwise puzzling phenomena related to sensory attenuation, e.g. in force-matching or self-tickling paradigms. Taken together, these reasons support the claim that action is indeed enabled by systematic misrepresentations. The claim casts doubt on the assumption that representations are systematically beneficial to the extent that they are true: if the argument in this paper is sound, systematically beneficial misrepresentations may lie at the heart of our neural architecture.
Various claims regarding intertheoretic reduction, weak and strong notions of emergence, and explanatory fictions have been made in the context of first-order thermodynamic phase transitions. By appealing to John Norton's recent distinction between approximation and idealization, I argue that the case study of anyons and fractional statistics, which has received little attention in the philosophy of science literature, is more hospitable to such claims. In doing so, I also identify three novel roles that explanatory fictions fulfill in science. Furthermore, I scrutinize the claim that anyons, as they are ostensibly manifested in the fractional quantum Hall effect, are emergent entities and urge caution. Consequently, it is suggested that a particular notion of strong emergence signals the need for the development of novel physical-mathematical research programs.
Proponents of evolutionary debunking arguments aim to show that certain genealogical explanations of our moral faculties, if true, undermine our claim to moral knowledge. Criticisms of these arguments generally take the debunker’s genealogical explanation for granted. The task of the anti-debunker is thought to be that of reconciling the (supposed) truth of this hypothesis with moral knowledge. In this paper, I shift the critical focus instead to the debunker’s empirical hypothesis and argue that the skeptical strength of an evolutionary debunking argument is dependent upon the evidence for that hypothesis—evidence which, upon further inspection, proves far from compelling. Following that, however, I suggest that the same considerations which spell trouble for the empirical hypotheses of traditional debunking arguments can also be taken to give rise to an alternative—and better supported—style of debunking argument.
A significant part of contemporary social ontology has been focused on understanding forms of collective intentionality. This paper suggests that the contested nature of some institutional matters makes this kind of approach problematic. An alternative approach is developed instead, one oriented towards a micro-level analysis of the institutional constraints that we face in everyday life, which can make sense of how there can be institutional facts that are deeply contested and yet still real. The model is applied to two main examples, sexism and racism, and it is argued that on this approach both can be understood as institutions in our societies.
Some thoughts just come to mind together. This is usually thought to happen because they are connected by associations, which the mind follows. Such an explanation assumes that there is a particular kind of simple psychological process responsible. This view has encountered criticism recently. In response, this paper aims to characterize a general understanding of associative simplicity, which might support the distinction between associative processing and alternatives. I argue that there are two kinds of simplicity that are treated as characteristic of association, and as a result three possible versions of associative processing. This provides a framework that informs our understanding of association as a current and historical concept, including how various specific versions in different parts of psychology relate to one another. This framework can also guide debates over normative evaluations of actions produced by processes thought to be associative.
Titelbaum (in: Gendler T, Hawthorne J (eds) Oxford studies in epistemology, 2015) has recently argued that the Enkratic Principle is incompatible with the view that rational belief is sensitive to higher-order defeat. That is to say, if it cannot be rational to have akratic beliefs of the form “p, but I shouldn’t believe that p,” then rational beliefs cannot be defeated by higher-order evidence, which indicates that they are irrational. In this paper, I distinguish two ways of understanding Titelbaum’s argument, and argue that neither version is sound. The first version can be shown to rest on a subtle, but crucial, misconstrual of the Enkratic Principle. The second version can be resisted through careful consideration of cases of higher-order defeat. The upshot is that proponents of the Enkratic Principle are free to maintain that rational belief is sensitive to higher-order defeat.
The recent literature on the epistemology of disagreement focuses on the rational response question: how are you rationally required to respond to a doxastic disagreement with someone, especially with someone you take to be your epistemic peer? A doxastic disagreement with someone also confronts you with a slightly different question. This question, call it the epistemic trust question, is: how much should you trust your own epistemic faculties relative to the epistemic faculties of others? Answering the epistemic trust question is important for the epistemology of disagreement because it sheds light on the rational response question. My main aim in this paper is to argue—against recent attempts to show otherwise—that epistemic self-trust does not provide a reason for remaining steadfast in doxastic disagreements with others.
Thomas Kroedel has recently proposed an interesting Pareto-style condition on permissible belief. Despite the condition’s initial plausibility, this paper aims to provide a counterexample to it. The example is based on the view that a proper condition on permissible belief should not give permission to believe a proposition that undermines one’s belief system, or whose epistemic standing decreases in the light of one’s de facto beliefs.
Logic arguably plays a role in the normativity of reasoning. In particular, there are plausible norms of belief/disbelief whose antecedents are constituted by claims about what follows from what. But is logic also relevant to the normativity of agnostic attitudes? The question here is whether logical entailment also puts constraints on what kinds of things one can suspend judgment about. In this paper I address that question and I give a positive answer to it. In particular, I advance two logical norms of agnosticism, where the first one allows us to assess situations in which the subject is agnostic about the conclusion of a valid argument and the second one allows us to assess situations in which the subject is agnostic about one of the premises of a valid argument.
In this paper, I articulate an argument for incompatibilism about moral responsibility and determinism. My argument comes in the form of an extended story, modeled loosely on Peter van Inwagen’s “rollback argument” scenario. I thus call it “the replication argument.” As I aim to bring out, though the argument is inspired by so-called “manipulation” and “original design” arguments, the argument is not a version of either such argument—and plausibly has advantages over both. The result, I believe, is a more convincing incompatibilist argument than those we have considered previously.
Although there is increasing interest in philosophy of science in transcendental reasoning, there is hardly any discussion of transcendental arguments. This may be related to the dominant understanding of transcendental arguments as a tool to defeat epistemological skepticism, and the power of transcendental arguments to achieve this goal has convincingly been disputed by Barry Stroud. This contribution therefore proposes, first, a new definition of the transcendental argument which allows its presentation as a simple modus ponens and, second, a pragmatist re-interpretation of this argument form that leaves it to the scientific community to debate, criticize, refine, or reaffirm its core claim: a premise stating that the truth of a certain assumption is a necessary condition for something that is generally accepted. The proposed “logico-pragmatist interpretation” highlights the role of transcendental arguments as a methodological step to move science forward, just as abduction and inference to the best explanation do.
Does willful ignorance mitigate blameworthiness? In many legal systems, willfully ignorant wrongdoers are considered as blameworthy as knowing wrongdoers. This is called the ‘equal culpability thesis’ (ECT). Given that legal practice depends on it, the issue has obvious importance. Interestingly, however, there has been hardly any philosophical reflection on ECT. A recent exception is Alexander Sarch, who defends a restricted version of ECT. On Sarch’s view, ECT is true whenever willfully ignorant agents incur additional blameworthiness for their ignorance. In this paper, I defend an alternative view, according to which ECT is true whenever the motives of willfully ignorant and knowing wrongdoers are equally bad.