Overlapping consensus in pluralist societies: simulating Rawlsian full reflective equilibrium
The fact of reasonable pluralism in liberal democracies threatens the stability of such societies. John Rawls proposed a solution to this problem: the different comprehensive moral doctrines endorsed by citizens overlap on a shared political conception of justice, e.g. his justice as fairness. Optimally, accepting the political conception is individually justified for each citizen by the method of wide reflective equilibrium. If this holds, society is in full reflective equilibrium. Rawls does not investigate in detail the conditions under which a full reflective equilibrium is possible or likely. This paper outlines a new strategy for addressing this open question by using the formal model of reflective equilibrium recently developed by Beisbart et al. First, it is argued that a bounded rationality perspective is appropriate, which requires certain changes in the model. Second, the paper rephrases the open question about Rawlsian full reflective equilibrium in terms of the model. The question is narrowed down by focusing on the inferential connections between comprehensive doctrines and the political conception. Rawls himself makes a demanding assumption about which connections are necessary for a full reflective equilibrium. Third, the paper presents a simulation study design that is focused on simplicity. The results are discussed; they fit with Rawls's assumption. However, because of the strong idealisations, they provide a useful benchmark rather than a final answer. The paper closes with suggestions for more elaborate study designs.
Does reflective equilibrium help us converge?
I address the worry that reflective equilibrium is too weak as an account of justification because it fails to let differing views converge. I take up informal aspects of convergence and operationalise them in a formal model of reflective equilibrium. This allows for exploration by means of computer simulation. Findings show that the formal model does not yield unique outputs, but it still boosts agreement. I conclude from this that reflective equilibrium is best seen as a pluralist account of justification that cannot be accused of resulting in an "anything goes" relativism.
Aristotelian universals, strong immanence, and construction
The Aristotelian view of universals, according to which each universal generically depends for its existence on its instantiations, has recently come under attack by a series of ground-theoretic arguments. The latest such argument, presented by Raven, promises to offer several significant improvements over its predecessors, such as avoiding commitment to the transitivity of ground and offering new reasons for the metaphysical priority of universals over their instantiations. In this paper, we argue that Raven's argument does not effectively avoid said commitment and that Raven's new reasons fail. Moreover, we present a novel ground-theoretic interpretation of the Aristotelian view, referred to as strong immanence, and introduce a new argument against the Aristotelian view that is intended to sidestep any commitment to the transitivity of ground.
Performative updates and the modeling of speech acts
This paper develops a way to model performative speech acts within a framework of dynamic semantics. It introduces a distinction between performative and informative updates, where informative updates filter out indices of context sets (cf. Stalnaker, in Cole (ed.), Pragmatics, Academic Press, 1978), whereas performative updates change their indices (cf. Szabolcsi, in Kiefer (ed.), Hungarian linguistics, John Benjamins, 1982). The notion of index change is investigated in detail, identifying implementations by a function or by a relation. Declarations are purely performative updates that just enforce an index change on a context set. Assertions are analyzed as combinations of a performative update that introduces a guarantee by the speaker for the truth of the proposition, and an informative update that restricts the context set so that this proposition is true. The first update is the illocutionary act characteristic of assertions; the second is the primary perlocutionary act, and is up for negotiation with the addressee. Several other speech acts are discussed, in particular commissives, directives, exclamatives, optatives, and definitions, which are all performative and differ from related assertions. The paper concludes with a discussion of locutionary acts, which are modelled as index changers as well, and proposes a novel analysis of the performative marker.
A plea for descriptive social ontology
Social phenomena, quite like mental states in the philosophy of mind, are often regarded as potential troublemakers from the start, particularly if they are approached with certain explanatory commitments, such as naturalism or social individualism, already in place. In this paper, we argue that such explanatory constraints should be at least initially bracketed if we are to arrive at an adequate, unbiased description of social phenomena. Legitimate explanatory projects, or so we maintain, such as those of making the social world fit within the natural world with the help of, e.g., collective intentionality, social individualism, and the like, should neither exclude nor influence the prior description of social phenomena. Just as we need a description of the mental that is not biased, for example, by (anti)physicalist constraints, we need a description of the social that is not biased, for example, by (anti)individualist or (anti)naturalist commitments. Descriptive social ontology, as we shall conceive of it, is not incompatible with the adoption of explanatory frameworks in social ontology; rather, the descriptive task, according to our conception, ought to be recognized as prior to the explanatory project in the order of inquiry. If social phenomena are, for example, to be reduced to nonsocial (e.g., psychological or physical) phenomena, we first need to understand clearly what the social candidates for the reduction in question are. While such descriptive or naïve approaches have been influential in general metaphysics (see Fine 2017), they have so far not been prominent in analytic social ontology (though things are different outside of analytic philosophy; see esp. Reinach 1913). In what follows, we shall outline the contours of a descriptive approach by arguing, first, that description and explanation need to be distinguished as two distinct ways of engaging with social phenomena. Second, we defend the claim that the descriptive project ought to be regarded as prior to the explanatory project in the order of inquiry. We begin, in Section 2, by considering two different ways of engaging with mental phenomena: a descriptive approach taken by descriptive psychology and an explanatory approach utilized in analytic philosophy of mind. We take these two ways of approaching the study of the mind to be analogous to the distinction we want to draw in social ontology between a descriptive and an explanatory approach to the study of social phenomena. We consider next, in Section 3, how our approach compares to neighboring perspectives familiar from general metaphysics and philosophy more broadly, such as Aristotle's emphasis on "saving the appearances", Strawson's distinction between descriptive and revisionary metaphysics, and Fine's contrast between naïve and foundational metaphysics. In Section 4, we apply the proposed descriptive/explanatory distinction to the domain of social ontology and argue that descriptive social ontology ought to take precedence in the order of inquiry over explanatory social ontology. Finally, in Section 5, we consider and respond to several objections to which our account might seem to be susceptible.
The individuation of mathematical objects
Against mathematical platonism, it is sometimes objected that mathematical objects are mysterious. One possible elaboration of this objection is that the individuation of mathematical objects cannot be adequately explained. This suggests that facts about the numerical identity and distinctness of mathematical objects require an explanation, but that their supposed nature precludes us from providing one. In this paper, we evaluate this nominalist objection by exploring three ways in which mathematical objects may be individuated: by the intrinsic properties they possess, by the relations they stand in, and by their underlying 'substance'. We argue that only the third mode of individuation raises metaphysical problems that could substantiate the claim that mathematical objects are somehow mysterious. Since the platonist is under no obligation to accept this thesis over the alternatives, we conclude that, at least as far as individuation is concerned, the nominalist objection has no bite.
Natural language syntax complies with the free-energy principle
Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference in accord with the free-energy principle (FEP). While conceptual advances alongside modelling and simulation work have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating syntactic objects. We argue that recently proposed principles of economy in language design, such as "minimal search" criteria from theoretical syntax, adhere to the FEP. This affords the FEP a greater degree of explanatory power with respect to higher language functions, and offers linguistics a grounding in first principles with respect to computability. While we mostly focus on building new principled conceptual relations between syntax and the FEP, we also show, through a sample of preliminary examples, how both tree-geometric depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression algorithm) can be used to accurately predict legal operations on syntactic workspaces, directly in line with formulations of variational free energy minimization. This is used to motivate a general principle of language design that we term Turing-Chomsky Compression (TCC). We use TCC to align the concerns of linguists with the normative account of self-organization furnished by the FEP, by marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference.
Realism and the point at infinity: The end of the line?
Philosophers of mathematics often rely on the historical progress of mathematics in support of mathematical realism. These histories typically build on formal semantic tools to evaluate the changes in mathematics, and on this basis present later mathematical concepts as refined versions of earlier concepts which are taken to be vague. Claiming that this view does not apply to mathematical concepts in general, we present a case study concerning projective geometry, to which we apply the tools of cognitive linguistics to analyse the developmental trajectory of the domain. On the basis of this analysis, we argue for the existence of two conceptually incompatible inferential structures, occurring at distinct moments in history, both of which yield the same projective geometric theorems; the first was invoked by the French mathematicians Girard Desargues (1591-1661) and Jean-Victor Poncelet (1788-1867), while the second characterises a specifically modern mode. We demonstrate that neither of these inferential structures can be considered a refinement of the other. This case of conceptual development poses a problem for the standard account of progress and its bearing on mathematical realism. Our analysis suggests that the features distinguishing the underlying conceptually incompatible inferential structures are invisible to the standard application of the tools of formal semantics. This case study thus stands as an example of how, and why, linguistics, specifically cognitive linguistics, should inform the philosophy of mathematics.
Mandevillian vices
Bernard Mandeville argued that traits that have traditionally been seen as detrimental or reprehensible, such as greed, ambition, vanity, and the willingness to deceive, can produce significant social goods. He went so far as to suggest that a society composed of individuals who embody these vices would, under certain constraints, be better off than one composed only of those who embody the virtues of self-restraint. In the twentieth century, Mandeville's insights were taken up in economics by John Maynard Keynes, among others. More recently, philosophers have drawn analogies to Mandeville's ideas in the domains of epistemology and morality, arguing that traits that are typically understood as epistemic or moral vices (e.g. closed-mindedness, vindictiveness) can lead to beneficial outcomes for the groups in which individuals cooperate, deliberate, and decide, for instance by propitiously dividing the cognitive labor involved in critical inquiry and introducing transient diversity. We argue that mandevillian virtues have a negative counterpart, mandevillian vices, which are traits that are beneficial to or admirable in their individual possessor, but are or can be systematically detrimental to the group to which that individual belongs. Whilst virtue ethics and epistemology prescribe character traits that are good for every moral and epistemic agent, and ideally across all situations, mandevillian virtues show that group dynamics can complicate this picture. In this paper, we provide a unifying explanation of the main mechanism responsible for mandevillian traits in general and motivate the case for the opposite of mandevillian virtues, namely mandevillian vices.
Is the fine-tuning evidence for a multiverse?
Our best current science seems to suggest that the laws of physics and the initial conditions of our universe are fine-tuned for the possibility of life. A significant number of scientists and philosophers believe that the fine-tuning is evidence for the multiverse hypothesis. This paper focuses on a much-discussed objection to the inference from the fine-tuning to the multiverse: the charge that this line of reasoning commits the inverse gambler's fallacy. Despite the existence of a literature going back decades, this philosophical debate has made little contact with scientific discussion of fine-tuning and the multiverse, which mainly revolves around a specific form of the multiverse hypothesis rooted in eternal inflation combined with string theory. Because of this, potentially important implications from science to philosophy, and vice versa, have been left underexplored. In this paper, I take a first step toward joining up these two discussions by arguing that attention to the eternal inflation + string theory conception of the multiverse supports the inverse gambler's fallacy charge. It does this by supporting the idea that our universe is contingently fine-tuned, thus addressing the concern that proponents of the inverse gambler's fallacy charge have assumed this without argument.
Salient semantics
Semantic features are components of concepts. In philosophy, there is a predominant focus on those features that are necessary (and jointly sufficient) for the application of a concept. Consequently, the method of cases has been the paradigm tool among philosophers, including experimental philosophers. However, whether a feature is salient is often far more important for cognitive processes like memory, categorization, recognition and even decision-making than whether it is necessary. The primary objective of this paper is to emphasize the significance of researching salient features of concepts. I thereby advocate the use of semantic feature production tasks, which not only enable researchers to determine whether a feature is salient, but also provide a complementary method for studying ordinary language use. I will discuss empirical data on three concepts, conspiracy theory, female/male professor, and life, to illustrate that semantic feature production tasks can help philosophers (a) identify those salient features that play a central role in our reasoning about and with concepts, (b) examine socially relevant stereotypes, and (c) investigate the structure of concepts.
Partners in crime? Radical scepticism and malevolent global conspiracy theories
Although academic work on conspiracy theory has taken off in the last two decades, in other disciplines as well as in epistemology, the similarities between global sceptical scenarios and global conspiracy theories have not been the focus of attention. The main reason for this lacuna probably stems from the fact that most philosophers take radical scepticism very seriously, while, for the most part, regarding 'conspiracy thinking' as epistemically defective. Defenders of conspiracy theory, on the other hand, tend not to be that interested in undermining radical scepticism, since their primary goal is to save conspiracy theories from the charge of irrationality. In this paper, I argue that radical sceptical scenarios and global conspiracy theories exhibit importantly similar features, which raises a serious dilemma for the 'orthodox' view that holds that while we must respond to radical scepticism, global conspiracy theories can simply be dismissed. For if, as I will show, both scenarios can be seen to be epistemically on a par, then either radical sceptical scenarios are as irrational as global conspiracy theories or neither type of scenario is intrinsically irrational. I argue for the first option by introducing a distinction between 'local' and 'global' sceptical scenarios and showing how this distinction maps onto contemporary debates concerning how best to understand the notion of a 'conspiracy theory'. I demonstrate that, just as in the case of scepticism, 'local' conspiracies are, at least in principle, detectable and, hence, epistemically unproblematic, while global conspiracy theories, like radical scepticism, are essentially invulnerable to any potential counterevidence. This renders them theoretically vacuous and idle, as everything and nothing is compatible with what these 'theories' assert. I also show that radical sceptical scenarios and global conspiracy theories face the self-undermining problem: as soon as global unreliability is posited, the ensuing radical doubt swallows its children, namely the coherence of the sceptic's proposal or the conspiracy theorist's preferred conspiracy. I conclude that radical sceptical scenarios and global conspiracy theories are indeed partners in crime and should, therefore, be regarded as equally dubious.
Ethno-racial categorisations for biomedical studies: the fair selection of research participants and population stratification
We argue that there are neither scientific nor social reasons to require gathering ethno-racial data, as defined in the US legal regulations, if researchers have no prior hypotheses as to how to connect this type of categorisation of human participants in clinical trials with any mechanisms that could explain alleged interracial health differences and guide treatment choice. Although we agree with the normative perspective embedded in the calls for the fair selection of participants for biomedical research, we demonstrate that current attempts to provide and elucidate the criteria for the fair selection of participants, in particular those taking into account ethno-racial categories, overlook important epistemic and normative challenges to implementing the results of such race-sorting requirements. We discuss existing arguments for and against gathering ethno-racial statistics for biomedical research and present a new one that refers to the assumption that prediction is epistemically superior to accommodation. We also underline the importance of closer interaction between research ethics and the methodology of biomedicine in the case of population stratification for medical research, which requires weighing non-epistemic values against methodological constraints.
A non-ideal approach to slurs
Philosophers of language are increasingly engaging with derogatory terms or slurs. Quite a few theorists, however, take such language merely as a starting point for addressing puzzles in philosophy of language that have little connection to our real-world problems. This paper aims to show that the political nature of derogatory language use calls for non-ideal theorising as we find it in the work of feminist and critical race scholars. Most contemporary theories of slurs, so I argue, fall short on some desiderata associated with a non-ideal approach: they neglect crucial linguistic or political aspects of morally and politically significant meaning. I argue that a two-stage project is necessary to understand the perniciousness of slurs: accounting for the derogatory content of derogatory terms in general and, additionally, explaining the communicative function of slurs more specifically. I end by showing how inferentialism is well suited to account for the content of derogatory terms whilst allowing for further explanations of the communicative functions of slurs.
Logicality and the picture theory of language
I argue that there is a tension in Wittgenstein's position on trivialities (i.e. tautologies and contradictions) in the Tractatus, as it contains the following claims: (A) sentences are pictures; (B) trivialities are not pictures; (C) trivialities are sentences. A and B follow from the "picture theory" of language which Wittgenstein proposes, while C contradicts it. I discuss a way to resolve this tension in light of Logicality, a hypothesis recently developed in linguistic research. Logicality states that trivialities are excluded by the grammar, and that under the right analysis sentences which look trivial are in fact contingent. The tools necessary to support Logicality, I submit, were not available to Wittgenstein at the time, which explains his commitment to C. I end the paper by commenting on some points of contact between analytic philosophy and the generative enterprise in linguistics which are brought into relief by the discussion.
"One more time": time loops as a tool to investigate folk conceptions of moral responsibility and human agency
In the past 20 years, experimental philosophers have investigated folk intuitions about free will and moral responsibility, and their compatibility with determinism. To determine whether laypeople are "natural compatibilists" or "natural incompatibilists", they have used vignettes describing agents living in deterministic universes. However, later research has suggested that participants' answers in these studies are plagued with comprehension errors: either people fail to really accept that these universes are deterministic, or they confuse determinism with something else. This has led certain experimenters to conclude that folk intuitions about the compatibility of free will with determinism perhaps cannot be empirically investigated. Here, we propose that we should refrain from embracing this pessimistic conclusion, as scenarios involving time loops might allow experiments to bypass most of these methodological issues. Indeed, scenarios involving time loops belong both to the philosophical literature on free will and to popular culture. As such, they might constitute a bridge between the two worlds. We present the results of five studies using time loops to investigate people's intuitions about determinism, free will and moral responsibility. The results of these studies allow us to reach two conclusions. The first is that, when people are introduced to determinism through time loops, they do seem to understand what determinism entails. The second is that, at least in the context of time loops, people do not seem to consider determinism to be incompatible with free will and moral responsibility.
How to lose your memory without losing your money: shifty epistemology and Dutch strategies
An objection to shifty epistemologies such as subject-sensitive invariantism is that they predict that agents are susceptible to guaranteed losses. Bob Beddor (Analysis 81: 193-198, 2021) argues that these guaranteed losses are not a symptom of irrationality, on the grounds that forgetful agents are susceptible to guaranteed losses without being irrational. I agree that forgetful agents are susceptible to guaranteed losses without being irrational, but when we investigate why, the analogy with shifty epistemology breaks down. I argue that agents with shifty epistemologies are susceptible to guaranteed losses in a way which is a symptom of irrationality. Along the way I make a suggestion about what it takes for an agent to be coherent over time. I close by offering a taxonomy of shifty epistemologies.
Laws beyond spacetime
Quantum gravity's suggestion that spacetime may be emergent and so only exist contingently would force a radical reconception of extant analyses of laws of nature. Humeanism presupposes a spatiotemporal mosaic of particular matters of fact on which laws supervene; primitivism and dispositionalism conceive of the action of primitive laws or of dispositions as a process of 'nomic production' unfolding over time. We show how the Humean supervenience basis of non-modal facts and primitivist or dispositionalist accounts of nomic production can be reconceived, avoiding a reliance on fundamental spacetime. However, it is unclear that naturalistic forms of Humeanism can maintain their commitment to there being no necessary connections among distinct entities. Furthermore, non-temporal conceptions of production render this central concept more elusive than before. In fact, the challenges run so deep that the survival of the investigated analyses into the era of quantum gravity is questionable.
Bad social norms rather than bad believers: examining the role of social norms in bad beliefs
People with bad beliefs (roughly, beliefs that conflict with those of the relevant experts and are maintained regardless of counter-evidence) are often cast as bad believers. Such beliefs are seen to be the result of, e.g., motivated or biased cognition, and believers are judged to be epistemically irrational and blameworthy in holding them. Here I develop a novel framework to explain why people form bad beliefs. People with bad beliefs follow the social epistemic norms guiding how agents are supposed to form and share beliefs within their respective communities. Beliefs go bad because these norms aren't reliably knowledge-conducive. In other words, bad beliefs aren't due to bad believers but to bad social epistemic norms. The framework unifies different explanations of bad beliefs, is testable, and provides distinct interventions to combat such beliefs. It also helps to capture the complex and often contextual normative landscape surrounding bad beliefs more adequately. On this picture, it's primarily groups that are to blame for bad beliefs. I also suggest that some individuals will be blameless for forming their beliefs in line with their group's norms, whereas others won't be. And I draw attention to the factors that influence blameworthiness judgements in these contexts.
Back by popular demand, ontology: Productive tensions between anthropological and philosophical approaches to ontology
In this paper we analyze relations between ontology in anthropology and ontology in philosophy that go beyond simple homonymy or synonymy, and show how this diagnosis allows for new interdisciplinary links and insights while minimizing the risk of cross-disciplinary equivocation. We introduce the ontological turn in anthropology as an intellectual project rooted in the critique of the dualism of culture and nature and propose a classification of the literature we reviewed into first-order claims about the world and second-order claims about ontological frameworks. Next, rather than provide a strict definition of ontology in the anthropological literature, we argue that the term is used as a heuristic addressing a web of sub-concepts relating to interpretation, knowledge, and self-determination, which correspond to methodological, epistemic, and political considerations central to the development of the ontological turn. We present a case study of rivers as persons to demonstrate what the ontological paradigm in anthropology amounts to in practice. Finally, in an analysis facilitated by a parallel between the first- and second-order claims in anthropology and ontology and meta-ontology in philosophy (respectively), we showcase the potential for ontological anthropology to contribute to contemporary philosophical debates, such as those over ontological gerrymandering, relativism, and social construction, and vice versa.
Concept-formation and deep disagreements in theoretical and practical reasoning
This paper explores the idea that deep disagreements essentially involve disputes about what counts as good reasoning, whether theoretical or practical. My central claim is that deep disagreements involve radically different paradigms of some principle or notion that is constitutively basic to reasoning; I refer to these as "basic concepts". To defend this claim, I show how we can understand deep disagreements by accepting the indeterminacy of concept-formation: concepts are not set in stone but are responsive to human needs, and differences in individuating and ordering concepts lead to clashes in paradigms of reasoning. These clashes can be difficult to resolve because linguistic concepts, especially basic concepts, impose a normative structure onto thought to make reasoning possible at all. This, I also argue, is an authentically Wittgensteinian account of the nature of reasoning. While deep disagreements involving theoretical and practical reasoning both stem from the same root problem of clashing paradigms of basic concepts, I also draw attention to the particularly radical indeterminacy of moral concept-formation, which makes moral deep disagreements more difficult to resolve. Over the course of the paper, I discuss two examples of deep disagreements to illustrate and defend my central claim: deep disagreements over vaccines and the concept of "evidence" (theoretical reasoning) and deep disagreements over affirmative action and the concept of "fairness" (practical reasoning). I conclude by suggesting how my account of reasoning does not lead to moral relativism.