Resource-rational contractualism: A triple theory of moral cognition
It is widely agreed that morality guides people with conflicting interests toward agreements of mutual benefit. We might therefore expect numerous proposals for organizing human moral cognition around the logic of bargaining, negotiation, and agreement. Yet, while "contractualist" ideas play an important role in moral philosophy, they are starkly underrepresented in the field of moral psychology. From a contractualist perspective, ideal moral judgments are those that would be agreed to by rational bargaining agents, an idea with widespread support in philosophy, psychology, economics, biology, and cultural evolution. As a practical matter, however, investing time and effort in negotiating every interpersonal interaction is unfeasible. Instead, we propose, people use abstractions and heuristics to efficiently identify mutually beneficial arrangements. We argue that many well-studied elements of our moral minds, such as reasoning about others' utilities ("consequentialist" reasoning) or evaluating intrinsic ethical properties of certain actions ("deontological" reasoning), can be naturally understood as resource-rational approximations of a contractualist ideal. Moreover, this view explains the flexibility of our moral minds: how our moral rules and standards are created, updated, and overridden, and how we deal with novel cases we have never seen before. Thus, the apparently fragmentary nature of our moral psychology, commonly described in terms of systems in conflict, can be largely unified around the principle of finding mutually beneficial agreements under resource constraints. Our resulting "triple theory" of moral cognition naturally integrates contractualist, consequentialist, and deontological concerns.
Two tiers, not one: Different sources of extrinsic mortality have opposing effects on life history traits
Guided by concepts from life history (LH) theory, a large human research literature has tested the hypothesis that exposures to extrinsic mortality (EM) promote the development of faster LH strategies (e.g., earlier/faster reproduction, higher offspring number). A competing model proposes that, because EM in the past was intimately linked to energetic constraints, such exposures specifically led to the development of slower LH strategies. We empirically address this debate by examining (1) LH variation among small-scale societies under different environmental conditions; (2) country-, regional- and community-level correlations between ecological conditions, mortality, maturational timing, and fertility; (3) individual-level correlations between this same set of factors; and (4) natural experiments leveraging the impact of externally-caused changes in mortality on LH traits. Partially supporting each model, we found that harsh conditions encompassing energetic stress and ambient cues to EM (external cues received through sensory systems) have countervailing effects on the development of LH strategies, both delaying pubertal maturation and promoting an accelerated pace of reproduction and higher offspring number. We conclude that, although energetics are fundamental to many developmental processes, providing a first tier of environmental influence, this first tier alone cannot explain these countervailing effects. An important second tier of environmental influence is afforded by ambient cues to EM. We advance a 2-tiered model that delineates this second tier and its central role in regulating development of LH strategies. Consideration of the first and second tier together is necessary to account for the observed countervailing shifts toward both slower and faster LH traits.
A multi-trait embodied framework for the evolution of brains and cognition across animal phyla
Among non-human animals, crows, octopuses and honeybees are well-known for their complex brains and cognitive abilities. Widening the lens from the idiosyncratic abilities of exemplars like these to those of animals across the phylogenetic spectrum begins to reveal the ancient evolutionary process by which complex brains and cognition first arose in different lineages. The distribution of 35 phenotypic traits in 17 metazoan lineages reveals that brain and cognitive complexity in only three lineages (vertebrates, cephalopod mollusks, and euarthropods) can be attributed to the pivotal role played by body, sensory, brain and motor traits in active visual sensing and visuomotor skills. Together, these pivotal traits enabled animals to transition from largely reactive to more proactive behaviors, and from slow and two-dimensional motion to more rapid and complex three-dimensional motion. Among pivotal traits, high-resolution eyes and laminated visual regions of the brain stand out because they increased the processing demands on and the computational power of the brain by several orders of magnitude. The independent acquisition of pivotal traits in cognitively complex (CC) lineages can be explained as the completion of several multi-trait transitions over the course of evolutionary history, each resulting in an increasing level of complexity that arises from a distinct combination of traits. Whereas combined pivotal traits represent the highest level of complexity in CC lineages, combined traits at lower levels characterize many non-CC lineages, suggesting that certain body, sensory and brain traits may have been linked (the trait-linkage hypothesis) during the evolution of both CC and non-CC lineages.
Cognitive Representations of Social Relationships and their Developmental Origins
In the human mind, what is a social relationship, and what are the developmental origins of this representation? I consider findings from infant psychology and propose that our representations of social relationships are intuitive theories built on core knowledge. I propose three central components of this intuitive theory. The purpose of the first component is to recognize whether a relationship exists; the purpose of the second is to characterize the relationship by categorizing it into a model and to compute its strength (i.e., intensity, pull, or thickness); and the purpose of the third is to understand how to change relationships through explicit or implicit communication. I propose that infants possess core knowledge on which this intuitive theory is built. This paper focuses on the second component and considers evidence that infants characterize relationships. Following Relational Models Theory (A. P. Fiske, 1992, 2004), I propose that from infancy humans recognize relationships that belong to three models: communal sharing (where people are 'one'), authority ranking (where people are ranked), and equality matching (where people are separate, but evenly balanced). I further propose that humans, and potentially infants, recognize a relationship's strength, which can be thought of as a continuous representation of obligations (the extent to which certain actions are expected) and commitment (the likelihood that people will continue the relationship). These representations, and the assumption that others share them, allow us to form, maintain, and change social relationships throughout our lives by informing how we interpret and evaluate the actions of others and plan our own.
Meta-learning and the evolution of cognition
Meta-learning offers a promising framework to make sense of some parts of decision-making that have eluded satisfactory explanation. Here, we connect this research to work in animal behaviour and cognition in order to shed light on how and whether meta-learning could help us to understand the evolution of cognition.
Meta-learning in active inference
Binz et al. propose meta-learning as a promising avenue for modelling human cognition. They provide an in-depth reflection on the advantages of meta-learning over other computational models of cognition, including a sound discussion on how their proposal can accommodate neuroscientific insights. We argue that active inference presents similar computational advantages while offering greater mechanistic explanatory power and biological plausibility.
Meta-learning modeling and the role of affective-homeostatic states in human cognition
The meta-learning framework proposed by Binz et al. would gain significantly from the inclusion of affective and homeostatic elements, currently neglected in their work. These components are crucial as cognition as we know it is profoundly influenced by affective states, which arise as intricate forms of homeostatic regulation in living bodies.
Meta-learning goes hand-in-hand with metacognition
Binz et al. propose a general framework for meta-learning and contrast it with built-by-hand Bayesian models. We comment on some architectural assumptions of the approach, its relation to the active inference framework, its potential applicability to living systems in general, and the advantages of the latter in addressing the explanation problem.
Challenges of meta-learning and rational analysis in large worlds
We challenge Binz et al.'s claim of meta-learned model superiority over Bayesian inference for large-world problems. By comparing Bayesian priors to model-training decisions, we question whether the claimed features are exclusive to meta-learning. We argue there is no special justification for rational Bayesian solutions to large-world problems, and advocate exploring diverse theoretical frameworks beyond the rational analysis of cognition to advance research.
Bayes beyond the predictive distribution
Binz et al. argue that meta-learned models offer a new paradigm to study human cognition. Meta-learned models are proposed as alternatives to Bayesian models based on their capability to learn identical posterior predictive distributions. In our commentary, we highlight several arguments that reach beyond a predictive distribution-based comparison, offering new perspectives to evaluate the advantages of these modeling paradigms.
Learning and memory are inextricable
The authors' aim is to build "more biologically plausible learning algorithms" that work in naturalistic environments. Given that, first, human learning and memory are inextricable and, second, much human learning is unconscious, can the authors' first research question, how people improve their learning abilities over time, be answered without addressing these two issues? I argue that it cannot.
Quo vadis, planning?
Deep meta-learning is the driving force behind advances in contemporary AI research, and a promising theory of flexible cognition in natural intelligence. We agree with Binz et al. that many supposedly "model-based" behaviours may be better explained by meta-learning than by classical models. We argue that this invites us to revisit our neural theories of problem solving and goal-directed planning.
Quantum Markov blankets for meta-learned classical inferential paradoxes with suboptimal free energy
Quantum active Bayesian inference and quantum Markov blankets enable robust modeling and simulation of difficult-to-render natural agent-based classical inferential paradoxes interfaced with task-specific environments. Within a non-realist cognitive completeness regime, quantum Markov blankets ensure meta-learned irrational decision making is fitted to explainable manifolds at optimal free energy, where acceptable incompatible observations or temporal Bell-inequality violations represent important verifiable real-world outcomes.
Meta-learning: Bayesian or quantum?
Abundant experimental evidence illustrates violations of Bayesian models across various cognitive processes. Quantum cognition capitalizes on the limitations of Bayesian models, providing a compelling alternative. We suggest that a generalized quantum approach in meta-learning is simultaneously more robust and flexible, as it retains all the advantages of the Bayesian framework while avoiding its limitations.
Meta-learned models beyond and beneath the cognitive
I propose that meta-learned models, and in particular the situation-aware deployment of "learning-to-infer" modules, can be advantageously extended to domains commonly thought to lie outside the cognitive, such as motivations and preferences on the one hand, and the effectuation of micro- and coping-type behaviors on the other.
Is human compositionality meta-learned?
Recent studies suggest that meta-learning may provide an original solution to an enduring puzzle about whether neural networks can explain compositionality - in particular, by raising the prospect that compositionality can be understood as an emergent property of an inner-loop learning algorithm. We elaborate on this hypothesis and consider its empirical predictions regarding the neural mechanisms and development of human compositionality.
The added value of affective processes for models of human cognition and learning
Building on the affectivism approach, we expand on Binz et al.'s meta-learning research program by highlighting that emotion and other affective phenomena should be key to the modeling of human learning. We illustrate the added value of affective processes for models of learning across multiple domains with a focus on reinforcement learning, knowledge acquisition, and social learning.
Meta-learned models as tools to test theories of cognitive development
Binz et al. argue that meta-learned models are essential tools for understanding adult cognition. Here, we propose that these models are particularly useful for testing hypotheses about why learning processes change across development. By leveraging their ability to discover optimal algorithms and account for capacity limitations, researchers can use these models to test competing theories of developmental change in learning.
Combining meta-learned models with process models of cognition
Meta-learned models of cognition make optimal predictions for the actual stimuli presented to participants, but investigating judgment biases by constraining neural networks will be unwieldy. We suggest combining them with cognitive process models, which are more intuitive and explain biases. Rational process models, those that can sequentially sample from the posterior distributions produced by meta-learned models, seem a natural fit.
Meta-learning as a bridge between neural networks and symbolic Bayesian models
Meta-learning is even more broadly relevant to the study of inductive biases than Binz et al. suggest: Its implications go beyond the extensions to rational analysis that they discuss. One noteworthy example is that meta-learning can act as a bridge between the vector representations of neural networks and the symbolic hypothesis spaces used in many Bayesian models.
Meta-learning: Data, architecture, and both
We are encouraged by the many positive commentaries on our target article. In this response, we recapitulate some of the points raised and identify synergies between them. We have arranged our response based on the tension between data and architecture that arises in the meta-learning framework. We additionally provide a short discussion that touches upon connections to foundation models.
The hard problem of meta-learning is what-to-learn
Binz et al. highlight the potential of meta-learning to greatly enhance the flexibility of AI algorithms, as well as to approximate human behavior more accurately than traditional learning methods. We wish to emphasize a basic problem that lies underneath these two objectives, and in turn suggest another perspective on the required notion of "meta" in meta-learning: knowing what to learn.
Linking meta-learning to meta-structure
We propose that a principled understanding of meta-learning, as aimed for by the authors, benefits from linking the focus on learning with an equally strong focus on structure, which means to address the question: What are the meta-structures that can guide meta-learning?
The meta-learning toolkit needs stronger constraints
The implementation of meta-learning targeted by Binz et al. inherits benefits and drawbacks from its nature as a connectionist model. Drawing from historical debates around bottom-up and top-down approaches to modeling in cognitive science, we should continue to bridge levels of analysis by constraining meta-learning and meta-learned models with complementary evidence from across the cognitive and computational sciences.
Integrative learning in the lens of meta-learned models of cognition: Impacts on animal and human learning outcomes
This commentary examines the synergy between meta-learned models of cognition and integrative learning in enhancing animal and human learning outcomes. It highlights three integrative learning modes - holistic integration of parts, top-down reasoning, and generalization with in-depth analysis - and their alignment with meta-learned models of cognition. This convergence promises significant advances in educational practices, artificial intelligence, and cognitive neuroscience, offering a novel perspective on learning and cognition.
The reinforcement metalearner as a biologically plausible meta-learning framework
We argue that the type of meta-learning proposed by Binz et al. generates models with low interpretability and falsifiability that have limited usefulness for neuroscience research. An alternative approach to meta-learning based on hyperparameter optimization obviates these concerns and can generate empirically testable hypotheses of biological computations.
Probabilistic programming versus meta-learning as models of cognition
We summarize the recent progress made by probabilistic programming as a unifying formalism for the probabilistic, symbolic, and data-driven aspects of human cognition. We highlight differences with meta-learning in flexibility, statistical assumptions, and inferences about cognition. We suggest that the meta-learning approach could be further strengthened by considering connectionist and Bayesian approaches, rather than exclusively one or the other.
Perceptual (roots of) core knowledge
Some core knowledge may be rooted in, or even constituted by, well-characterized mechanisms of mid-level visual perception and attention. In the decades since it was first proposed, this possibility has inspired (and has been supported by) several discoveries in both infant cognition and adult perception, but it also faces several challenges. To what degree does core knowledge reflect how babies see, rather than how babies think?
Questioning the nature and origins of the "social agent" concept
Spelke posits that the concept of a "social agent," who performs object-directed actions to fulfill social goals, is the first noncore concept that infants acquire as they begin to learn their native language. We question this proposal on both empirical and theoretical grounds, and propose instead that the representation of object-mediated interactions may be supported by a dedicated prelinguistic mechanism.
How do babies come to know what babies know?
Elizabeth Spelke's book is a scholarly presentation of core knowledge theory and a masterful compendium of empirical evidence that supports it. Unfortunately, Spelke's principal theoretical assumption is that core knowledge is simply the innate product of cognitive evolution. As such, her theory fails to explicate the developmental mechanisms underlying the emergence of the cognitive systems on which that knowledge depends.