In the United States, children are more likely than adults to condone discrimination
Discriminatory acts (i.e., harmful acts motivated by the victim's group membership) have outsized consequences for the victim and for society relative to similar harms committed for other reasons. Here, we investigated the development of children's evaluations of discrimination. Specifically, we asked whether children in the U.S., like adults, perceive discriminatory acts as distinctly harmful, that is, more harmful than identical acts that are not motivated by the victim's membership in a particular group. Across 4 studies, we examined children's (N = 588; ages 4-9 years) and adults' (N = 623) perceptions of discriminatory acts versus identical acts motivated by other, personal reasons (Studies 1 and 2). In contrast to adults, children, particularly younger ones, rated the discriminatory acts as less harmful. In addition, whereas adults rated discrimination motivated by the victim's membership in an unfamiliar social category (similar to gender or race) as more harmful than discrimination motivated by membership in an unfamiliar task-based group (a sports team), children did not (Study 3). Finally, both adults and older (but not younger) children rated discrimination against a member of a lower-status (vs. equal-status) group as more harmful (Study 4). These findings advance theory on the development of sociomoral cognition and provide new insight into how children perceive instances of discrimination and bias in their everyday lives.
Exploring the bounded rationality in human decision anomalies through an assemblable computational framework
Some seemingly irrational decision behaviors (anomalies), once seen as flaws in human cognition, have recently received explanations from a rational perspective. The basic idea is that the brain has limited cognitive resources to process the quantities (e.g., value, probability, time, etc.) required for decision making, with specific biases arising as byproducts of a resource allocation that is optimized for the environment. While appealing for providing normative accounts, the existing resource-rational models have limitations such as inconsistent assumptions across models, a focus on optimization for one specific aspect of the environment, and limited coverage of decision anomalies. One challenging anomaly is the peanuts effect, a pervasive phenomenon in decision making under risk that implies an interdependence between the processing of value and probability. To extend the resource-rationality approach to explain the peanuts effect, here we develop a computational framework, the Assemblable Resource-Rational Modules (ARRM), that integrates ideas from different lines of boundedly rational decision models as freely assembled modules. The framework can accommodate the joint functioning of multiple environmental factors and allows new models to be built and tested alongside existing ones, potentially opening a wider range of decision phenomena to bounded-rationality modeling. For one new and three published datasets that cover two different task paradigms and both the gain and loss domains, our boundedly rational models reproduce two characteristic features of the peanuts effect and outperform previous models in fitting human decision behaviors.
Updating of information in working memory: Time course and consequences
Working memory updating is the process that replaces outdated content in working memory with new content. This requires removing outdated information and encoding new information. It is still unclear whether removal and encoding run sequentially or simultaneously. We explored this question in two experiments investigating the time course of removal and encoding and their consequences for the functioning of working memory. The updating task we used involved three phases: initial encoding, processing, and retrieval. Across four conditions, we manipulated whether the processing phase involved encoding, removal, neither, or both (i.e., updating). In Experiment 1, processing time was self-paced, and we measured processing times in each condition. In Experiment 2, we measured accuracy as a function of available processing time. After the processing phase, participants were asked to recall the final item for each position in the retrieval phase. In combination, the results of the two experiments show that the time required for updating was shorter than the sum of encoding and removal times. Moreover, it was nearly the same as the time taken for either the encoding or the removal process, indicating that encoding and removal are concurrent processes during updating. Additionally, we analyzed the proportion of correct responses and of different error types with a memory measurement model to investigate the effects of encoding and removal on information held in working memory. The analysis revealed that removal involves unbinding the outdated information from its context. However, despite the weakened bindings of information to its initial context, the outdated information still remains activated in working memory. Other information held in working memory benefited little from removal of outdated information.
A position coding model that accounts for the effects of event boundaries on temporal order memory
Episodic memories, particularly memories for temporal order, are influenced by event boundaries. Although numerous theoretical and computational models have been developed to explain this phenomenon, creating a model that can explain a wide range of behavioral data and is supported by neural evidence remains a significant challenge. This study presents a new model, grounded in ample evidence of position coding, to account for the impact of event boundaries on temporal order memory. The proposed model successfully simulated various behavioral effects in previous experiments measuring temporal order memory. Our model outperformed the context-resetting model in fitting all the data and capturing the full set of effects in previous and newly conducted experiments, including the boundary effect, the distance effect, the local primacy effect, and the absence of a boundary-number effect. These findings underscore a novel mechanism by which event boundaries affect temporal order memory: resetting the local position coding of events.
Building compressed causal models of the world
A given causal system can be represented in a variety of ways. How do agents determine which variables to include in their causal representations, and at what level of granularity? Using techniques from Bayesian networks, information theory, and decision theory, we develop a formal theory according to which causal representations reflect a trade-off between compression and informativeness, where the optimal trade-off depends on the decision-theoretic value of information for a given agent in a given context. This theory predicts that, all else being equal, agents prefer causal models that are as compressed as possible. When compression is associated with information loss, however, all else is not equal, and our theory predicts that agents will favor compressed models only when the information they sacrifice is not informative with respect to the agent's anticipated decisions. We then show, across six studies reported here (N=2,364) and one study reported in the supplemental materials (N=182), that participants' preferences over causal models are in keeping with the predictions of our theory. Our theory offers a unification of different dimensions of causal evaluation identified within the philosophy of science (proportionality and stability), and contributes to a more general picture of human cognition according to which the capacity to create compressed (causal) representations plays a central role.
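The trade-off the theory formalizes can be sketched in a few lines (a toy example of our own, not the authors' materials): a coarser causal variable costs an agent nothing whenever the distinctions it discards carry no information about the decision-relevant outcome, which can be checked with mutual information.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) between two discrete variables,
    estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical data: a fine-grained cause (4 levels) and a binary outcome.
fine = [(0, 0), (0, 0), (1, 0), (1, 0), (2, 1), (2, 1), (3, 1), (3, 1)]
# Compress the cause to 2 levels: {0, 1} -> "low", {2, 3} -> "high".
coarse = [("low" if x < 2 else "high", y) for x, y in fine]

# The coarser model keeps all outcome-relevant information here,
# so our theory-sketch would predict agents prefer it.
print(mutual_information(fine), mutual_information(coarse))  # → 1.0 1.0
```

When the coarsening instead lumps together levels that predict different outcomes, the second value drops below the first, quantifying the informativeness an agent would sacrifice for compression.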
Free time, sharper mind: A computational dive into working memory improvement
Extra free time improves working memory (WM) performance. This free-time benefit becomes larger across successive serial positions, a phenomenon recently labeled the "fanning-out effect". Different mechanisms can account for this phenomenon. In this study, we implemented these mechanisms computationally and tested them experimentally. We ran three experiments that varied the time people were allowed to encode items, as well as the order in which they recalled them. Experiment 1 manipulated the free-time benefit in a paradigm in which people recalled items either in forward or backward order. Experiment 2 used the same forward-backward recall paradigm coupled with a distractor task at the end of encoding. Experiment 3 used a cued recall paradigm in which items were tested in random order. In all three experiments, the best-fitting model of the free-time benefit included (1) a consolidation mechanism whereby a just-encoded item continues to be re-encoded as a function of the total free time available and (2) a stabilization mechanism whereby items become more resistant to output interference with extra free time. Mechanisms such as decay and refreshing, as well as models based on the replenishment of encoding resources, were not supported by our data.
Perceptual inference corrects function word errors in reading: Errors that are not noticed do not disrupt eye movements
Both everyday experience and laboratory research demonstrate that readers often fail to notice errors such as an omitted or repeated function word. This phenomenon challenges central tenets of reading and sentence processing models, according to which each word is lexically processed and incrementally integrated into a syntactic representation. One solution would propose that apparent failure to notice such errors reflects post-perceptual inference; the reader does initially perceive the error, but then unconsciously 'corrects' the perceived string. Such a post-perceptual account predicts that when readers fail to explicitly notice an error, the error will nevertheless disrupt reading, at least fleetingly. We present a large-scale eyetracking experiment investigating whether disruption is detectable in the eye movement record when readers fail to notice an omitted or repeated two-letter function word in naturalistic sentences. Readers failed to notice both omission and repetition errors over 36% of the time. In an analysis that included all trials, both omission and repetition resulted in pronounced eye movement disruption, compared to reading of grammatical control sentences. But in an analysis including only trials on which readers failed to notice the errors, neither type of error disrupted eye movements on any measure. Indeed, there was evidence in some measures that reading was relatively fast on the trials on which errors were missed. It does not appear that when an error is not consciously noticed, it is initially perceived, and then later corrected; rather, linguistic knowledge influences what the reader perceives.
Doing things efficiently: Testing an account of why simple explanations are satisfying
People often find simple explanations more satisfying than complex ones. Across seven preregistered experiments, we provide evidence that this simplicity preference is not specific to explanations and may instead arise from a broader tendency to prefer completing goals in efficient ways. In each experiment, participants (total N=2820) learned of simple and complex methods for producing an outcome, and judged which was more appealing, either as an explanation of why the outcome happened or as a process for producing it. Participants showed similar preferences across judgments. They preferred simple methods as explanations and processes in tasks with no statistical information about the reliability or pervasiveness of causal elements. But when this statistical information was provided, preferences for simple causes often diminished and reversed in both kinds of judgments. Together, these findings suggest that people may assess explanations in much the same way they assess methods for completing goals, and that both kinds of judgments depend on the same cognitive mechanisms.
Ethical choice reversals
Understanding the systematic ways that human decision making departs from normative principles has been important in the development of cognitive theory across multiple decision domains. We focus here on whether such seemingly "irrational" decisions occur in ethical decisions that impose difficult tradeoffs between the welfare and interests of different individuals or groups. Across three sets of experiments and in multiple decision scenarios, we provide clear evidence that contextual choice reversals arise in multiple types of ethical choice settings, in just the way that they do in other domains ranging from economic gambles to perceptual judgments (Trueblood et al., 2013; Wedell, 1991). Specifically, we find within-participant evidence for attraction effects in which choices between two options systematically vary as a function of features of a third dominated and unchosen option, a prima facie violation of rational choice axioms that demand consistency. Unlike economic gambles and most domains in which such effects have been studied, many of our ethical scenarios involve features that are not presented numerically, and features for which there is no clear majority-endorsed ranking. We provide empirical evidence and a novel modeling analysis based on individual differences in feature rankings within attributes to show that such individual variation partly explains the observed variation in the attraction effects. We conclude by discussing how recent computational analyses of attraction effects may provide a basis for understanding how the observed patterns of choices reflect boundedly rational decision processes.
Direct lexical control of eye movements in Chinese reading: Evidence from the co-registration of EEG and eye tracking
The direct-lexical-control hypothesis stipulates that some aspect of a word's processing determines the duration of the fixation on that word and/or the next. Although direct lexical control is incorporated into most current models of eye-movement control in reading, the precise implementation varies, and the hypothesis's assumptions may not be feasible, given that lexical processing would have to occur rapidly enough to influence fixation durations. Conclusive empirical evidence supporting this hypothesis is therefore lacking. In this article, we report the results of an eye-tracking experiment using the boundary paradigm, in which native speakers of Chinese read sentences in which target words were either high- or low-frequency and preceded by a valid or invalid preview. Eye movements were co-registered with electroencephalography, allowing standard analyses of eye-movement measures, divergence-point analyses of fixation-duration distributions, and fixation-related potentials on the target words. These analyses collectively provide strong behavioral and neural evidence of early lexical processing and thus strong support for the direct-lexical-control hypothesis. We discuss the implications of the findings for our understanding of how the hypothesis might be implemented, the neural systems that support skilled reading, and the nature of eye-movement control in the reading of Chinese versus alphabetic scripts.
Recruitment of magnitude representations to understand graded words
Language understanding and mathematics understanding are two fundamental forms of human thinking. Prior research has largely focused on the question of how language shapes mathematical thinking. The current study considers the converse question. Specifically, it investigates whether the magnitude representations that are thought to anchor understanding of number are also recruited to understand the meanings of graded words. These are words that come in scales (e.g., Anger) whose members can be ordered by the degree to which they possess the defining property (e.g., calm, annoyed, angry, furious). Experiment 1 uses the comparison paradigm to find evidence that the distance, ratio, and boundary effects that are taken as evidence of the recruitment of magnitude representations extend from numbers to words. Experiment 2 uses a similarity rating paradigm and multi-dimensional scaling to find converging evidence for these effects in graded word understanding. Experiment 3 evaluates an alternative hypothesis - that these effects for graded words simply reflect the statistical structure of the linguistic environment - by using machine learning models of distributional word semantics: LSA, word2vec, GloVe, counterfitted word vectors, BERT, RoBERTa, and GPT-2. These models fail to show the full pattern of effects observed in humans in Experiment 2, suggesting that more is needed than mere statistics. This research paves the way for further investigations of the role of magnitude representations in sentence and text comprehension, and of the question of whether language understanding and number understanding draw on shared or independent magnitude representations. It also informs the use of machine learning models in cognitive psychology research.
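The distance effect tested in Experiment 1 can be sketched in a few lines (a toy illustration under our own assumptions; the scale positions below are hypothetical, not the study's stimuli): if graded words are mapped onto magnitudes, comparisons between nearby items on the scale should be harder than comparisons between distant ones.

```python
# Hypothetical 1-D magnitude positions for one graded scale
# (illustrative values; not taken from the study's materials).
scale = {"calm": 1.0, "annoyed": 2.0, "angry": 3.0, "furious": 4.0}

def comparison_difficulty(w1, w2):
    """Symbolic-distance prediction: the closer two items sit on the
    underlying magnitude scale, the harder (slower/less accurate) the
    comparison. Larger return value = harder comparison."""
    return 1.0 / abs(scale[w1] - scale[w2])

# Distance effect: adjacent pairs are predicted to be harder than
# distant pairs.
print(comparison_difficulty("calm", "annoyed")
      > comparison_difficulty("calm", "furious"))  # → True
```

Ratio and boundary effects would require additional assumptions (e.g., Weber-scaled noise on the magnitudes), but the same mapping of words to magnitudes is the common ingredient.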
Disentangling the roles of age and knowledge in early language acquisition: A fine-grained analysis of the vocabularies of infant and child language learners
The words that children learn change over time in predictable ways. The first words that infants acquire are generally ones that are both frequent and highly imageable. Older infants also learn words that are more abstract and some that are less common. It is unclear whether this pattern is attributable to maturational factors (i.e., younger children lack the sufficiently developed cognitive faculties needed to learn abstract words) or linguistic factors (i.e., younger children lack sufficient knowledge of their language to use the grammatical or contextual cues needed to figure out the meaning of more abstract words). The present study explores this question by comparing vocabulary acquisition in 53 preschool-aged children (M = 51 months, range = 30-76 months) who were adopted from China and Eastern Europe after two and a half years of age and 53 vocabulary-matched infant controls born and raised in English-speaking families in North America (M = 24 months, range = 16-33 months). Vocabulary was assessed using the MB-CDI Words and Sentences form, word frequency was estimated from the CHILDES database, and imageability was measured using adult ratings of how easily words could be pictured mentally. Both groups were more likely to know words that were both highly frequent and imageable (resulting in an over-additive interaction). Knowledge of a word was also independently affected by the syntactic category to which it belongs. Adopted preschoolers' vocabulary was slightly less affected by imageability. These findings were replicated in a comparison with a larger sample of vocabulary-matched controls drawn from the MB-CDI norming study (M = 22 months, range = 16-30 months; 33 girls). These results suggest that the patterns of acquisition in children's early vocabulary are primarily driven by the accrual of linguistic knowledge, but that vocabulary may also be affected by differences in early life experiences or conceptual knowledge.
Task imprinting: Another mechanism of representational change?
Research from several areas suggests that mental representations adapt to the specific tasks we carry out in our environment. In this study, we propose a mechanism of adaptive representational change: task imprinting. To this end, we introduce a computational model that portrays task imprinting as an adaptation to specific task goals via selective storage of helpful representations in long-term memory. We test the main qualitative prediction of the model in four behavioral experiments with healthy young adults as participants. In each experiment, we assess participants' baseline representations at the beginning of the experiment, then expose participants to one of two tasks intended to shape representations differently according to our model, and finally assess any potential change in representations. Crucially, the tasks used to measure representations differ in the extent to which strategic, judgmental processes play a role. The results of Experiments 1 and 2 allow us to rule out the possibility that representations used in more perceptual tasks become categorically biased. The results of Experiment 4 suggest that people strategically decide, given the specific task context, whether to use categorical information. However, one signature of representational change was observed: category-learning practice increased perceptual sensitivity over and above mere exposure to the same stimuli.
How infants predict respect-based power
Research has shown that infants represent legitimate leadership and predict continued obedience to authority, but which cues they use to do so remains unknown. Across eight pre-registered experiments varying the cue provided, we tested whether Norwegian 21-month-olds (N=128) expected three protagonists to obey a character even in her absence. We assessed whether bowing to the character, receiving a tribute from or conferring a benefit on the protagonists, imposing a cost on them (forcefully taking a resource or hitting them), or relative physical size were used as cues to generate the expectation of continued obedience that marks legitimate leadership. Whereas bowing sufficed to generate such an expectation, we found positive Bayesian evidence that none of the other cues did. Norwegian infants are unlikely to have witnessed bowing in their everyday lives. Hence, bowing/prostration as a cue for continued obedience may form part of an early-developing capacity, built by evolution, to represent leadership.
Repeated rock, paper, scissors play reveals limits in adaptive sequential behavior
How do people adapt to others in adversarial settings? Prior work has shown that people often violate rational models of adversarial decision-making in repeated interactions. In particular, in mixed strategy equilibrium (MSE) games, where optimal action selection entails choosing moves randomly, people often do not play randomly, but instead try to outwit their opponents. However, little is known about the adaptive reasoning that underlies these deviations from random behavior. Here, we examine strategic decision-making across repeated rounds of rock, paper, scissors, a well-known MSE game. In experiment 1, participants were paired with bot opponents that exhibited distinct stable move patterns, allowing us to identify the bounds of the complexity of opponent behavior that people can detect and adapt to. In experiment 2, bot opponents instead exploited stable patterns in the human participants' moves, providing a symmetrical bound on the complexity of patterns people can revise in their own behavior. Across both experiments, people exhibited a robust and flexible attention to transition patterns from one move to the next, exploiting these patterns in opponents and modifying them strategically in their own moves. However, their adaptive reasoning showed strong limitations with respect to more sophisticated patterns. Together, results provide a precise and consistent account of the surprisingly limited scope of people's adaptive decision-making in this setting.
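The transition-pattern exploitation described above can be sketched computationally (a minimal illustration of our own, not the study's actual bots): count an opponent's move-to-move transitions and play the counter to their most likely next move.

```python
import random
from collections import defaultdict

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # key is beaten by value

class TransitionExploiter:
    """Toy opponent model: tally the opponent's move-to-move transitions
    and play the move that beats their most likely next move."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def choose(self):
        if self.prev is None or not self.counts[self.prev]:
            return random.choice(MOVES)  # no data yet: play randomly
        predicted = max(self.counts[self.prev], key=self.counts[self.prev].get)
        return BEATS[predicted]

    def observe(self, opponent_move):
        if self.prev is not None:
            self.counts[self.prev][opponent_move] += 1
        self.prev = opponent_move

# Against an opponent with a stable transition pattern (here a strict
# rock -> paper -> scissors cycle), the model quickly wins every round.
bot = TransitionExploiter()
wins = 0
cycle = ["rock", "paper", "scissors"] * 10
for i, move in enumerate(cycle):
    if i > 3 and bot.choose() == BEATS[move]:
        wins += 1
    bot.observe(move)
print(wins)  # → 26 (all rounds after the pattern is learned)
```

Patterns more sophisticated than first-order transitions (e.g., conditioning on the last two moves, or on game outcomes) would require a richer state than `self.prev`, which is exactly the kind of complexity the experiments probe.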
Cognitive complexity explains processing asymmetry in judgments of similarity versus difference
Human judgments of similarity and difference are sometimes asymmetrical, with the former being more sensitive than the latter to relational overlap, but the theoretical basis for this asymmetry remains unclear. We test an explanation based on the type of information used to make these judgments (relations versus features) and the comparison process itself (similarity versus difference). We propose that asymmetries arise from two aspects of cognitive complexity that impact judgments of similarity and difference: processing relations between entities is more cognitively demanding than processing features of individual entities, and comparisons assessing difference are more cognitively complex than those assessing similarity. In Experiment 1 we tested this hypothesis for both verbal comparisons between word pairs, and visual comparisons between sets of geometric shapes. Participants were asked to select one of two options that was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard; on ambiguous trials, one option was more featurally similar to the standard, whereas the other was more relationally similar. Given the higher cognitive complexity of processing relations and of assessing difference, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was replicated using more complex story stimuli (Experiment 2). We showed that this pattern can be captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments.
The fusion point of temporal binding: Promises and perils of multisensory accounts
Performing an action to initiate a consequence in the environment triggers the perceptual illusion of temporal binding. This phenomenon entails that actions and their ensuing effects are perceived to occur closer in time than they do outside the action-effect relationship. Here we ask whether temporal binding can be explained in terms of multisensory integration, by assuming either multisensory fusion or partial integration of the two events. We gathered two datasets featuring a wide range of action-effect delays as a key factor influencing integration. We then tested the fit of a computational model of multisensory integration, the statistically optimal cue integration (SOCI) model. Qualitative aspects of the data at the group level indeed followed the principles of a multisensory account. By contrast, quantitative evidence from a comprehensive model evaluation indicated that temporal binding cannot be reduced to multisensory integration. Rather, multisensory integration should be seen as one of several component processes underlying temporal binding at the individual level.
The structure and development of explore-exploit decision making
A critical component of human learning reflects the balance people must achieve between focusing on the utility of what they know versus openness to what they have yet to experience. How individuals decide whether to explore new options versus exploit known options has garnered growing interest in recent years. Yet, the component processes underlying decisions to explore, and whether these processes change across development, remain poorly understood. By contrasting a variety of tasks that measure exploration in slightly different ways, we found that decisions about whether to explore reflect (a) random exploration that is not explicitly goal-directed and (b) directed exploration to purposefully reduce uncertainty. While these components similarly characterized the decision-making of both youth and adults, younger participants made decisions that were less strategic, but more exploratory and flexible, than those of adults. These findings are discussed in terms of how people adapt to and learn from changing environments over time. Data have been made available on the Open Science Framework platform (osf.io).
Optimizing competence in the service of collaboration
In order to efficiently divide labor with others, it is important to understand what our collaborators can do (i.e., their competence). However, competence is not static-people get better at particular jobs the more often they perform them. This plasticity of competence creates a challenge for collaboration: For example, is it better to assign tasks to whoever is most competent now, or to the person who can be trained most efficiently "on-the-job"? We conducted four experiments (N=396) that examine how people make decisions about whom to train (Experiments 1 and 3) and whom to recruit (Experiments 2 and 4) to a collaborative task, based on the simulated collaborators' starting expertise, the training opportunities available, and the goal of the task. We found that participants' decisions were best captured by a planning model that attempts to maximize the returns from collaboration while minimizing the costs of hiring and training individual collaborators. This planning model outperformed alternative models that based these decisions on the agents' current competence, or on how much agents stood to improve in a single training step, without considering whether this training would enable agents to succeed at the task in the long run. Our findings suggest that people do not recruit and train collaborators based solely on their current competence, nor solely on the opportunities for their collaborators to improve. Instead, people use an intuitive theory of competence to balance the costs of hiring and training others against the benefits to the collaboration.
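The planning account can be sketched as a small search over whom to recruit and how much to train them (an illustrative toy model with hypothetical parameters of our own choosing, not the authors' implementation): the planner weighs the expected returns from the collaboration against hiring and training costs.

```python
from itertools import product

def plan_value(start_competence, n_train, task_reward,
               hire_cost=1.0, train_cost=0.5, gain_per_session=0.2):
    """Expected value of hiring one agent and training them n_train times.
    Training linearly raises competence (capped at 1.0); competence scales
    the expected task reward. All parameters are hypothetical."""
    competence = min(1.0, start_competence + gain_per_session * n_train)
    expected_return = competence * task_reward
    return expected_return - hire_cost - train_cost * n_train

def best_plan(agents, task_reward, max_train=3):
    """Exhaustive search over (agent, number of training sessions) pairs."""
    options = product(agents.items(), range(max_train + 1))
    return max(options, key=lambda o: plan_value(o[0][1], o[1], task_reward))

# Hypothetical starting competences for two candidate collaborators.
agents = {"novice": 0.2, "expert": 0.9}
(name, _), sessions = best_plan(agents, task_reward=10.0)
print(name, sessions)  # → expert 1
```

Here one training session pushes the expert to ceiling competence, so the plan trains the expert once rather than investing many costly sessions in the novice; with a higher task reward or cheaper training, the same search can flip toward training the novice, which is the kind of context sensitivity the experiments test.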
Infants can use temporary or scant categorical information to individuate objects
In a standard individuation task, infants see two different objects emerge in alternation from behind a screen. If they can assign distinct categorical descriptors to the two objects, they expect to see both objects when the screen is lowered; if not, they have no expectation at all about what they will see (i.e., two objects, one object, or no object). Why is contrastive categorical information critical for success at this task? According to the kind account, infants must decide whether they are facing a single object with changing properties or two different objects with stable properties, and access to permanent, intrinsic, kind information for each object resolves this difficulty. According to the two-system account, however, contrastive categorical descriptors simply provide the object-file system with unique tags for individuating the two objects and for communicating about them with the physical-reasoning system. The two-system account thus predicts that any type of contrastive categorical information, however temporary or scant it may be, should induce success at the task. Two experiments examined this prediction. Experiment 1 tested 14-month-olds (N = 96) in a standard task using two objects that differed only in their featural properties. Infants succeeded at the task when the object-file system had access to contrastive temporary categorical descriptors derived from the objects' distinct causal roles in preceding support events (e.g., formerly a support, formerly a supportee). Experiment 2 tested 9-month-olds (N = 96) in a standard task using two objects infants this age typically encode as merely featurally distinct. Infants succeeded when the object-file system had access to scant categorical descriptors derived from the objects' prior inclusion in static arrays of similarly shaped objects (e.g., block-shaped objects, cylinder-shaped objects). 
These and control results support the two-system account's claim that in a standard task, contrastive categorical descriptors serve to provide the object-file system with unique tags for the two objects.
The perceptual timescape: Perceptual history on the sub-second scale
There is a high-capacity store with a brief time span (∼1000 ms), often called iconic memory or sensory memory, into which information enters from perceptual processing. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time-marking information: time distance information, which marks all items of information in the perceptual timescape according to how far in the past they occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.