Perceptual inference corrects function word errors in reading: Errors that are not noticed do not disrupt eye movements
Both everyday experience and laboratory research demonstrate that readers often fail to notice errors such as an omitted or repeated function word. This phenomenon challenges central tenets of reading and sentence processing models, according to which each word is lexically processed and incrementally integrated into a syntactic representation. One solution would propose that apparent failure to notice such errors reflects post-perceptual inference; the reader does initially perceive the error, but then unconsciously 'corrects' the perceived string. Such a post-perceptual account predicts that when readers fail to explicitly notice an error, the error will nevertheless disrupt reading, at least fleetingly. We present a large-scale eyetracking experiment investigating whether disruption is detectable in the eye movement record when readers fail to notice an omitted or repeated two-letter function word in naturalistic sentences. Readers failed to notice both omission and repetition errors over 36% of the time. In an analysis that included all trials, both omission and repetition resulted in pronounced eye movement disruption, compared to reading of grammatical control sentences. But in an analysis including only trials on which readers failed to notice the errors, neither type of error disrupted eye movements on any measure. Indeed, there was evidence in some measures that reading was relatively fast on the trials on which errors were missed. It does not appear that when an error is not consciously noticed, it is initially perceived, and then later corrected; rather, linguistic knowledge influences what the reader perceives.
Doing things efficiently: Testing an account of why simple explanations are satisfying
People often find simple explanations more satisfying than complex ones. Across seven preregistered experiments, we provide evidence that this simplicity preference is not specific to explanations and may instead arise from a broader tendency to prefer completing goals in efficient ways. In each experiment, participants (total N=2820) learned of simple and complex methods for producing an outcome, and judged which was more appealing, either as an explanation of why the outcome happened or as a process for producing it. Participants showed similar preferences across judgments. They preferred simple methods as explanations and processes in tasks with no statistical information about the reliability or pervasiveness of causal elements. But when this statistical information was provided, preferences for simple causes often diminished and even reversed in both kinds of judgments. Together, these findings suggest that people may assess explanations in much the same way as they assess methods for completing goals, and that both kinds of judgments depend on the same cognitive mechanisms.
Free time, sharper mind: A computational dive into working memory improvement
Extra free time improves working memory (WM) performance. This free-time benefit becomes larger across successive serial positions, a phenomenon recently labeled the "fanning-out effect". Different mechanisms can account for this phenomenon. In this study, we implemented these mechanisms computationally and tested them experimentally. We ran three experiments that varied the time people were allowed to encode items, as well as the order in which they recalled them. Experiment 1 manipulated the free-time benefit in a paradigm in which people recalled items either in forward or backward order. Experiment 2 used the same forward-backward recall paradigm coupled with a distractor task at the end of encoding. Experiment 3 used a cued recall paradigm in which items were tested in random order. In all three experiments, the best-fitting model of the free-time benefit included (1) a consolidation mechanism whereby a just-encoded item continues to be re-encoded as a function of the total free time available and (2) a stabilization mechanism whereby items become more resistant to output interference with extra free time. Mechanisms such as decay and refreshing, as well as models based on the replenishment of encoding resources, were not supported by our data.
Building compressed causal models of the world
A given causal system can be represented in a variety of ways. How do agents determine which variables to include in their causal representations, and at what level of granularity? Using techniques from Bayesian networks, information theory, and decision theory, we develop a formal theory according to which causal representations reflect a trade-off between compression and informativeness, where the optimal trade-off depends on the decision-theoretic value of information for a given agent in a given context. This theory predicts that, all else being equal, agents prefer causal models that are as compressed as possible. When compression is associated with information loss, however, all else is not equal, and our theory predicts that agents will favor compressed models only when the information they sacrifice is not informative with respect to the agent's anticipated decisions. We then show, across six studies reported here (N=2,364) and one study reported in the supplemental materials (N=182), that participants' preferences over causal models are in keeping with the predictions of our theory. Our theory offers a unification of different dimensions of causal evaluation identified within the philosophy of science (proportionality and stability), and contributes to a more general picture of human cognition according to which the capacity to create compressed (causal) representations plays a central role.
Recruitment of magnitude representations to understand graded words
Language understanding and mathematics understanding are two fundamental forms of human thinking. Prior research has largely focused on the question of how language shapes mathematical thinking. The current study considers the converse question. Specifically, it investigates whether the magnitude representations that are thought to anchor understanding of number are also recruited to understand the meanings of graded words. These are words that come in scales (e.g., Anger) whose members can be ordered by the degree to which they possess the defining property (e.g., calm, annoyed, angry, furious). Experiment 1 uses the comparison paradigm to find evidence that the distance, ratio, and boundary effects that are taken as evidence of the recruitment of magnitude representations extend from numbers to words. Experiment 2 uses a similarity rating paradigm and multi-dimensional scaling to find converging evidence for these effects in graded word understanding. Experiment 3 evaluates an alternative hypothesis, namely that these effects for graded words simply reflect the statistical structure of the linguistic environment, by using machine learning models of distributional word semantics: LSA, word2vec, GloVe, counterfitted word vectors, BERT, RoBERTa, and GPT-2. These models fail to show the full pattern of effects observed in humans in Experiment 2, suggesting that more is needed than mere statistics. This research paves the way for further investigations of the role of magnitude representations in sentence and text comprehension, and of the question of whether language understanding and number understanding draw on shared or independent magnitude representations. It also informs the use of machine learning models in cognitive psychology research.
Disentangling the roles of age and knowledge in early language acquisition: A fine-grained analysis of the vocabularies of infant and child language learners
The words that children learn change over time in predictable ways. The first words that infants acquire are generally ones that are both frequent and highly imageable. Older infants also learn words that are more abstract and some that are less common. It is unclear whether this pattern is attributable to maturational factors (i.e., younger children lack sufficiently developed cognitive faculties needed to learn abstract words) or linguistic factors (i.e., younger children lack sufficient knowledge of their language to use grammatical or contextual cues needed to figure out the meaning of more abstract words). The present study explores this question by comparing vocabulary acquisition in 53 preschool-aged children (M = 51 months, range = 30-76 months) who were adopted from China and Eastern Europe after two and a half years of age and 53 vocabulary-matched infant controls born and raised in English-speaking families in North America (M = 24 months, range = 16-33 months). Vocabulary was assessed using the MB-CDI Words and Sentences form, word frequency was estimated from the CHILDES database, and imageability was measured using adult ratings of how easily words could be pictured mentally. Both groups were more likely to know words that were both highly frequent and imageable (resulting in an over-additive interaction). Knowledge of a word was also independently affected by the syntactic category to which it belongs. Adopted preschoolers' vocabulary was slightly less affected by imageability. These findings were replicated in a comparison with a larger sample of vocabulary-matched controls drawn from the MB-CDI norming study (M = 22 months, range = 16-30 months; 33 girls). These results suggest that the patterns of acquisition in children's early vocabulary are primarily driven by the accrual of linguistic knowledge, but that vocabulary may also be affected by differences in early life experiences or conceptual knowledge.
Ethical choice reversals
Understanding the systematic ways that human decision making departs from normative principles has been important in the development of cognitive theory across multiple decision domains. We focus here on whether such seemingly "irrational" decisions occur in ethical decisions that impose difficult tradeoffs between the welfare and interests of different individuals or groups. Across three sets of experiments and in multiple decision scenarios, we provide clear evidence that contextual choice reversals arise in multiple types of ethical choice settings, just as they do in other domains ranging from economic gambles to perceptual judgments (Trueblood et al., 2013; Wedell, 1991). Specifically, we find within-participant evidence for attraction effects, in which choices between two options systematically vary as a function of features of a third, dominated and unchosen option, a prima facie violation of rational choice axioms that demand consistency. Unlike economic gambles and most domains in which such effects have been studied, many of our ethical scenarios involve features that are not presented numerically, and features for which there is no clear majority-endorsed ranking. We provide empirical evidence and a novel modeling analysis based on individual differences in feature rankings within attributes to show that such individual variation partly explains the observed variation in attraction effects. We conclude by discussing how recent computational analyses of attraction effects may provide a basis for understanding how the observed patterns of choices reflect boundedly rational decision processes.
Direct lexical control of eye movements in Chinese reading: Evidence from the co-registration of EEG and eye tracking
The direct-lexical-control hypothesis stipulates that some aspect of a word's processing determines the duration of the fixation on that word and/or the next. Although direct lexical control is incorporated into most current models of eye-movement control in reading, the precise implementation varies, and the hypothesis's assumptions may not be feasible, given that lexical processing would have to occur rapidly enough to influence fixation durations. Conclusive empirical evidence supporting this hypothesis is therefore lacking. In this article, we report the results of an eye-tracking experiment using the boundary paradigm in which native speakers of Chinese read sentences in which target words were either high- or low-frequency and were preceded by a valid or invalid preview. Eye movements were co-registered with electroencephalography, allowing standard analyses of eye-movement measures, divergence point analyses of fixation-duration distributions, and fixation-related potentials on the target words. These analyses collectively provide strong behavioral and neural evidence of early lexical processing and thus strong support for the direct-lexical-control hypothesis. We discuss the implications of the findings for our understanding of how the hypothesis might be implemented, the neural systems that support skilled reading, and the nature of eye-movement control in the reading of Chinese versus alphabetic scripts.
Task imprinting: Another mechanism of representational change?
Research from several areas suggests that mental representations adapt to the specific tasks we carry out in our environment. In this study, we propose a mechanism of adaptive representational change: task imprinting. To this end, we introduce a computational model that portrays task imprinting as an adaptation to specific task goals via selective storage of helpful representations in long-term memory. We test the main qualitative prediction of the model in four behavioral experiments with healthy young adults as participants. In each experiment, we assess participants' baseline representations at the beginning of the experiment, then expose participants to one of two tasks intended to shape representations differently according to our model, and finally assess any potential change in representations. Crucially, the tasks used to measure representations differ in the degree to which strategic, judgmental processes play a role. The results of Experiments 1 and 2 allow us to rule out the possibility that representations used in more perceptual tasks become categorically biased. The results of Experiment 4 suggest that people strategically decide, given the specific task context, whether to use categorical information. One signature of representational change was nevertheless observed: category learning practice increased perceptual sensitivity over and above mere exposure to the same stimuli.
How infants predict respect-based power
Research has shown that infants represent legitimate leadership and predict continued obedience to authority, but which cues they use to do so remains unknown. Across eight pre-registered experiments varying the cue provided, we tested whether Norwegian 21-month-olds (N=128) expected three protagonists to obey a character even in her absence. We assessed whether bowing to the character, receiving a tribute from the protagonists or conferring a benefit on them, imposing a cost on them (forcefully taking a resource or hitting them), or relative physical size served as cues to generate the expectation of continued obedience that marks legitimate leadership. Whereas bowing sufficed to generate such an expectation, we found positive Bayesian evidence that none of the other cues did. Norwegian infants are unlikely to have witnessed bowing in their everyday lives. Hence, bowing/prostration as a cue for continued obedience may form part of an early-developing capacity to represent leadership built by evolution.
The fusion point of temporal binding: Promises and perils of multisensory accounts
Performing an action to initiate a consequence in the environment triggers the perceptual illusion of temporal binding: actions and their ensuing effects are perceived to occur closer in time than they do outside the action-effect relationship. Here we ask whether temporal binding can be explained in terms of multisensory integration, by assuming either multisensory fusion or partial integration of the two events. We gathered two datasets featuring a wide range of action-effect delays, a key factor influencing integration. We then tested the fit of a computational model of multisensory integration, the statistically optimal cue integration (SOCI) model. Qualitative aspects of the data at the group level indeed followed the principles of a multisensory account. By contrast, quantitative evidence from a comprehensive model evaluation indicated that temporal binding cannot be reduced to multisensory integration. Rather, multisensory integration should be seen as one of several component processes underlying temporal binding at the individual level.
Repeated rock, paper, scissors play reveals limits in adaptive sequential behavior
How do people adapt to others in adversarial settings? Prior work has shown that people often violate rational models of adversarial decision-making in repeated interactions. In particular, in mixed strategy equilibrium (MSE) games, where optimal action selection entails choosing moves randomly, people often do not play randomly, but instead try to outwit their opponents. However, little is known about the adaptive reasoning that underlies these deviations from random behavior. Here, we examine strategic decision-making across repeated rounds of rock, paper, scissors, a well-known MSE game. In experiment 1, participants were paired with bot opponents that exhibited distinct stable move patterns, allowing us to identify the bounds of the complexity of opponent behavior that people can detect and adapt to. In experiment 2, bot opponents instead exploited stable patterns in the human participants' moves, providing a symmetrical bound on the complexity of patterns people can revise in their own behavior. Across both experiments, people exhibited a robust and flexible attention to transition patterns from one move to the next, exploiting these patterns in opponents and modifying them strategically in their own moves. However, their adaptive reasoning showed strong limitations with respect to more sophisticated patterns. Together, these results provide a precise and consistent account of the surprisingly limited scope of people's adaptive decision-making in this setting.
Cognitive complexity explains processing asymmetry in judgments of similarity versus difference
Human judgments of similarity and difference are sometimes asymmetrical, with the former being more sensitive than the latter to relational overlap, but the theoretical basis for this asymmetry remains unclear. We test an explanation based on the type of information used to make these judgments (relations versus features) and the comparison process itself (similarity versus difference). We propose that asymmetries arise from two aspects of cognitive complexity that impact judgments of similarity and difference: processing relations between entities is more cognitively demanding than processing features of individual entities, and comparisons assessing difference are more cognitively complex than those assessing similarity. In Experiment 1 we tested this hypothesis for both verbal comparisons between word pairs, and visual comparisons between sets of geometric shapes. Participants were asked to select one of two options that was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard; on ambiguous trials, one option was more featurally similar to the standard, whereas the other was more relationally similar. Given the higher cognitive complexity of processing relations and of assessing difference, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was replicated using more complex story stimuli (Experiment 2). We showed that this pattern can be captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments.
The structure and development of explore-exploit decision making
A critical component of human learning reflects the balance people must achieve between focusing on the utility of what they know versus openness to what they have yet to experience. How individuals decide whether to explore new options versus exploit known options has garnered growing interest in recent years. Yet, the component processes underlying decisions to explore and whether these processes change across development remain poorly understood. By contrasting a variety of tasks that measure exploration in slightly different ways, we found that decisions about whether to explore reflect (a) random exploration that is not explicitly goal-directed and (b) directed exploration to purposefully reduce uncertainty. While these components similarly characterized the decision-making of both youth and adults, younger participants made decisions that were less strategic, but more exploratory and flexible, than those of adults. These findings are discussed in terms of how people adapt to and learn from changing environments over time. Data are available on the Open Science Framework platform (osf.io).
Optimizing competence in the service of collaboration
In order to efficiently divide labor with others, it is important to understand what our collaborators can do (i.e., their competence). However, competence is not static-people get better at particular jobs the more often they perform them. This plasticity of competence creates a challenge for collaboration: For example, is it better to assign tasks to whoever is most competent now, or to the person who can be trained most efficiently "on-the-job"? We conducted four experiments (N=396) that examine how people make decisions about whom to train (Experiments 1 and 3) and whom to recruit (Experiments 2 and 4) to a collaborative task, based on the simulated collaborators' starting expertise, the training opportunities available, and the goal of the task. We found that participants' decisions were best captured by a planning model that attempts to maximize the returns from collaboration while minimizing the costs of hiring and training individual collaborators. This planning model outperformed alternative models that based these decisions on the agents' current competence, or on how much agents stood to improve in a single training step, without considering whether this training would enable agents to succeed at the task in the long run. Our findings suggest that people do not recruit and train collaborators based solely on their current competence, nor solely on the opportunities for their collaborators to improve. Instead, people use an intuitive theory of competence to balance the costs of hiring and training others against the benefits to the collaboration.
A unified account of simple and response-selective inhibition
Response inhibition is a key attribute of human executive control. Standard stop-signal tasks require countermanding a single response; the speed at which that response can be inhibited indexes the efficacy of the inhibitory control networks. However, more complex stopping tasks, in which one or more components of a multi-component action are cancelled (i.e., response-selective stopping), cannot be explained by the independent-race model appropriate for the simple task (Logan & Cowan, 1984). Healthy human participants (n=28; 10 male; 19-40 years) completed a response-selective stopping task in which a 'go' stimulus required simultaneous (bimanual) button presses in response to left- and right-pointing green arrows. On a subset of trials (30%), one or both arrows turned red (constituting the stop signal), requiring that only the button-press(es) associated with red arrows be cancelled. Electromyographic recordings from both index fingers (first dorsal interosseous) permitted the assessment of both voluntary motor responses that resulted in overt button presses and activity that was cancelled prior to an overt response (i.e., partial, or covert, responses). We propose a simultaneously inhibit and start (SIS) model that extends the independent race model and provides a highly accurate account of response-selective stopping data. Together with fine-grained EMG analysis, our model-based analysis offers converging evidence that the selective-stop signal simultaneously triggers a process that stops the bimanual response and triggers a new unimanual response corresponding to the green arrow. Our results require a reconceptualisation of response-selective stopping and offer a tractable framework for assessing such tasks in healthy and patient populations.
Significance Statement: Response inhibition is a key attribute of human executive control, frequently investigated using the stop-signal task. After initiating a motor response to a go signal, a stop signal occasionally appears at a delay, requiring cancellation of the response. This has been conceptualised as a 'race' between the go and stop processes, with successful (or failed) cancellation determined by which process wins the race. Here we provide a novel computational model for a complex variation of the stop-signal task, in which only one component of a multicomponent action needs to be cancelled. We provide compelling muscle activation data that support our model, providing a robust and plausible framework for studying these complex inhibition tasks in both healthy and pathological cohorts.
Dual-process modeling of sequential decision making in the balloon analogue risk task
People are often faced with repeated risky decisions that involve uncertainty. In sequential risk-taking tasks, like the Balloon Analogue Risk Task (BART), the underlying decision process is not yet fully understood. Dual-process theory proposes that human cognition involves two main families of processes, often referred to as System 1 (fast and automatic) and System 2 (slow and conscious). We cross models of the BART with different architectures of the two systems to yield a pool of computational dual-process models that are evaluated on multiple performance measures (e.g., parameter identifiability, model recovery, and predictive accuracy). Results show that the best-performing model configuration assumes the two systems are competitively connected, an evaluation process based on the Scaled Target Learning model of the BART, and an assessment rate that incorporates sensitivity to the trial number, pumping opportunity, and bias to engage in System 1. Findings also shed light on how modeling choices and response times in a dual-process framework can benefit our understanding of sequential risk-taking behavior.
Anaphoric distance dependencies in visual narrative structure and processing
Linguistic syntax has often been claimed as uniquely complex due to features like anaphoric relations and distance dependencies. However, visual narratives of sequential images, like those in comics, have been argued to use sequencing mechanisms analogous to those in language. These narrative structures include "refiner" panels that "zoom in" on the contents of another panel. Similar to anaphora in language, refiners indexically connect inexplicit referential information in one unit (refiner, pronoun) to a more informative "antecedent" elsewhere in the discourse. Also as in language, refiners can follow their antecedents (anaphoric) or precede them (cataphoric), along with having either proximal or distant connections. Here we explore the constraints on visual narrative refiners created by modulating these features of order and distance. Experiment 1 examined participants' preferences for where refiners are placed in a sequence using a forced-choice test, which revealed that refiners are preferred to follow their antecedents and to have proximal distances from them. Experiment 2 then showed that distance dependencies lead to slower self-paced viewing times. Finally, measurements of event-related brain potentials (ERPs) in Experiment 3 revealed that these patterns evoke brain responses similar to those evoked by referential dependencies in language (i.e., N400, LAN, Nref). Across all three studies, the constraints and (neuro)cognitive responses to refiners parallel those shown for anaphora in language, suggesting domain-general constraints on the sequencing of referential dependencies.
No position-specific interference from prior lists in cued recognition: A challenge for position coding (and other) theories of serial memory
Position-specific intrusions of items from prior lists are rare but important phenomena that distinguish broad classes of theory in serial memory. They are uniquely predicted by position coding theories, which assume items on all lists are associated with the same set of codes representing their positions. Activating a position code activates items associated with it in current and prior lists in proportion to their distance from the activated position. Thus, prior list intrusions are most likely to come from the coded position. Alternative "item-dependent" theories, based on associations between items and contexts built from items, have difficulty accounting for the position specificity of prior list intrusions. We tested the position coding account with a position-cued recognition task designed to produce prior list interference. Cuing a position should activate a position code, which should activate items in nearby positions in the current and prior lists. We presented lures from the prior list to test for position-specific activation in response time and error rate; lures from nearby positions should interfere more. We found no evidence for such interference in 10 experiments, falsifying the position coding prediction. We ran two serial recall experiments with the same materials and found position-specific prior list intrusions. These results challenge all theories of serial memory: Position coding theories can explain the prior list intrusions in serial recall but not the absence of prior list interference in cued recognition. Item-dependent theories can explain the absence of prior list interference in cued recognition but cannot explain the occurrence of prior list intrusions in serial recall.
What's in a sample? Epistemic uncertainty and metacognitive awareness in risk taking
In a fundamentally uncertain world, sound information processing is a prerequisite for effective behavior. Given that information processing is subject to inevitable cognitive imprecision, decision makers should adapt to this imprecision and to the resulting epistemic uncertainty when taking risks. We tested this metacognitive ability in two experiments in which participants estimated the expected value of different number distributions from sequential samples and then bet on their own estimation accuracy. Results show that estimates were imprecise, and this imprecision increased with higher distributional standard deviations. Importantly, participants adapted their risk-taking behavior to this imprecision and hence deviated from the predictions of Bayesian models of uncertainty that assume perfect integration of information. To explain these results, we developed a computational model that combines Bayesian updating with a metacognitive awareness of cognitive imprecision in the integration of information. Modeling results were robust to the inclusion of an empirical measure of participants' perceived variability. In sum, we show that cognitive imprecision is crucial to understanding risk taking in decisions from experience. The results further demonstrate the importance of metacognitive awareness as a cognitive building block for adaptive behavior under (partial) uncertainty.
Infants can use temporary or scant categorical information to individuate objects
In a standard individuation task, infants see two different objects emerge in alternation from behind a screen. If they can assign distinct categorical descriptors to the two objects, they expect to see both objects when the screen is lowered; if not, they have no expectation at all about what they will see (i.e., two objects, one object, or no object). Why is contrastive categorical information critical for success at this task? According to the kind account, infants must decide whether they are facing a single object with changing properties or two different objects with stable properties, and access to permanent, intrinsic, kind information for each object resolves this difficulty. According to the two-system account, however, contrastive categorical descriptors simply provide the object-file system with unique tags for individuating the two objects and for communicating about them with the physical-reasoning system. The two-system account thus predicts that any type of contrastive categorical information, however temporary or scant it may be, should induce success at the task. Two experiments examined this prediction. Experiment 1 tested 14-month-olds (N = 96) in a standard task using two objects that differed only in their featural properties. Infants succeeded at the task when the object-file system had access to contrastive temporary categorical descriptors derived from the objects' distinct causal roles in preceding support events (e.g., formerly a support, formerly a supportee). Experiment 2 tested 9-month-olds (N = 96) in a standard task using two objects infants this age typically encode as merely featurally distinct. Infants succeeded when the object-file system had access to scant categorical descriptors derived from the objects' prior inclusion in static arrays of similarly shaped objects (e.g., block-shaped objects, cylinder-shaped objects). These and control results support the two-system account's claim that in a standard task, contrastive categorical descriptors serve to provide the object-file system with unique tags for the two objects.
Retrieving effectively from source memory: Evidence for differentiation and local matching processes
The ability to distinguish between different explanations of human memory abilities continues to be the subject of many ongoing theoretical debates. These debates attempt to account for a growing corpus of empirical phenomena in item-memory judgments, which include the list strength effect, the strength-based mirror effect, and output interference. One of the main theoretical contenders is the Retrieving Effectively from Memory (REM) model. We show that REM, in its current form, has difficulties in accounting for source-memory judgments - a situation that calls for its revision. We propose an extended REM model that assumes a local-matching process for source judgments alongside source differentiation. We report a first evaluation of this model's predictions using three experiments in which we manipulated the relative source-memory strength of different lists of items. Analogous to item-memory judgments, we observed a null list strength effect and a strength-based mirror effect in the case of source memory. In a second evaluation, which relied on a novel experiment alongside two previously published datasets, we evaluated the model's predictions regarding the manifestation of output interference in item-memory judgments and its absence in source-memory judgments. Our results showed output interference severely affecting the accuracy of item-memory judgments but having a null or negligible impact on source-memory judgments. Altogether, these results support REM's core notion of differentiation (for both item and source information) as well as the concept of local matching proposed by the present extension.
The perceptual timescape: Perceptual history on the sub-second scale
There is a high-capacity store with a brief time span (∼1000 ms), often called iconic memory or sensory memory, which information enters from perceptual processing. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time-marking information: time distance information, which marks all items of information in the perceptual timescape according to how far in the past they occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.
Interactive structure building in sentence production
How speakers sequence words and phrases remains a central question in cognitive psychology. Here we focused on understanding the representations and processes that underlie structural priming, the speaker's tendency to repeat sentence structures encountered earlier. Verb repetition from the prime to the target led to a stronger tendency to produce locative variants of the spray-load alternation following locative primes (e.g., load the boxes into the van) than following with-variant primes (e.g., load the van with the boxes). These structural variants had the same constituent structure, ruling out abstract syntactic structure as the source of the verb boost effect. Furthermore, using cleft constructions (e.g., What the assistant loaded into the lift was the equipment), we found that the thematic role order (thematic role-position mappings) of the prime can persist separately from its argument structure (thematic role-syntactic function mappings). Moreover, both priming effects were enhanced by verb repetition and interacted with each other when the construction of the prime was also repeated in the target. These findings are incompatible with the traditional staged model of grammatical encoding, which postulates the independence of abstract syntax from thematic role information. We propose the interactive structure-building account, according to which speakers build a sentence structure by choosing a thematic role order and argument structure interactively, based on their prior co-occurrence together with other structurally relevant information such as verbs and constructions.
The impact of cognitive resource constraints on goal prioritization
Many decisions we face daily entail deliberation about how to coordinate resources shared between multiple, competing goals. When time permits, people appear to approach these goal prioritization problems by analytically considering all goal-relevant information to arrive at a prioritization decision. However, it is not yet clear whether this normative strategy extends to situations characterized by resource constraints, such as when deliberation time is scarce or cognitive load is high. We evaluated how limited deliberation time and cognitive load affect goal prioritization decisions across a series of experiments using a gamified experimental task that required participants to make a series of interdependent goal prioritization decisions. We fit several candidate models to the experimental data to identify decision-strategy adaptations at the individual-subject level. Results indicated that participants tended to opt for a simple heuristic strategy when cognitive resources were constrained, rather than making a general tradeoff between speed and accuracy (e.g., the type of tradeoff that would be predicted by evidence accumulation models). The most common heuristic strategy involved disproportionately weighting information about goal deadlines compared to other goal-relevant information such as the goal's difficulty and subjective value.
Modelling orthographic similarity effects in recognition memory reveals support for open bigram representations of letter coding
A variety of letter string representations has been proposed in the reading literature to account for empirically established orthographic similarity effects from masked priming studies. However, these similarity effects have not been explored in episodic memory paradigms, and very few memory models have employed orthographic representations of words. In the current work, through two recognition memory experiments employing word and pseudoword stimuli respectively, we empirically established a set of key orthographic similarity effects for the first time in recognition memory - namely, the substitution, transposition and reverse effects in recognition memory of words and pseudowords, and the importance of the start letter in recognition memory of words. Subsequently, we compared orthographic representations from the reading literature, including slot coding, closed bigrams, open bigrams and the overlap model. Each of these representations was situated in a global matching model and fitted to recognition performance via Luce's choice rule in a hierarchical Bayesian framework. Model selection results showed support for the open-bigram representation in both experiments.
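The open-bigram scheme favored in this abstract can be illustrated concretely: a string is coded as the set of ordered letter pairs separated by at most a few intervening letters, so transposed strings share most of their bigrams. The `max_gap` value and the Dice-coefficient similarity below are common illustrative choices, not necessarily the parameterization used in the paper.

```python
def open_bigrams(word, max_gap=2):
    """Code a string as its set of ordered letter pairs with up to
    max_gap letters intervening between the two members of each pair."""
    return {(word[i], word[j])
            for i in range(len(word))
            for j in range(i + 1, min(i + 2 + max_gap, len(word)))}

def similarity(a, b):
    """Dice coefficient over the two open-bigram sets (one simple way to
    score orthographic overlap in a global matching model)."""
    A, B = open_bigrams(a), open_bigrams(b)
    return 2 * len(A & B) / (len(A) + len(B))
```

Under these settings, a transposition neighbor such as "wrod" scores higher against "word" than a substitution neighbor such as "wurd", mirroring the transposition effect; slot coding, by contrast, penalizes both equally.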
Learning dimensions of meaning: Children's acquisition of but
Connectives such as but are critical for building coherent discourse. They also express meanings that do not fit neatly into the standard distinction between semantics and implicated pragmatics. How do children acquire them? Corpus analyses indicate that children use these words in a sophisticated way by the early pre-school years, yet a small number of experimental studies suggest that children do not understand that but has a contrastive meaning until they reach school age. In a series of eight experiments we tested children's understanding of contrastive but compared to the causal connective so, using a word learning paradigm (e.g., It was a warm day but/so Katy put on a pagle). When the connective so was used, we found that even 2-year-olds inferred a novel word meaning that was associated with the sentence context (a t-shirt). However, for the connective but, children did not infer a non-associated contrastive meaning (a winter coat) until age 7. Before that, even 5-year-old children reliably inferred an associated referent, indicating that they failed to correctly assign but a contrastive meaning. Five control experiments ruled out explanations for this pattern based on basic task demands, sentence processing skills or difficulty making adult-like inferences. A sixth experiment identifies one particular context in which 5-year-olds do interpret but contrastively. However, that same context also leads children to interpret so contrastively. We conclude that children's sophisticated production of connectives like but and so masks a major difficulty learning their meanings. We suggest that discourse connectives constitute a class of words whose usage is easy to mimic but whose meanings are difficult to acquire from everyday conversations, with implications for theories of word learning and discourse processing.
Modeling the continuous recognition paradigm to determine how retrieval can impact subsequent retrievals
There are several ways in which retrieval during a memory test can harm memory: (a) retrieval can cause an increase in interference due to the storage of additional information (i.e., item-noise); (b) retrieval can decrease accessibility to studied items due to context drift; and (c) retrieval can result in accuracy being traded for speed as testing progresses. While these mechanisms produce similar outcomes in a study-test paradigm, they are dissociated in the 'continuous' recognition paradigm, where items are presented continuously and a participant's task is to detect a repeat of an item. In this paradigm, context drift results in worse performance with increasing study-test lag (the lag effect), whereas increasing item-noise is evident in a decrease in performance for later test trials in the sequence (the test position effect [TPE]). In the present investigation, we measured the influences of item-noise, context drift, and decision-related factors in a novel continuous recognition dataset using variants of the Osth et al. (2018) global matching model. We fit both choices and response times at the single-trial level using state-of-the-art hierarchical Bayesian methods while incorporating crucial amendments to the modeling framework, including multiple context scales and sequential effects. We found that item-noise was responsible for producing the TPE, context drift decreased the magnitude of the TPE (by diminishing the impact of item-noise), and speed-accuracy changes had some minor effects that varied across participants.
Risky decisions are influenced by individual attributes as a function of risk preference
It has long been assumed in economic theory that multi-attribute decisions - such as risky choices involving probabilities and amounts of money to be earned - are resolved by first combining the attributes of each option to form an overall expected value and then comparing the expected values of the alternative options in a single evidence accumulation process. A plausible alternative would be performing independent comparisons between the individual attributes and then integrating the results of these comparisons afterwards. Here, we devise a novel method to disambiguate between these types of models by orthogonally manipulating the expected value of choice options and the relative salience of their attributes. Our results, based on behavioral measures and drift-diffusion models, provide evidence in favor of the framework in which information about individual attributes independently impacts deliberation. This suggests that risky decisions are resolved by running multiple comparisons between the separate attributes in parallel - possibly alongside an additional comparison of expected values. This result stands in contrast with the assumption of standard economic theory that choices require a single comparison of expected values and suggests that, at the cognitive level, decision processes might be more distributed than commonly assumed. Beyond our planned analyses, we also discovered that attribute salience affects people with different risk preferences in different ways: risk-averse participants seem to focus more on probability, except when the monetary amount is particularly high; risk-neutral/seeking participants, in contrast, seem to focus more on monetary amount, except when the probability is particularly low.
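The drift-diffusion models invoked here can be sketched as a noisy random walk between two response boundaries; the parameter values below are arbitrary illustrations, not estimates from the study.

```python
import random

def ddm_trial(drift, threshold, noise=1.0, dt=0.001):
    """Simulate one drift-diffusion trial: evidence x drifts between two
    boundaries (+threshold / -threshold); returns (chose_upper, RT in s)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return x > 0, t

# With a positive drift rate, the upper boundary (e.g., the option with
# the higher expected value) is reached on most trials.
random.seed(1)
choices = [ddm_trial(drift=2.0, threshold=1.0)[0] for _ in range(200)]
```

In a multi-attribute variant of the kind the abstract argues for, separate walks of this sort would run in parallel for probability and amount, rather than a single walk driven by expected value.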
Evidence accumulation is not essential for generating intertemporal preference: A comparison of dynamic cognitive models of matching tasks
Intertemporal preference has been investigated mainly with a choice paradigm. However, a matching paradigm might be more informative for proper inference about intertemporal preference and a deeper understanding of the underlying cognitive mechanisms. This research involved two empirical studies using the matching paradigm and compared various corresponding dynamic models. These models were developed either within the framework of decision field theory, a prominent theory that assumes evidence accumulation, or within a non-evidence-accumulation framework built upon the well-established notions of random utility and discrimination threshold (i.e., the RUDT framework). Most of these models were alternative-based whereas the others were attribute-based. Participants in Study 1 were required to fill in the amount of an immediate stimulus to make it as attractive as a delayed stimulus, whereas those in Study 2 completed a more general matching task in which either the payoff amount or the delay length of one stimulus was missing. Consistent behavioral regularities regarding both matching values and response times were revealed in these studies. The results of model comparison generally favored the RUDT framework as well as an attribute-based perspective on intertemporal preference. In addition, the predicted matching values and response times of the best RUDT model were highly correlated with the observed data and replicated most observed behavioral regularities. Together, this research and previous modeling work on intertemporal choice suggest that evidence accumulation is not essential for generating intertemporal preference. Future research should examine the validity of the new framework in other preferential decisions for a more stringent test of the framework.