Generating distant analogies increases metaphor production
Although a large body of work has explored the mechanisms underlying metaphor comprehension, less research has focused on spontaneous metaphor production. Previous research suggests that reasoning about analogies can induce a relational mindset, which causes a greater focus on underlying abstract similarities. We explored how inducing a relational mindset may increase the tendency to use metaphors to describe topics. Participants first solved a series of either cross-domain (i.e., far) analogies (kitten:cat::spark:?) to induce a high relational mindset or within-domain (i.e., near) analogies (kitten:cat::puppy:?) (control condition). Next, they received a series of topic descriptions containing either one feature (some jobs are confining) or three features (some jobs are confining, repetitive, and unpleasant), and were asked to provide a summary phrase of the topic. Use of metaphoric language increased when topics contained more features, and was particularly frequent in the high relational mindset condition. This finding suggests that the relational mindset induction may have shifted attention toward abstract comparisons, thereby facilitating the creative use of language involving metaphors.
Correction to: A novel image database for social concepts reveals preference biases in autistic spectrum in adults and children
Modeling dependent group judgments: A computational model of sequential collaboration
Sequential collaboration describes the incremental process of contributing to online collaborative projects such as Wikipedia and OpenStreetMap. After a first contributor creates an initial entry, subsequent contributors create a sequential chain by deciding whether to adjust or maintain the latest entry, which is updated if they decide to make changes. Sequential collaboration has recently been examined as a method for eliciting numerical group judgments. It was shown that in a sequential chain, changes become less frequent and smaller, while judgments become more accurate. Judgments at the end of a sequential chain are similarly accurate and in some cases even more accurate than aggregated independent judgments (wisdom of crowds). This is at least partly due to sequential collaboration allowing contributors to contribute according to their expertise by selectively adjusting judgments. However, there is no formal theory of sequential collaboration. We developed a computational model that formalizes the cognitive processes underlying sequential collaboration. It allows modeling both sequential collaboration and independent judgments, which are used as a benchmark for the performance of sequential collaboration. The model is based on internal distributions of plausible judgments that contributors use to evaluate the plausibility of presented judgments and to provide new judgments. It incorporates individuals' expertise and tendency to adjust presented judgments as well as item difficulty and the effects of the presented judgment on subsequent judgment formation. The model is consistent with previous empirical findings on change probability, change magnitude, and judgment accuracy, incorporating expertise as a driving factor of these effects. Moreover, new predictions for long sequential chains were confirmed by an empirical study. Above and beyond sequential collaboration, the model establishes an initial theoretical framework for further research on dependent judgments.
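The chain dynamics described in this abstract can be illustrated with a toy simulation (a minimal sketch under assumed Gaussian internal distributions, not the authors' actual model; all parameter values are hypothetical):

```python
import random

def sequential_chain(true_value, expertise_sds, criterion=2.0):
    """Toy simulation of sequential collaboration.

    Each contributor draws a private estimate from a normal distribution
    centred on the true value, with an SD reflecting (lack of) expertise.
    The presented judgment is adjusted only when it falls outside the
    contributor's range of plausible values; otherwise it is maintained.
    """
    judgment = random.gauss(true_value, expertise_sds[0])  # initial entry
    chain = [judgment]
    for sd in expertise_sds[1:]:
        own = random.gauss(true_value, sd)  # contributor's private estimate
        if abs(judgment - own) > criterion * sd:
            # presented judgment seems implausible: compromise adjustment
            judgment = (judgment + own) / 2.0
        chain.append(judgment)
    return chain

random.seed(1)
# Later contributors with higher expertise (smaller SDs) tend to pull
# the chain toward the true value of 100.
chain = sequential_chain(100.0, [20.0, 10.0, 5.0, 2.0], criterion=1.0)
```

The sketch reproduces the qualitative pattern in the abstract: with more expert contributors later in the chain, adjustments become smaller and the final judgment tends to be more accurate than the initial entry.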
Parameter identifiability in evidence-accumulation models: The effect of error rates on the diffusion decision model and the linear ballistic accumulator
A variety of different evidence-accumulation models (EAMs) account for common response time and accuracy patterns in two-alternative forced choice tasks by assuming that subjects collect and sum information from their environment until a response threshold is reached. Estimates of model parameters mapped to components of this decision process can be used to explain the causes of observed behavior. However, such explanations are only meaningful when parameters can be identified, that is, when their values can be uniquely estimated from data generated by the model. Prior studies suggest that parameter identifiability is poor when error rates are low but have not systematically compared this issue across different EAMs. We conducted a simulation study investigating the identifiability and estimation properties of model parameters at low error rates in the two most popular EAMs: The diffusion decision model (DDM) and the linear ballistic accumulator (LBA). We found poor identifiability at low error rates for both models but less so for the DDM and for a larger number of trials. The DDM also showed better identifiability than the LBA at low trial numbers for a design with a manipulation of response caution. Based on our results, we recommend tasks with error rates between 15% and 35% for small, and between 5% and 35% for large trial numbers. We explain the identifiability problem in terms of trade-offs caused by correlations between decision-threshold and accumulation-rate parameters and discuss why the models differ in terms of their estimation properties.
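The accumulation-to-threshold idea shared by both models can be sketched as a discrete-time random walk. The snippet below (an illustrative approximation with hypothetical parameter values, not a full DDM with starting-point or non-decision parameters) also shows how boundary separation controls the error rate, the quantity central to the identifiability results above:

```python
import random

def simulate_ddm(drift, threshold, n_trials=1000, dt=0.001, noise=1.0):
    """Discrete-time approximation of a diffusion decision process.

    Evidence starts midway between boundaries 0 and `threshold` and
    drifts upward at rate `drift` with Gaussian noise; hitting the
    upper boundary counts as a correct response, the lower boundary
    as an error.  Returns the observed error rate.
    """
    errors = 0
    step_sd = noise * dt ** 0.5
    for _ in range(n_trials):
        x = threshold / 2.0
        while 0.0 < x < threshold:
            x += drift * dt + random.gauss(0.0, step_sd)
        errors += x <= 0.0
    return errors / n_trials

random.seed(0)
# Same drift rate, different response caution (boundary separation):
low = simulate_ddm(drift=3.0, threshold=2.0)   # wide boundaries -> few errors
high = simulate_ddm(drift=3.0, threshold=0.5)  # narrow boundaries -> more errors
```

Because a high drift rate and wide boundaries both push error rates toward zero, different parameter combinations produce nearly identical data in that regime, which is one way to see the threshold/accumulation-rate trade-off the abstract describes.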
Visual attention matters during word recognition: A Bayesian modeling approach
It is striking that visual attention, the process by which attentional resources are allocated in the visual field so as to locally enhance visual perception, is a pervasive component of models of eye movements in reading, but is seldom considered in models of isolated word recognition. We describe BRAID, a new Bayesian word-Recognition model with Attention, Interference and Dynamics. As most of its predecessors, BRAID incorporates three sensory, perceptual, and orthographic knowledge layers together with a lexical membership submodel. Its originality resides in also including three mechanisms that modulate letter identification within strings: an acuity gradient, lateral interference, and visual attention. We calibrated the model such that its temporal scale was consistent with behavioral data, and then explored the model's capacity to generalize to other, independent effects. We evaluated the model's capacity to account for the word length effect in lexical decision, for the optimal viewing position effect, and for the interaction of crowding and frequency effects in word recognition. We further examined how these effects were modulated by variations in the visual attention distribution. We show that visual attention modulates all three effects and that a narrow distribution of visual attention results in performance patterns that mimic those reported in impaired readers. Overall, the BRAID model could be conceived as a core building block, towards the development of integrated models of reading aloud and eye movement control, or of visual recognition of impaired readers, or any context in which visual attention does matter.
The influence of increasing color variety on numerosity estimation and counting
Previous research has suggested that numerosity estimation and counting are closely related to distributed and focused attention, respectively (Chong & Evans, WIREs Cognitive Science, 2(6), 634-638, 2011). Given the critical role of color in guiding attention, this study investigated its effects on numerosity processing by manipulating both color variety (single color, medium variety, high variety) and spatial arrangement (clustered, random). Results from the estimation task revealed that high color variety led to a perceptual bias towards larger quantities, regardless of whether colors were clustered or randomly arranged. This implies that distributed attention may engage in a global assessment of color richness, with less emphasis on spatial arrangement. In contrast, the effect of color on counting was influenced by spatial arrangement: performance improved with clustered colors but declined with random color distribution. This indicates that color interacts with spatial information to modulate focused attention during serial numerosity processing. Taken together, our findings provide new insights into the interaction between numerical cognition and attention, highlighting the need for theories and models of numerical cognition to take into account feature variety and contextual factors, such as the spatial arrangement of features. Additionally, in light of the widespread diversity in real-world environments, our findings could inform strategies to enhance behavioral adaptation to varying environmental conditions.
Noisy-channel language comprehension in aphasia: A Bayesian mixture modeling approach
Individuals with "agrammatic" receptive aphasia have long been known to rely on semantic plausibility rather than syntactic cues when interpreting sentences. In contrast to early interpretations of this pattern as indicative of a deficit in syntactic knowledge, a recent proposal views agrammatic comprehension as a case of "noisy-channel" language processing with an increased expectation of noise in the input relative to healthy adults. Here, we investigate the nature of the noise model in aphasia and whether it is adapted to the statistics of the environment. We first replicate findings that a) healthy adults (N = 40) make inferences about the intended meaning of a sentence by weighing the prior probability of an intended sentence against the likelihood of a noise corruption and b) their estimate of the probability of noise increases when there are more errors in the input (manipulated via exposure sentences). We then extend prior findings that adults with chronic post-stroke aphasia (N = 28) and healthy age-matched adults (N = 19) similarly engage in noisy-channel inference during comprehension. We use a hierarchical latent mixture modeling approach to account for the fact that rates of guessing are likely to differ between healthy controls and individuals with aphasia and capture individual differences in the tendency to make inferences. We show that individuals with aphasia are more likely than healthy controls to draw noisy-channel inferences when interpreting semantically implausible sentences, even when group differences in the tendency to guess are accounted for. While healthy adults rapidly adapt their inference rates to an increase in noise in their input, whether individuals with aphasia do the same remains equivocal. Further investigation of comprehension through a noisy-channel lens holds promise for a parsimonious understanding of language processing in aphasia and may suggest potential avenues for treatment.
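The core noisy-channel computation, weighing the prior probability of an intended sentence against the likelihood of a noise corruption, can be sketched as follows (a minimal illustration assuming an exponential noise likelihood over edit distance; the sentence candidates, probabilities, and rate values are hypothetical, not the study's stimuli or model):

```python
import math

def noisy_channel_posterior(priors, edit_distances, noise_rate):
    """Posterior over candidate intended sentences.

    Each candidate's score is its prior probability times the likelihood
    that noise corrupted it into the perceived sentence; here that
    likelihood simply decays exponentially with the number of edits
    required, and `noise_rate` plays the role of the comprehender's
    expectation of noise in the input.
    """
    scores = {s: p * math.exp(-edit_distances[s] / noise_rate)
              for s, p in priors.items()}
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

# Perceiving an implausible sentence: interpret it literally (0 edits)
# or infer a plausible intended sentence one edit away.
priors = {"literal (implausible)": 0.01, "inferred (plausible)": 0.99}
dists = {"literal (implausible)": 0, "inferred (plausible)": 1}
low_noise = noisy_channel_posterior(priors, dists, noise_rate=0.2)
high_noise = noisy_channel_posterior(priors, dists, noise_rate=2.0)
```

Raising the noise expectation shifts the posterior toward the plausible inferred reading, mirroring both the exposure manipulation in healthy adults and the elevated inference rates reported for individuals with aphasia.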
Do we feel colours? A systematic review of 128 years of psychological research linking colours and emotions
Colour is an integral part of natural and constructed environments. For many, it also has an aesthetic appeal, with some colours being more pleasant than others. Moreover, humans seem to systematically and reliably associate colours with emotions, such as yellow with joy, black with sadness, light colours with positive and dark colours with negative emotions. To systematise such colour-emotion correspondences, we identified 132 relevant peer-reviewed articles published in English between 1895 and 2022. These articles covered a total of 42,266 participants from 64 different countries. We found that all basic colour categories had systematic correspondences with affective dimensions (valence, arousal, power) as well as with discrete affective terms (e.g., love, happy, sad, bored). Most correspondences were many-to-many, with systematic effects driven by lightness, saturation, and hue ('colour temperature'). More specifically, (i) LIGHT and DARK colours were associated with positive and negative emotions, respectively; (ii) RED with empowering, high arousal positive and negative emotions; (iii) YELLOW and ORANGE with positive, high arousal emotions; (iv) BLUE, GREEN, GREEN-BLUE, and WHITE with positive, low arousal emotions; (v) PINK with positive emotions; (vi) PURPLE with empowering emotions; (vii) GREY with negative, low arousal emotions; and (viii) BLACK with negative, high arousal emotions. Shared communication needs might explain these consistencies across studies, making colour an excellent medium for communication of emotion. As most colour-emotion correspondences were tested on an abstract level (i.e., associations), it remains to be seen whether such correspondences translate to the impact of colour on experienced emotions and specific contexts.
Increased attention towards progress information near a goal state
A growing body of evidence across psychology suggests that (cognitive) effort exertion increases in proximity to a goal state. For instance, previous work has shown that participants respond more quickly, but not less accurately, when they near a goal - as indicated by a filling progress bar. Yet it remains unclear when, over the course of a cognitively demanding task, people monitor progress information: Do they continuously monitor their goal progress over the course of a task, or attend more frequently to it as they near their goal? To answer this question, we used eye-tracking to examine trial-by-trial changes in progress monitoring as participants completed blocks of an attentionally demanding oddball task. Replicating past work, we found that participants increased cognitive effort exertion near a goal, as evinced by an increase in correct responses per second. More interestingly, we found that the rate at which participants attended to goal progress information - operationalized here as the frequency of gazes towards a progress bar - increased steeply near a goal state. In other words, participants extracted information from the progress bar at a higher rate when goals were proximal (versus distal). In an exploratory analysis of tonic pupil diameter, we also found that tonic pupil size increased sharply as participants approached a goal state, mirroring the pattern of gaze. These results support the view that people attend to progress information more as they approach a goal.
Distinct detection and discrimination sensitivities in visual processing of real versus unreal optic flow
We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both "real" optic flow stimuli containing information about self-movement in a three-dimensional scene and "unreal" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli. Furthermore, while discrimination sensitivity to the CoM location was not affected by stimulus duration for unreal optic flow stimuli, it showed a significant improvement when stimulus duration increased from 100 to 400 ms for real optic flow stimuli. These findings provide compelling evidence that the visual system employs distinct processing approaches for real versus unreal optic flow even when they are perfectly matched for two-dimensional global features and local motion signals. These differences reveal influences of self-movement in natural environments, enabling the visual system to uniquely process stimuli with significant survival implications.
The cost of perspective switching: Constraints on simultaneous activation
Visual perspective taking often involves transitioning between perspectives, yet the cognitive mechanisms underlying this process remain unclear. The current study draws on insights from task- and language-switching research to address this gap. In Experiment 1, 79 participants judged the perspective of an avatar positioned in various locations, observing either the rectangular or the square side of a rectangular cuboid hanging from the ceiling. The avatar's perspective was either consistent or inconsistent with the participant's, and its computation sometimes required mental transformation. The task included both single-position blocks, in which the avatar's location remained fixed across all trials, and mixed-position blocks, in which the avatar's position changed across trials. Performance was compared across trial types and positions. In Experiment 2, 126 participants completed a similar task administered online, with more trials, and performance was compared at various points within the response time distribution (vincentile analysis). Results revealed a robust switching cost. However, mixing costs, which reflect the ability to maintain multiple task sets active in working memory, were absent, even in slower response times. Additionally, responses to the avatar's position varied as a function of consistency with the participants' viewpoint and the angular disparity between them. These findings suggest that perspective switching is costly, people cannot activate multiple perspectives simultaneously, and the computation of other people's visual perspectives varies with cognitive demands.
Taking time: Auditory statistical learning benefits from distributed exposure
In an auditory statistical learning paradigm, listeners learn to partition a continuous stream of syllables by discovering the repeating syllable patterns that constitute the speech stream. Here, we ask whether auditory statistical learning benefits from spaced exposure compared with massed exposure. In a longitudinal online study on Prolific, we exposed 100 participants to the regularities in a spaced way (i.e., with exposure blocks spread out over 3 days) and another 100 in a massed way (i.e., with all exposure blocks lumped together on a single day). In the exposure phase, participants listened to streams composed of pairs while responding to a target syllable. The spaced and massed groups exhibited equal learning during exposure, as indicated by a comparable response-time advantage for predictable target syllables. However, in terms of resulting long-term knowledge, we observed a benefit from spaced exposure. Following a 2-week delay period, we tested participants' knowledge of the pairs in a forced-choice test. While both groups performed above chance, the spaced group had higher accuracy. Our findings speak to the importance of the timing of exposure to structured input, also for statistical learning outside the laboratory (e.g., in language development), and imply that current investigations of auditory statistical learning likely underestimate human statistical learning abilities.
Age-related differences in information, but not task control in the color-word Stroop task
Older adults have been found to struggle with tasks that require cognitive control. One task that measures the ability to exert cognitive control is the color-word Stroop task. Almost all studies that tested cognitive control in older adults using the Stroop task have focused on one type of control - information control. In the present work, we ask whether older adults also show a deficit in another type of cognitive control - task control. To that end, we tested older and younger adults by isolating and measuring two types of conflict - information conflict and task conflict. Information conflict was measured by the difference between color identification of incongruent color words and color identification of neutral words, while task conflict was measured by the difference between color identification of neutral words and color identification of neutral symbols and by the reverse facilitation effect. We tested how the behavioral markers of these two types of conflict are affected under low task control conditions, which is essential for measuring task conflict behaviorally. Older adults demonstrated a deficit in information control by showing a larger information conflict marker, but not in task control markers, as no differences in task conflict were found between younger and older adults. These findings support previous studies that argue against theories that link the larger Stroop interference in older adults to a generic slowdown or a generic inhibitory failure. We discuss the relevance of the results and future research directions in line with other Stroop studies that tested age-related differences in different control mechanisms.
The impact of relative word-length on effects of non-adjacent word transpositions
A recent study (Wen et al., Journal of Experimental Psychology: Human Perception and Performance, 50: 934-941, 2024) found no influence of relative word-length on transposed-word effects. However, following the tradition of prior research on effects of transposed words during sentence reading, the transposed words in that study were adjacent words (words at positions 2 and 3 or 3 and 4 in five-word sequences). We surmised that the absence of an influence of relative word-length might be due to word identification being too precise when the two words are located close to eye-fixation location, hence cancelling the impact of more approximate indices of word identity such as word length. We therefore hypothesized that relative word-length might affect transposed-word effects when the transposition involves non-adjacent words. The present study put this hypothesis to the test and found that relative word-length does modify the size of transposed-word effects with non-adjacent transpositions. Transposed-word effects are greater when the transposed words have the same length. Furthermore, a cross-study analysis confirmed that transposed-word effects are greater for adjacent than for non-adjacent transpositions.
Memories of hand movements are tied to speech through learning
Hand movements frequently occur with speech. The extent to which the memories that guide co-speech hand movements are tied to the speech they occur with is unclear. Here, we paired the acquisition of a new hand movement with speech. Thirty participants adapted a ballistic hand movement of a joystick to a visuomotor rotation either in isolation or while producing a word in time with their movements. Within participants, the after-effect of adaptation (i.e., the motor memory) was examined with or without coincident speech. After-effects were greater for hand movements produced in the context in which adaptation occurred - i.e., with or without speech. In a second experiment, 30 new participants adapted a hand movement while saying the words "tap" or "hit". After-effects were greater when hand movements occurred with the specific word produced during adaptation. The results demonstrate that memories of co-speech hand movements are partially tied to the speech they are learned with. The findings have implications for theories of sensorimotor control and our understanding of the relationship between gestures, speech and meaning.
Product, not process: Metacognitive monitoring of visual performance during sustained attention
The performance of the human visual system exhibits moment-to-moment fluctuations influenced by multiple neurocognitive factors. To deal with this instability of the visual system, introspective awareness of current visual performance (metacognitive monitoring) may be crucial. In this study, we investigate whether and how people can monitor their own visual performance during sustained attention by adopting confidence judgments as indicators of metacognitive monitoring - assuming that if participants can monitor visual performance, confidence judgments will accurately track performance fluctuations. In two experiments (N = 40), we found that participants were able to monitor fluctuations in visual performance during sustained attention. Importantly, metacognitive monitoring largely relied on the quality of target perception, a product of visual processing ("I lack confidence in my performance because I only caught a glimpse of the target"), rather than the states of the visual system during visual processing ("I lack confidence because I was not focusing on the task").
Nudges for people who think
The naiveté of the dominant 'cognitive-miser' metaphor of human thinking hampers theoretical progress in understanding how and why subtle behavioural interventions-'nudges'-could work. We propose a reconceptualization that places the balance in agency between, and the alignment of representations held by, people and choice architects as central to determining the prospect of observing behaviour change. We argue that two aspects of representational (mis)alignment are relevant: cognitive (how people construe the factual structure of a decision environment) and motivational (the importance of a choice to an individual). Nudging thinkers via the alignment of representations provides a framework that offers theoretical and practical advances and avoids disparaging people's cognitive capacities.
Cracking arbitrariness: A data-driven study of auditory iconicity in spoken English
Auditory iconic words display a phonological profile that imitates their referents' sounds. Traditionally, those words are thought to constitute a minor portion of the auditory lexicon. In this article, we challenge this assumption by assessing the pervasiveness of onomatopoeia in the English auditory vocabulary through a novel data-driven procedure. We embed spoken words and natural sounds into a shared auditory space through (a) a short-time Fourier transform, (b) a convolutional neural network trained to classify sounds, and (c) a network trained on speech recognition. Then, we employ the obtained vector representations to measure their objective auditory resemblance. These similarity indexes show that imitation is not limited to some circumscribed semantic categories, but instead can be considered as a widespread mechanism underlying the structure of the English auditory vocabulary. Finally, we empirically validate our similarity indexes as measures of iconicity against human judgments.
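Once words and natural sounds live in a shared vector space, resemblance can be quantified with a standard similarity measure. The sketch below illustrates the idea with cosine similarity; the three-dimensional embeddings and the word/sound labels are entirely hypothetical (the study derives its representations from STFTs and trained networks):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors: the dot product
    of the vectors divided by the product of their lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for a spoken word and two natural sounds.
word_buzz = [0.9, 0.1, 0.3]
sound_bee = [0.8, 0.2, 0.25]   # imitated referent: high resemblance
sound_door = [0.1, 0.9, 0.5]   # unrelated sound: low resemblance
iconicity = cosine_similarity(word_buzz, sound_bee)
baseline = cosine_similarity(word_buzz, sound_door)
```

An iconicity index of this kind, computed for a word against the sounds of its referent versus unrelated sounds, is the sort of objective resemblance measure that can then be validated against human judgments.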
Putting the prime in priming: Using prime processing behavior to predict target structural processing
Structural priming effects are widespread and heavily relied upon to assess structural representation and processing. Whether these effects are caused by error-driven implicit learning, residual activation, a combination of these, or some other learning mechanism remains to be established. The current study used preexisting data and a novel data analysis approach that links processing at the prime to later processing at the target to better understand the nature of structural priming. This novel analytic approach was applied to total reading times from a previously published structural priming study in comprehension, which provided processing measures of the structurally critical regions of prime reduced-relative clause sentences. These were then used as predictors in a series of hierarchical linear models where analogous processing measures at the target sentence regions served as outcome variables. Separate sets of models were run for prime-target pairs that had the same structure (i.e., abstract priming) and those that had the same structure and initial verb (i.e., a lexical boost). Prime-to-target processing relationships were observed for both types of prime-target pairs, but showed very different patterns. This provides support for the claim that abstract priming effects and the lexical boost are caused by different mechanisms. Additionally, the observed effects were positive and so do not support the error-driven learning prediction that processing difficulty at the prime should lead to greater facilitation at the target. Overall, this novel method provides a new tool for investigating structural priming and processing.
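The core of the prime-to-target analysis is regressing a processing measure at the target on the analogous measure at the prime. The sketch below shows only the simple-regression backbone of that idea, fit by ordinary least squares; the reading times are invented for illustration and the published analysis used hierarchical linear models:

```python
def fit_ols(x, y):
    """Ordinary least-squares slope and intercept for y ~ x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
             / sum((a - mean_x) ** 2 for a in x))
    return slope, mean_y - slope * mean_x

# Hypothetical total reading times (ms) for the structurally critical
# region of prime and target sentences in matched pairs.
prime_rt = [620, 580, 700, 650, 610, 690]
target_rt = [600, 560, 660, 640, 590, 670]
slope, intercept = fit_ols(prime_rt, target_rt)
```

A positive slope corresponds to the positive prime-to-target relationship reported in the abstract: longer reading times at the prime predict longer, not shorter, reading times at the target, contrary to the error-driven learning prediction.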
The mnemonic potency of functional facts
Learning and remembering what things are used for is a capacity that is central to successfully living in any human culture. The current paper investigates whether functional facts (information about what an object is used for) are remembered more efficiently than nonfunctional facts. Experiment 1 presented participants with images of functionally ambiguous objects associated with a (made-up) name and a (made-up) fact that could relate either to the object's function or to something nonfunctional. Results show that recall of object names did not depend on whether they were associated with a functional or nonfunctional fact, while recall of the functional facts was significantly better than recall of the nonfunctional facts. The second experiment replicated this main effect and further found that functional facts are remembered more efficiently after they have been associated with confirmatory (as opposed to disconfirmatory) feedback. It is suggested that semantic information is not unitary, and that one way of categorising semantic information is in terms of its adaptive relevance. Potential mechanisms are proposed and discussed, along with suggestions for future research.
Lexical integration of novel words learned through natural reading
Lexical competition between newly acquired and already established representations of written words is considered a marker of word integration into the mental lexicon. To date, studies about the emergence of lexical competition involved mostly artificial training procedures based on overexposure and explicit instructions for memorization. Yet, in real life, novel word encounters occur mostly without explicit learning intent, through reading texts with words appearing rarely. This study examined the lexical integration of words learned through text reading. In Experiment 1, two groups of participants read a short book with embedded novel words. Only one group was asked to memorize the unfamiliar words. In the semantic categorization task, we found evidence for lexical competition with slower responses to existing orthographic neighbors (e.g., hublot) of the newly learned words (e.g., hubbot) than to a set of matched items. This effect was found independently of the group 24 h after initial exposure. In addition, a facilitation pattern was observed immediately after the reading session. However, post hoc analyses suggested that the competition effect was mainly driven by the data from the group receiving explicit learning instructions. Experiment 2 aimed to replicate the findings obtained in the group without explicit learning instructions. The results revealed the same pattern, characterized by a facilitatory effect immediately after the reading session and an inhibitory effect 24 h after the exposure. Overall, these results showed that lexical competition emerged from a naturalistic reading after a delay, regardless of whether participants were asked to learn novel words or not.