Integrated encoding of relations and objects in visual working memory
Comprehensive understanding of visual scenes necessitates grasping the relations among visual objects. Given the potentially pivotal role of visual working memory (VWM) in processing visual relations, it is important to investigate the representation of relations in VWM. In our previous study, we proposed the integrated storage hypothesis, postulating that relations and objects are stored together as an integrated structured representation in VWM. The present study aimed to test this hypothesis against the alternative separate encoding hypothesis by probing the irrelevant-distracting effect. Across three experiments, where participants memorized object shapes/colors while disregarding relations, an irrelevant-distracting effect was consistently observed across varying types of changes in relation and set sizes. Critically, recombining the probe with the irrelevant relation from another memory item (Experiment 2) or reversing the relational roles of probed objects relative to the memory item (Experiment 3) was perceived as inconsistent with stored representations and impaired change detection. These findings supported the integrated storage hypothesis, indicating that the dynamic relations between objects are automatically encoded alongside object identities to form an integrated structured representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Flexible use of facial features supports face identity processing
People prioritize diagnostic features in classification tasks. However, it is not clear whether this priority is fixed or is flexibly applied depending on the specific classification decision, or how feature use contributes to individual differences in performance. Here we examined whether flexibility in the features used in a face identification task supports face recognition ability. In Experiment 1, we show that the facial features most useful for identification vary, to a surprising degree, depending on the specific face identity comparison at hand. While the ears and eyes were the most diagnostic for face identification in general, they were the most diagnostic feature for just 22% and 14% of identity decisions, respectively. In three subsequent experiments, we find that flexibility in feature use contributes to an individual's face identity matching ability. Higher face identification accuracy was associated with being aware of (Experiments 2 and 4) and attending to (Experiments 3 and 4) the most diagnostic features for a specific facial comparison. This conferred an enhanced benefit relative to focusing on features that were diagnostic of face identity decisions in general (Experiment 4). We conclude that adaptability in information sampling supports face recognition ability and discuss theoretical and applied implications. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
No need to execute: Omitted responses still yield response-response binding effects
In the literature on human action control, the binding and retrieval of responses are assumed to shape the coordination of more complex actions. Specifically, the consecutive execution of two responses is assumed to result in their integration into a cognitive representation (a so-called event file), from which they can be retrieved upon later response repetition, thereby influencing behavior. Against the background of ideomotor theory and more recent theorizing in the binding and retrieval in action control framework (Frings et al., 2020), we investigated whether response execution is necessary for the binding and retrieval of responses. We manipulated whether the retrieving response (Experiment 1), as well as the to-be-bound response (Experiment 2), was executed or omitted. The results showed that responses do not need to be executed to retrieve other responses or to be bound to other responses. Apparently, activating the cognitive representation of a response sufficed for this response to trigger event file binding and retrieval. Our results are the first to show that response-response binding does not depend on executing responses. Together, the results support the core assumptions of ideomotor theory and the binding and retrieval in action control framework, namely a common coding of action and perception. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Perceived duration of visual stimuli contracts due to crowding
Recent research on duration perception suggests that duration encoding is not a single general process but involves several separate processes, some of which are specific to the visual modality. Moreover, different functional aspects of visual processing can influence duration perception in distinct ways. One of the most important functions of the visual system is to identify and recognize features, shapes, and objects. However, it is still unclear whether and how computations related to these processes affect duration perception. To clarify this issue, we used the spatial crowding phenomenon, which allows the dissociation of low-level feature extraction from high-level processes such as object recognition. We created letter and vernier stimuli matched for their low-level properties but different in their discriminability due to spatial crowding. Here, we show that stimuli that became more difficult to discriminate appeared shorter in duration (data collected in 2019-2023). This difference in perceived duration could not be explained by low-level stimulus properties, cognitive bias due to discriminability, or perceived stimulus onsets or offsets. These results suggest the existence of time-sensitive structures specific to the visual processing of features, shapes, and objects that are affected by crowding. These findings support the notion of distributed timing mechanisms in the visual system. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Task-irrelevant inputs alter ensemble representations of faces within the spatial focus of attention
Spatial attention enhances processing of information, but how does unattended and task-irrelevant information influence visual processing within the spatial focus of attention? We tested this by asking participants to extract the average emotional expression of a set of sequentially presented faces while simultaneously presenting task-irrelevant faces at a spatially unattended and task-irrelevant location. Across several experiments, we found that participants' reports of the emotional expression of faces at the attended location were biased toward the task-irrelevant faces. For example, when happier faces were presented at the unattended location, participants were biased to perceive the attended faces as happier. A control experiment in which participants were asked to also detect probes at cued and uncued locations showed that spatial attention was directed toward the cued location as instructed. Together, these results reveal that unattended and task-irrelevant inputs not only affect the efficiency of target processing, for example by slowing responses or lowering accuracy, but can also systematically bias ensemble representations within the spatial focus of attention. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
My turn or yours? Me-you-distinction in feature-based action planning
Binding accounts propose that action planning involves temporarily binding codes of the action's unique features, such as its location and duration. Such binding becomes evident when another action (B) is initiated while Action Plan A is maintained. Action B is usually impaired if it partially overlaps with the planned Action A (as opposed to full or no feature overlap). In Experiment 1, in which participants bimanually operated two keys, we replicated these partial overlap costs. In Experiment 2, two participants sat side by side, each handling one key. We tested whether Action B would be affected by duration overlap with the planned Action A of another person in the same way as by duration overlap with a planned Action A of the participant's other hand. Here, we found no partial overlap costs. However, in Experiment 3, introducing a common reward yielded partial overlap costs. This suggests that in joint action planning, another person's action plan can impact one's own actions through feature binding, but only with sufficient incentives to corepresent the other's actions (i.e., when goal achievement depends on both participants' performance). This furthers the understanding of how we represent other people's yet-to-be-executed action plans alongside our own. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Approach versus avoidance and the polarity principle: On an unrecognized ambiguity of the approach/avoidance paradigm
The present study examined the role of polarity correspondence (Proctor & Cho, 2006) in the approach/avoidance task. It was hypothesized that the typically found approach/avoidance effect could (at least in part) be explained by matching polarities of the stimuli and the response alternatives. To test this hypothesis, the polarity of the stimuli was manipulated in three experiments. Experiment 1 showed that two neutral categories elicited an approach/avoidance asymmetry similar to that typically found for positive and negative stimuli when the categorization of stimuli was framed as "yes (Category A)" versus "no (not Category A)." This pattern is explained by assuming a polarity match between the "yes" category and the approach response. Experiment 2 used positive (flowers) versus negative (insects) categories. In a control condition, a typical compatibility effect was found (i.e., positive [negative] items relatively facilitated approach [avoidance]). However, when the task consisted of categorizing insects as the + polarity ("yes, insect" vs. "no, no insect"), the compatibility effect reversed, whereas it was significantly increased when flowers were the "yes" category. In Experiment 3, the polarity of positive/negative stimuli (flowers/insects) was manipulated prior to completion of a standard approach/avoidance task with flowers and insects as stimuli. Approximately the same pattern of results (albeit less pronounced) was found as in Experiment 2. These findings suggest that results with the approach/avoidance task that are interpreted in terms of valence or motivational relevance may be (partly) due to polarity differences. This should be taken into account if these effects are routinely interpreted in terms of valence or motivational relevance. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Submentalizing: Clarifying how domain general processes explain spontaneous perspective-taking
Demonstrations of spontaneous perspective-taking are thought to provide some of the best evidence to date for "implicit mentalizing": the ability to track simple mental states in a fast and efficient manner. However, this evidence has been challenged by a "submentalizing" account proposing that these findings are merely attention-orienting effects. The present research aimed to clarify the cognitive processes responsible by measuring spontaneous perspective-taking while controlling for attention orienting. Four experiments employed the widely used dot perspective task, modified by changing the order in which stimuli were presented so that responses would be less influenced by attention orienting. This modification had different effects on speed and accuracy of responding. For response times, it attenuated spontaneous perspective-taking effects for avatars as well as attention-orienting effects for arrows. For error rates, robust spontaneous perspective-taking effects remained that were unaffected by manipulations targeting attention orienting, but contingent upon there being two competing active task sets (self- and other-perspectives). These results confirm that attention orienting explains response time effects revealed by the original version of the dot perspective task. Error rate results also reveal the crucial role played by domain-general executive processes in enabling selection between perspectives. The absence of independent evidence for implicit mentalizing lends support to a revised submentalizing account that incorporates executive functions alongside attention orienting. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Double training reveals an interval-invariant subsecond temporal structure in the brain
Subsecond temporal perception is critical for understanding time-varying events. Many studies suggest that subsecond timing is an intrinsic property of neural dynamics, distributed across sensory modalities and brain areas. However, our recent finding of the transfer of temporal interval discrimination (TID) learning across sensory modalities supports the existence of a more abstract and conceptual representation of subsecond time that guides the temporal processing of distributed mechanisms. One major challenge to this hypothesis is that TID learning consistently fails to transfer from trained intervals to untrained intervals. To address this issue, here, we examined whether this interval specificity can be removed with double training, a procedure originally developed to eliminate various specificities in visual perceptual learning. Specifically, participants practiced the primary TID task, the learning of which per se was specific to the trained interval (e.g., 100 ms). In addition, they also received exposure to a new interval (e.g., 200 ms) through a secondary and functionally independent tone-frequency discrimination task. This double training successfully enabled complete transfer of TID learning to the new interval, indicating that training improved an interval-invariant component of temporal interval perception, which supports our proposal of an abstract and conceptual representation of subsecond time in the brain. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
The one exception: The impact of statistical regularities on explicit sense of agency
Establishing causal beliefs by observing regularities between actions and events in the environment is a crucial part of goal-directed behavior. Sense of agency (SoA) describes the corresponding experience of generating and controlling actions and subsequent events. Investigating how SoA adapts to situational changes in action-effect contingency, we observed even singular disturbances of perfect action-effect contingencies to yield a striking impact on SoA formation. Moreover, we additionally included disturbances of regularity that are not directly linked to one's own actions. Doing so allowed us to investigate how SoA might be a concept that goes beyond one's own actions toward a more generalized, subjective representation of control regarding environmental events. Indeed, the present experiments establish that, while SoA is highly tuned toward action-effect relations, it is also sensitive to events that occur without one's own action contribution. SoA thus appears to be exceptionally sensitive to singular breakpoints of perfect control, with agents disproportionately incorporating such events during SoA formation while at the same time building on a rich situation model. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Ordinal information, but not metric information, matters in binding feature with depth location in three-dimensional contexts
A basic function of human visual perception is the ability to recognize and locate objects in the environment. It has been shown that two-dimensional (2D) location can reliably bias judgments on object identity (spatial congruency bias; Golomb et al., 2014), suggesting that 2D location information is automatically bound with object features to induce such a bias. Although the binding problem of feature and location has been vigorously studied under various 2D settings, it remains unclear how depth location can be bound with object features in a three-dimensional (3D) setting. Here we conducted five experiments in various 3D contexts using the congruency bias paradigm and found that changes in an object's depth location could influence perceptual judgments on object features differently depending on whether its relative depth order with respect to other objects changed or not. Experiments 1 and 2 showed that judgments on an object's color could be affected by changes in its ordinal depth, but not by changes in its absolute metric depth. Experiment 3 showed that the bias was asymmetric: changes in an object's color did bias judgments on metric-depth location, but not if its depth order had changed. Experiments 4 and 5 investigated whether these findings could be generalized to a peripersonal near space and a large-scale far space, respectively, using more ecological virtual environments. Our findings suggest that ordinal depth plays a special role in feature-location binding: an object may be automatically bound with its relative depth relation, but not with its absolute metric-depth location. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Determinants of shared and idiosyncratic contributions to judgments of faces
Recent work has shown that the idiosyncrasies of the observer can contribute more to the variance of social judgments of faces than the features of the faces. However, it is unclear what conditions determine the relative contributions of shared and idiosyncratic variance. Here, we examine two conditions: type of judgment and diversity of face stimuli. First, we show that for simpler, directly observable judgments that are consistent across observers (e.g., masculinity), shared variance exceeds idiosyncratic variance, whereas for more complex and less directly observable judgments (e.g., trustworthiness), idiosyncratic variance exceeds shared variance. Second, we show that judgments of more diverse face images increase the amount of shared variance. Finally, using machine-learning methods, we examine how stimulus variables (e.g., incidental emotion resemblance, skin luminosity) and observer variables (e.g., race, age) contribute to shared and idiosyncratic variance of judgments. Overall, our results indicate that an observer's age is the most consistent and best predictor of idiosyncratic variance contributions to face judgments measured in the current research. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Toward a better approach for measuring visual-search slopes
The slope of the function relating response times to the number of stimuli in a visual-search display is commonly considered a measure of search speed and is extensively used to test theories of visual cognition. Unfortunately, this important measure is confounded in multiple ways, so that many classical findings in the literature must be called into question. As a way out of this predicament, we here develop a new technique to measure search speed (data collected in 2022 and 2023): Instead of manipulating the number of stimuli that need to be searched via a set-size manipulation, we achieve the intended purpose by placing the search target at different spatial positions with respect to an a-priori-known search order. Reliably inducing such a search order is the main achievement of the present study, but we also report several additional data patterns that might prove instrumental for future research on visual attention. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Salience effects on attentional selection are enabled by task relevance
Attention is a limited resource that must be carefully controlled to prevent distraction. Much research has demonstrated that distraction can be prevented by proactively suppressing salient stimuli to prevent them from capturing attention. It has been suggested, however, that prior studies showing evidence of suppression may have used stimuli that were not truly salient. This claim has been difficult to test because there are currently no agreed-upon methods to demonstrate that an object is salient. The current study aims to help resolve this debate by introducing a new technique to test the role of salience in attentional capture. Low- and high-salience singletons were generated via a manipulation of color contrast. An initial experiment then verified the manipulation of salience using a search task where the color singleton was the target and could only be found via its bottom-up popout. High-salience singletons were found much more easily than low-salience singletons, suggesting that salience powerfully influenced attention when task relevant. A following experiment then used the same stimulus displays but adapted the task so that the singletons were task-irrelevant distractors. Both low- and high-salience singletons were suppressed, suggesting neither was able to capture attention. These results challenge purely stimulus-driven accounts by showing that increasing salience only enhances attentional allocation in situations where the object is also task relevant. The results are instead consistent with the signal suppression hypothesis, which predicts that task-irrelevant singletons can be suppressed. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Mind wandering is associated with worsening attentional vigilance
The tendency for our minds to wander is a pervasive and disruptive influence on continued task performance. Models of sustained attention have implicated mind wandering, moments when attention has turned inwards toward task-unrelated thought, in characteristic patterns of worsening performance with greater time-on-task, known as the vigilance decrement. Despite their theoretical connection, associations between mind wandering and the vigilance decrement have not been investigated systematically. Across two studies (N = 730), we evaluated covariance between within-task change in rates of probe-caught mind wandering and patterns of worsening behavioral task performance that characterize the vigilance decrement. Bivariate growth curve models characterized patterns of intraindividual linear change in mind wandering alongside concomitant changes in task accuracy, response time (RT), and RT variability. Importantly, models assessing the covariance between intraindividual change in mind wandering and behavioral outcome measures confirmed that increases in mind wandering are associated with patterns of worsening behavioral performance with greater time-on-task. In addition, we investigated the role of several moderating factors associated with patterns of within-task change: self-reported task interest and motivation, and individuals' propensity for mind wandering and mindfulness in their daily lives. These factors moderated either the overall level or rate of within-task change in mind wandering. Our results provide support for models of sustained attention that directly implicate mind wandering in worsening behavioral performance with greater time-on-task in continuous performance tasks requiring sustained attention. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Age-related effects of immediate and delayed task switching in a targeted stepping task
The ability to quickly adapt steps while walking is pivotal for safe mobility. In a previous study of immediate switching between two stepping tasks, older adults (OAs) performed worse than young adults (YAs). However, it remained unclear whether this difference was due to an inability to learn the tasks or an inability to quickly switch. Therefore, this study investigated treadmill walking while performing two targeted stepping tasks in conditions with immediate task switching (ITS) versus delayed task switching (DTS). Thirty YAs (aged 26.9 ± 3.1 years) and 32 OAs (aged 70.7 ± 7.3 years) were randomly assigned to either the ITS (ITS_YAs and ITS_OAs) or the DTS (DTS_YAs and DTS_OAs) group. Each group repeatedly switched between Task A (easy) and Task B (difficult) and completed three blocks (ABAB). Delayed switching involved 1-min breaks between both tasks. Results showed that ITS_OAs exhibited significantly more step errors and worse step accuracy, but that DTS_OAs were able to achieve a similar performance as YAs. Our findings underline OAs' impaired ability for quick gait adaptation during targeted stepping tasks, but also their ability to learn the tasks when delayed switching reduces task interference. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
The role of selection history in the learned predictiveness effect
Previous research has shown that cues that are good predictors of relevant outcomes receive more attention than nonpredictive cues. This attentional bias is thought to stem from the different predictive value of cues. However, because successful performance requires more attention to predictive cues, the bias may instead be a lingering effect of previous attention to cues (i.e., a selection history effect). Two experiments assessed the contribution of predictive value and selection history to the bias produced by learned predictiveness. In a first task, participants responded to pairs of cues, only one of which predicted the correct response. A second task was superficially very similar, but the correct response was determined randomly on each trial and participants responded based on some physical characteristic of a target stimulus in each compound. Hence, in this latter task, participants had to pay more attention to the target stimuli, but these stimuli were not consistently associated with a specific response. Results revealed no differences in the attentional bias toward the relevant stimuli in the two tasks, suggesting that the bias induced by learned predictiveness is a consequence of deploying more attention to predictive stimuli during training. Thus, predictiveness may not bias attention by itself, adding nothing over and above the effect expected from selection history. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Evidence against the low-salience account of attentional suppression
Do salient distractors have the power to automatically capture attention? This question has led to a heated debate concerning the role of salience in attentional control. A potential resolution, called the signal suppression hypothesis, has proposed that salient items produce a bottom-up signal that vies for attention, but that salient stimuli can be suppressed via top-down control to prevent the capture of attention. This hypothesis, however, has been criticized on the grounds that the distractors used in the initial supporting studies were weakly salient. It has been difficult to know how seriously to take this low-salience criticism because assertions about high and low salience were made in the absence of a common (or any) measure of salience. The current study used a recently developed psychophysical technique to compare the salience of distractors from two previous studies at the center of this debate. Surprisingly, we found that the original stimuli criticized as having low salience were, if anything, more salient than stimuli from the later studies that purported to increase salience. Follow-up experiments determined exactly why the original stimuli were more salient and tested whether further increasing salience could cause attentional capture as predicted by the low-salience account. Ultimately, these findings challenge purely stimulus-driven accounts of attentional control. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Learning to ignore visual onset distractors hinges on a configuration-dependent coordinate system
Decrement of attentional capture elicited by visual onset distractors, consistent with habituation, has been extensively characterized over the past several years. However, the type of spatial frame of reference according to which such decrement occurs in the brain remains unknown. Here, five related experiments are reported to shed light on this issue. Observers were asked to discriminate the orientation of a tilted line while ignoring a salient but task-irrelevant visual onset that occurred on some trials. The experiments all involved an initial habituation phase, during which capture elicited by the onset distractor progressively decreased, as in prior studies. Importantly, in all experiments, the location of the target and the distractor remained fixed during this phase. After habituation was established, in a final test phase of the various experiments, the spatial arrangement of the target and the distractor was changed to test for the relative contribution to habituation of retinotopic, spatiotopic, and configuration-dependent visual representations. Experiment 1 indicated that spatiotopic representations contribute little, if at all, to the observed decrement in attentional capture. The results from Experiment 2 were compatible with the notion that such capture reduction occurs in either retinotopic- or configuration-specific representations. However, Experiment 3 ruled out the contribution of retinotopic representations, leaving configuration-specific representation as the sole viable interpretation. This conclusion was confirmed by the results of Experiments 4 and 5. In conclusion, visual onset distractors appear to be rejected at a level of the visual hierarchy where visual events are encoded in a configuration-specific or context-dependent manner. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Trial history contributes to the optimal tuning of attention
In visual search tasks, targets are difficult to find when they are similar to the surrounding nontargets. In this scenario, it is optimal to tune attention to target features that maximize the difference between target and nontargets. We investigated whether the optimal tuning of attention is driven by biases arising from previously attended stimuli (i.e., trial history). Consistent with the effects of trial history, we found that optimal tuning was stronger when a single target-nontarget relation was repeated than when two target-nontarget relations alternated randomly. Detailed analysis of blocks with random alternation showed that optimal tuning was stronger when the target-nontarget relation probed in the current trial matched the relation in the previous trial. We evaluated several mechanisms that may underlie the effects of trial history, such as priming of attentional set, switch costs, and sensory adaptation. However, none of these mechanisms could fully explain the pattern of results. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Haptic touch modulates size adaptation aftereffects on the hand
When we interact with objects using our hands, we derive their size through our skin. Prolonged exposure to an object leads to a perceptual size aftereffect: adapting to a larger/smaller object makes a subsequently perceived object appear smaller/larger than its actual size. This phenomenon has been described as haptic, as it involves tactile sensations combined with kinesthetic feedback. However, the exact role of different haptic components in generating this aftereffect remains largely underexplored. Here, we investigated how different aspects of haptic touch influence size perception. After adaptation to a large sphere with one hand and a small sphere with the other, participants touched two test spheres of equal or different sizes and judged which one felt larger. Similar haptic size adaptation aftereffects were observed (a) when participants repeatedly grasped the adapters on and off, (b) when they simply continued to grasp the adapters without further hand movements, and (c) when the adapters were grasped without involving the fingers. All these conditions produced stronger aftereffects than a condition where the palms were simply resting on the adapter. Our findings suggest that the inclusion of grasping markedly increased the aftereffects, highlighting the pivotal role of haptic interactions in determining perceptual size adaptation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Action plan discarding leads to unbinding of action features
Action planning can be construed as the temporary binding of action features to form a representation known as an action file. This file is distinct from other possible, but currently not required, actions of the behavioral repertoire. To further this action file approach, we investigated what happens to an initially planned action that is, however, discarded before execution. In two experiments, we found consistent evidence for a quick unbinding of action features upon discarding. Other possible mechanisms that action discarding might invoke, be it the paradoxical strengthening of a discarded action plan, the selective suppression of the otherwise intact plan, or the global suppression of all subsequent action, were not supported, or at least less consistently so. These findings provide a novel perspective on inhibitory action control, which we discuss with respect to its applications to other instances of such inhibitory control as studied in multitasking, stop-signal, directed forgetting, or response-reprogramming paradigms. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
The influence of affective voice on sound distance perception
Affective stimuli in our environment indicate reward or threat and thereby relate to approach and avoidance behavior. Previous findings suggest that affective stimuli may bias visual perception, but it remains unclear whether similar biases exist in the auditory domain. Therefore, we asked whether affective auditory voices (angry vs. neutral) influence sound distance perception. Two VR experiments (data collection 2021-2022) were conducted in which auditory stimuli were presented via loudspeakers located at positions unknown to the participants. In the first experiment (N = 44), participants actively placed a visually presented virtual agent or virtual loudspeaker in an empty room at the perceived sound source location. In the second experiment (N = 32), participants were standing in front of several virtual agents or virtual loudspeakers and had to indicate the sound source by directing their gaze toward the perceived sound location. Results in both preregistered experiments consistently showed that participants estimated the location of angry voice stimuli at greater distances than the location of neutral voice stimuli. We discuss that neither emotional nor motivational biases can account for these results. Instead, distance estimates seem to rely on listeners' representations regarding the relationship between vocal affect and acoustic characteristics. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
On relative word length and transposed-word effects
It is harder to decide that a sequence of words is ungrammatical when the ungrammaticality is created by transposing two words in a correct sentence (e.g., he wants green these apples), and it is harder to judge that two ungrammatical word sequences are different when the difference is created by transposing two words (e.g., green want these he apples-green these want he apples). In two experiments, we manipulated the relative length of the transposed words such that these words were either the same length (e.g., then you see can it) or different lengths (e.g., then you create can it). The same-length and different-length conditions were matched for syntactic category and word frequency. In Experiment 1 (speeded grammatical decision), we found no evidence for a modulation of transposed-word effects as a function of the relative length of the transposed words. We surmised that this might be due to top-down constraints being the main driving force behind the effects found in the grammatical decision task. However, this was also the case in Experiment 2 (same-different matching with ungrammatical sequences of words), where syntactic constraints were minimized. Given that skilled readers can read sentences composed of words of the same length, our results confirm that word length information alone is not used to encode the order of words in a sequence, especially the order of adjacent words in foveal/parafoveal vision. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Secondary capture: Salience information persistently drives attentional selection
It is well known that attention is captured by salient objects or events. The notion that attention is attracted by salience information present in the visual field is also at the heart of many influential models of attention. These models typically posit a hierarchy of saliency, suggesting that attention progresses from the most to the least salient item in the visual field. However, despite the significance of this claim in various models, research on eye movements challenges the idea that search strictly follows this saliency hierarchy. Instead, eye-tracking studies have suggested that saliency information has a transient impact, only influencing the initial saccade toward the most salient object, and only if executed swiftly after display onset. While these findings on overt eye movements are important, they do not address covert attentional processes occurring before a saccade is initiated. In the current series of experiments, we explored whether there was evidence for secondary capture-whether attention could be captured by another salient item after the initial capture episode. To explore this, we utilized displays with multiple distractors of varying levels of saliency. Our primary question was whether two distractors with different saliency levels would disrupt search more than a single, highly salient distractor. Across three experiments, clear evidence emerged indicating that two distractors interfered more with search than a single salient distractor. This observation suggests that following initial capture, secondary capture by the next most salient distractor occurred. These findings collectively support the idea that covert attention traverses the saliency hierarchy. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Measuring learning and attention to irrelevant distractors in contextual cueing
Visual search usually improves with repeated exposure to a search display. Previous research suggests that such a "contextual cueing" effect may be supported even by aspects of the search display that participants have been explicitly asked to ignore. Based on this evidence, it has been suggested that the development of contextual cueing over trials does not depend on selective attention. In the present series of experiments, we show that the most common strategy used to prevent participants from paying attention to task-irrelevant distractors often results in suboptimal selection. Specifically, we show that visual search is slower when search displays include many irrelevant distractors. Eye-tracking data show that this happens, at least in part, because participants fixate on them. These results cast doubts on previous demonstrations that contextual cueing is independent of selective attention. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Exploring the impact of temporally contiguous action effect on action control performance in typical development and attention-deficit/hyperactivity disorder
Previous work shows a reinforcing impact of action effects on behavior, independent of other reinforcers such as positive outcomes or task success. Action-effect temporal contiguity plays an important role in such reinforcement, possibly indicating a motor-based evaluation of the action-effect causal relationship. In the present study, we aimed to pit the reinforcing impact of an immediate action effect against task success by designing a task in which red and green circle stimuli rapidly descended on the screen. Participants were instructed to respond only when a specific sequence of colored stimuli matched a predefined response rule. The temporal contiguity between the response and a perceptual effect was manipulated. We initially hypothesized an increased action tendency resulting in higher false alarm rates in the immediate (compared to 400 ms lag) action-effect condition. We also expected this pattern to be more pronounced in attention-deficit/hyperactivity disorder compared to typically developing individuals. Contrary to our expectations, results from three experiments showed a consistent pattern of a lower false alarm rate in the immediate compared to the 400 ms lag effect condition across both attention-deficit/hyperactivity disorder and typically developing groups. Additionally, while action-effect temporal contiguity did not significantly alter the overall rate of misses, we observed earlier improvements in both misses and false alarms in the immediate condition during the first blocks. Possible explanations for the complex impact of action effects on action tendency and action control are discussed. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Why are some individuals better at using negative attentional templates to suppress distractors? Exploration of interindividual differences in cognitive control efficiency
Negative templates are based on foreknowledge of distractor features and can lead to more efficient visual search at the group level. However, large individual differences exist in the size of the benefits induced by negative cues. The cognitive factors underlying these interindividual differences remain unknown. Previous research has suggested higher engagement of proactive control for negative templates compared to positive templates. We thus hypothesized that interindividual differences in proactive control efficiency may explain the large variability in negative cue benefits. A large data set made up of data from two previously published studies was reanalyzed (N = 139), with eye movements recorded in 36 participants. Individual proactive control efficiency was measured through reaction time (RT) variability. Participants with higher proactive control efficiency exhibited larger benefits after negative cues across two critical measures: Individuals with higher proactive control showed larger RT benefits following negative compared to neutral cues; similarly, individuals with higher proactive control made fewer first saccades to cued distractor items. No such relationship was observed for positive cues. Our results confirmed the existence of large interindividual differences in the benefits induced by negative attentional templates. Critically, we show that proactive control drives these interindividual differences in negative template use. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Temporal segmentation and "look ahead" simulation: Physical events structure visual perception of intuitive physics
How we perceive the physical world is not only organized in terms of objects, but also structured in time as sequences of events. This is especially evident in intuitive physics, with temporally bounded dynamics such as falling, occlusion, and bouncing demarcating the continuous flow of sensory inputs. While the spatial structure and attentional consequences of physical objects have been well-studied, much less is known about the temporal structure and attentional consequences of physical events in visual perception. Previous work has recognized physical events as units in the mind, and used presegmented object interactions to explore physical representations. However, these studies did not address whether and how perception imposes the kind of temporal structure that carves these physical events to begin with, and the attentional consequences of such segmentation during intuitive physics. Here, we use performance-based tasks to address this gap. In Experiment 1, we find that perception not only spontaneously separates visual input in time into physical events, but also, this segmentation occurs in a nonlinear manner within a few hundred milliseconds at the moment of the event boundary. In Experiment 2, we find that event representations, once formed, use coarse "look ahead" simulations to selectively prioritize those objects that are predictively part of the unfolding dynamics. This rich temporal and predictive structure of physical event representations, formed during vision, should inform models of intuitive physics. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Investigating the nature of spatial codes for different modes of Simon tasks: Evidence from congruency sequence effects and delta functions
Simon effects have been observed to arise from different modes of spatial information (e.g., physical location, arrow direction, and location word). The present study investigated whether different modes of spatial information elicit a unitary set of spatial codes when triggering a spatially corresponding response code. A pair of two different Simon tasks was presented in alternation: location- and arrow-based Simon tasks in Experiments 1 and 2, word- and location-based Simon tasks in Experiment 3, and arrow- and word-based Simon tasks in Experiment 4. Responses were collected using unimanual aimed-movement responses. Cross-task congruency sequence effects (CSEs) were found in Experiments 1 and 2, indicating a shared set of spatial codes between physical locations and arrow directions. Conversely, the absence of CSEs in Experiment 3 suggested that physical locations and location words elicited different sets of spatial codes. In Experiment 4, a CSE was evident in the arrow-based Simon task but not in the word-based one, implying an overlap in the spatial attributes of arrow directions with those of location words. Distributional analyses of the Simon effects revealed that different modes of spatial information yielded distinct temporal patterns of spatial-code activation and dissipation, implying quantitative differences in the Simon effects. The cross-comparisons of the CSE and delta function data indicated that the quantitative similarities in spatial modes did not correspond to the qualitative similarities, suggesting that each set of data reflects different aspects of the nature of the spatial codes. (PsycInfo Database Record (c) 2024 APA, all rights reserved).