Viewed touch influences tactile detection by altering decision criterion
Our tactile perception is shaped not only by somatosensory input but also by visual information. Prior research on the effect of viewing touch on tactile processing has found higher tactile detection rates when paired with viewed touch versus a control visual stimulus. Therefore, some have proposed a vicarious tactile system that activates somatosensory areas when viewing touch, resulting in enhanced tactile perception. However, we propose an alternative explanation: Viewing touch makes the observer more liberal in their decision to report a tactile stimulus relative to not viewing touch, also resulting in higher tactile detection rates. To adjudicate between the two explanations, we examined the effect of viewed touch on tactile sensitivity and decision criterion using signal detection theory. In three experiments, participants engaged in a tactile detection task while viewing a hand being touched or approached by a finger, a red dot, or no stimulus. We found that viewing touch led to a consistent, liberal criterion shift but inconsistent enhancement in tactile sensitivity relative to not viewing touch. Moreover, observing a finger approach the hand was sufficient to bias the criterion. These findings suggest that viewing touch influences tactile performance by altering tactile decision mechanisms rather than the tactile perceptual signal.
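The sensitivity/criterion distinction above can be made concrete with the standard signal-detection formulas, d′ = z(H) − z(FA) and c = −[z(H) + z(FA)]/2. The hit and false-alarm rates below are invented purely for illustration of the pattern the abstract describes (a liberal criterion shift without a sensitivity change):

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity d'
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # decision criterion c
    return d_prime, criterion

# Hypothetical rates: viewing touch raises both hits and false alarms,
# leaving sensitivity unchanged but shifting the criterion liberally
# (more negative c = more willing to report "touch present").
d_touch, c_touch = sdt_measures(0.80, 0.30)  # viewed touch
d_ctrl, c_ctrl = sdt_measures(0.70, 0.20)    # control
```

With these illustrative rates, d′ is identical across conditions while c is lower (more liberal) in the viewed-touch condition, mirroring the reported dissociation.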
Disentangling decision errors from action execution in mouse-tracking studies: The case of effect-based action control
Mouse-tracking is regarded as a powerful technique to investigate latent cognitive and emotional states. However, drawing inferences from this manifold data source carries the risk of several pitfalls, especially when using aggregated data rather than single-trial trajectories. Researchers might reach wrong conclusions because averages lump together two distinct contributions that speak to fundamentally different mechanisms underlying between-condition differences: influences from online processing during action execution and influences from incomplete decision processes. Here, we propose a simple method to assess these factors, thus allowing us to probe whether process-pure interpretations are appropriate. By applying this method to data from 12 published experiments on ideomotor action control, we show that the interpretation of previous results changes when dissociating online processing from decision and initiation errors. Researchers using mouse-tracking to investigate cognition and emotion are therefore well advised to conduct detailed trial-by-trial analyses, particularly when they test for direct leakage of ongoing processing into movement trajectories.
Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions
In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech, or those of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.
Where does the processing of size meet the processing of space?
Previous studies revealed an S-R compatibility effect between physical stimulus size and response location, with faster left (right) responses to small (large) stimuli as compared to the reverse assignments. Here, we investigated the locus of interactions between the processing of size and spatial locations. In Experiment 1, we explored whether stimulus size and stimulus location interact at a perceptual level of processing when responses lack spatiality. The stimuli varied on three feature dimensions (color, size, location), and participants responded vocally to each feature in a separate task. Most importantly, we failed to observe a size-location congruency effect in the color-naming task where S-R compatibility effects were excluded. In Experiment 2, responses to color were spatial, that is, key-presses with the left and right hand. With these responses there was a congruency effect. In addition, we tested the interaction of the size-location compatibility effect with the Simon effect, which is known to originate at the stage of response selection. We observed an interaction between the two effects only with a subsample of participants with slower reaction times (RTs) and a larger size-location compatibility effect in a control condition. Together, the results suggest that the size-location compatibility effect arises at the response selection stage. An extended leaky, competing accumulator model with independent staggered impacts of stimulus size and stimulus location on response selection fits the data of Experiment 2 and specifies how the size-location compatibility effect and the Simon effect can arise during response selection.
Can the left hand benefit from being right? The influence of body side on perceived grasping ability
Right-handed individuals (RHIs) demonstrate perceptual biases towards their right hand, estimating it to be larger and longer than their left. In addition, RHIs estimate that they can grasp larger objects with their right hand than their left. This study investigated whether visual information specifying handedness enhances biases in RHIs' perceptions of their action capabilities. Twenty-two participants were placed in an immersive virtual environment in which self-animated, virtual hands were either presented congruently with their physical hand or mirrored. Following a calibration task, participants estimated their maximum grasp size by adjusting the size of a virtual block until it reached the largest size they thought they could grasp. The results showed that, consistent with research outside of virtual reality, RHIs gave larger estimates of maximum grasp when using their right physical hand than their left. However, this difference remained regardless of how the hand was virtually presented. This finding suggests that proprioceptive feedback may be more important than visual feedback when estimating maximum grasp. In addition, visual feedback on handedness does not appear to enhance biases in perceptions of maximum grasp with the right hand. Considerations for further research into the embodiment of mirrored virtual limbs are discussed.
Monkeys overestimate connected arrays in a relative quantity task: A reverse connectedness illusion
Humans and many other species show consistent patterns of responding when making relative quantity ("more or less") judgments of stimuli. This includes the well-established ratio effect that determines the degree of discriminability among sets of items according to Weber's Law. However, humans and other species are also susceptible to some errors in accurately representing quantity, and these illusions reflect important aspects of the relation of perception to quantity representation. One newly described illusion in humans is the connectedness illusion, in which arrays with items that are connected to each other tend to be underestimated relative to arrays without such connection. In this pre-registered report, we assessed whether this illusion occurred in other species, testing rhesus macaque monkeys and capuchin monkeys. Contrary to our pre-registered predictions, monkeys showed an opposite bias to humans, preferring to select arrays with connected items as being more numerous. Thus, monkeys do not show this illusion to the same extent as humans.
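The ratio effect invoked above follows from Weber's Law: discriminability of two quantities depends on their relative, not absolute, difference. A minimal sketch of this decision rule, using an illustrative (not empirically derived) Weber fraction:

```python
def weber_discriminable(n1: int, n2: int, weber_fraction: float = 0.2) -> bool:
    """Judge two quantities discriminable when their relative difference
    exceeds a Weber fraction (the value 0.2 is purely illustrative)."""
    return abs(n1 - n2) / max(n1, n2) >= weber_fraction

# An 8-vs-16 comparison (ratio 0.5) is easy to discriminate,
# while 14 vs 16 (ratio 0.875) falls below the illustrative threshold.
easy = weber_discriminable(8, 16)
hard = weber_discriminable(14, 16)
```

Under this rule, doubling both set sizes leaves discriminability unchanged, which is the signature ratio dependence the abstract refers to.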
Correction to: On the relationship between spatial attention and semantics in the context of a Stroop paradigm
Parafoveal N400 effects reveal that word skipping is associated with deeper lexical processing in the presence of context-driven expectations
Readers are able to begin processing upcoming words before directly fixating them, and in some cases skip words altogether (i.e., never fixating them). However, the exact mechanisms and recognition thresholds underlying skipping decisions are not entirely clear. In the current study, we test whether skipping decisions reflect instances of more extensive lexical processing by recording neural language processing (via electroencephalography; EEG) and eye movements simultaneously, and we split trials based on target word-skipping behavior. To test lexical processing of the words, we manipulated the orthographic and phonological relationship between upcoming preview words and a semantically correct (and in some cases, expected) target word using the gaze-contingent display change paradigm. We also manipulated the constraint of the sentences to investigate the extent to which the identification of sublexical features of words depends on a reader's expectations. We extracted fixation-related brain potentials (FRPs) during the fixation on the preceding word (i.e., in response to parafoveal viewing of the manipulated previews). We found that word skipping is associated with larger neural responses (i.e., N400 amplitudes) to semantically incongruous words that did not share a phonological representation with the correct word, and this effect was only observed in high-constraint sentences. These findings suggest that word skipping can be reflective of more extensive linguistic processing, but in the absence of expectations, word skipping may occur based on less fine-grained linguistic processing and be more reflective of identification of plausible or expected sublexical features rather than higher-level lexical processing (e.g., semantic access).
Temporal error monitoring: Does agency matter?
Error monitoring is the ability to report one's errors without relying on feedback. Although error monitoring is investigated mostly with choice tasks, recent studies have discovered that participants also parametrically keep track of the magnitude and direction of their temporal, spatial, and numerical judgment errors. We investigated whether temporal error monitoring relies on internal generative processes that lead to the to-be-judged first-order timing performance. We hypothesized that if endogenous processes underlie temporal error monitoring, one can monitor timing errors in emitted but not observed timing behaviors. We conducted six experiments to test this hypothesis. The first two experiments showed that confidence ratings were negatively related to error magnitude only in emitted behaviors, but error directionality judgments of observed behaviors were more precise. Experiment 3 replicated these effects even after controlling for the motor aspects of first-order timing performance. The last three experiments demonstrated that belief of agency (i.e., believing that the error belongs to the self or someone else) was critical in accounting for the confidence rating effects observed in the first two experiments. The precision of error directionality judgments was higher in the non-agency condition. These results show that confidence is sensitive to the belief of agency, whereas short-long judgments are sensitive to the actual agency of the timing behavior (i.e., whether the behavior was emitted by the self or someone else).
What do we see behind an occluder? Amodal completion of statistical properties in complex objects
When a spiky object is occluded, we expect its spiky features to continue behind the occluder. Although many real-world objects contain complex features, it is unclear how more complex features are amodally completed and whether this process is automatic. To investigate this issue, we created pairs of displays with identical contour edges up to the point of occlusion, but with occluded portions exchanged. We then asked participants to search for oddball targets among distractors and tested whether relations between searches involving occluded displays would match better with relations between searches involving completions that are either globally consistent or inconsistent with the visible portions of these displays. Across two experiments involving simple and complex shapes, search times involving occluded displays matched better with those involving globally consistent compared with inconsistent displays. Analogous analyses on deep networks pretrained for object categorization revealed a similar pattern of results for simple but not complex shapes. Thus, deep networks seem to extrapolate simple occluded contours but not more complex contours. Taken together, our results show that amodal completion in humans is sophisticated and can be based on extrapolating global statistical properties.
Effect of attention on ensemble perception: Comparison between exogenous attention, endogenous attention, and depth
Ensemble perception is an important ability of human beings that allows one to extract summary information for scenes and environments that contain information far exceeding the processing limit of the visual system. Although attention has been shown to bias ensemble perception, two important questions remain unclear: (1) whether direct manipulations of different types of spatial attention could produce similar effects on ensembles and (2) whether factors potentially influencing the attention distribution, such as depth perception, could evoke an indirect effect of attention on ensemble representation. This study aims to address these questions. In Experiments 1 and 2, two types of precues were used to evoke exogenous and endogenous attention, respectively, and ensemble color perception was examined. We found that both exogenous and endogenous attention biased ensemble representation towards the attended items, and the latter produced a greater effect. In Experiments 3 and 4, we examined whether depth perception could affect color ensembles by indirectly influencing attention allocation in 3D space. The items were separated in two depth planes, and no explicit cues were applied. The results showed that the color ensemble was biased towards closer items when depth information was task relevant. This suggests that ensemble perception is naturally biased in 3D space, probably through the mechanism of attention. Computational modeling consistently showed that attention exerted a direct shift on the ensemble statistics rather than averaging the feature values over the cued and noncued items, providing evidence against an averaging account based on individual item perception.
Crossmodal correspondence of elevation/pitch and size/pitch is driven by real-world features
Crossmodal correspondences are consistent associations between sensory features from different modalities, with some theories suggesting they may either reflect environmental correlations or stem from innate neural structures. This study investigates this question by examining whether retinotopic or representational features of stimuli induce crossmodal congruency effects. Participants completed an auditory pitch discrimination task paired with visual stimuli varying in their sensory (retinotopic) or representational (scene integrated) nature, for both the elevation/pitch and size/pitch correspondences. Results show that only representational visual stimuli produced crossmodal congruency effects on pitch discrimination. These results support an environmental statistics hypothesis, suggesting crossmodal correspondences rely on real-world features rather than on sensory representations.
The perceptual and mnemonic effects of ensemble representation on individual size representation
Our visual world consists of multiple objects, necessitating the identification of individual objects. Nevertheless, the representations of visual objects often influence one another. Even when we selectively attend to a subset of visual objects, the representations of surrounding items are encoded and influence the processing of the attended item(s). However, it remains unclear whether the effect of group ensemble representation on individual item representation occurs at the perceptual encoding phase, during the memory maintenance period, or both. Therefore, the current study conducted visual psychophysics experiments to investigate the contributions of perceptual and mnemonic bias to the observed effect of ensemble representation on individual size representation. Across five experiments, we found a consistent pattern of repulsive ensemble bias, such that the size of an individual target circle was consistently reported to be smaller than it actually was when presented alongside other circles with larger mean size, and vice versa. There was a perceptual component to the bias, but mnemonic factors also influenced its magnitude. Specifically, the repulsion bias was strongest with a short retention period (0-50 ms), then reduced within a second to a weaker magnitude that remained stable for a longer retention period (5,000 ms). Such patterns of results persisted when we facilitated the processing of ensemble representation by increasing the set size (Experiment 1B) or post-cueing the target circle so that attention was distributed across all items (Experiment 2B).
Eye-tracking analysis of attentional disengagement in phobic and non-phobic individuals
This study investigated threat-related attention biases using a new visual search paradigm with eye tracking, which allows for measuring attentional disengagement in isolation. This is crucial as previous studies have been unable to distinguish between engagement, disengagement, and behavioral freezing. Thirty-three participants (M = 28.75 years, SD = 8.98; 21 women) with self-reported specific phobia (spiders, snakes, and pointed objects) and their matched controls (M = 28.38 years, SD = 8.66; 21 women) took part in the experiment. The participants were instructed to initially focus on a picture in the center of the screen, then search for a target picture in an outer circle consisting of six images, and respond via a button press whether the object in the target picture was oriented to the left or right. We found that phobic individuals show delayed disengagement and slower decision times compared with non-phobic individuals, regardless of whether the stimulus was threat-related or neutral. These results indicate that phobic individuals tend to exhibit poorer attentional control mechanisms and problems inhibiting irrelevant information. We also confirmed a threat-unrelated shared feature effect with complex stimuli (delayed disengagement when an attended stimulus and an unattended target share common stimulus features). This process, which has not yet been considered, might play a role in various experimental setups investigating attentional disengagement. These findings are important, as good attentional control may serve as a protective mechanism against anxiety disorders.
Influence of musical training on temporal productions when using fast and slow counting paces
The aim of the study was to assess the ability to maintain a steady pace during a counting task, aloud or silently, when a fast (28 counts, one every 900 ms) or slow (18 counts, one every 1,400 ms) pace is adopted (target = 25,200 ms), and to test whether this ability is the same for musicians and nonmusicians. The study analyzes the mean and variability of 30 temporal productions. The results show more variability (a larger coefficient of variation: standard deviation/mean production) in the condition where the pace is slow, a finding consistent with previous reports with this task. This finding applies here in both the aloud and silent counting conditions and, most importantly, applies to both musicians and nonmusicians. The results also indicate that there is no significant difference for the absolute error (|mean production - target duration|). In brief, the capacity to keep variability low when maintaining a pace seems to benefit from musical training, and this training difference does not depend on counting aloud versus silently and is not restricted to brief intervals.
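The two dependent measures defined in the abstract, the coefficient of variation (standard deviation/mean production) and the absolute error (|mean production − target duration|), amount to simple arithmetic over the series of productions. A minimal sketch with invented data (the production values below are illustrative, not from the study):

```python
from statistics import mean, stdev

def cv_and_abs_error(productions_ms: list[int], target_ms: int = 25_200) -> tuple[float, float]:
    """Coefficient of variation (SD/mean) and absolute error
    (|mean production - target|) for a series of temporal productions."""
    m = mean(productions_ms)
    return stdev(productions_ms) / m, abs(m - target_ms)

# Hypothetical productions: the slow-pace series is more variable,
# mirroring the larger coefficient of variation reported for slow pacing.
fast = [25_000, 25_400, 24_900, 25_300]
slow = [23_800, 26_700, 24_400, 26_300]
cv_fast, err_fast = cv_and_abs_error(fast)
cv_slow, err_slow = cv_and_abs_error(slow)
```

Note that the two series can differ markedly in variability while their absolute errors stay comparable, which is exactly the dissociation the abstract reports.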
Contextual cuing survives an interruption from an endogenous cue for attention
Three experiments explored how the repetition of a visual search display guides search during contextual cuing under conditions in which the search process is interrupted by an instructional (endogenous) cue for attention. In Experiment 1, participants readily learned about repeated configurations of visual search, before being presented with an endogenous cue for attention towards the target on every trial. Participants used this cue to improve search times, but the repeated contexts continued to guide attention. Experiment 2 demonstrated that the presence of the endogenous cue did not impede the acquisition of contextual cuing. Experiment 3 confirmed the hypothesis that the contextual cuing effect relies largely on localized distractor contexts, following the guidance of attention. Together, the experiments point towards an interplay between two drivers of attention: after the initial guidance of attention, memory representations of the context continue to guide attention towards the target. This suggests that the early part of visual search is inconsequential for the development and maintenance of the contextual cuing effect, and that memory representations are flexibly deployed when the search procedure is dramatically interrupted.
Inhibition of return in a 3D scene depends on the direction of depth switch between cue and target
Inhibition of return (IOR) is a phenomenon that reflects slower target detection when the target appears at a previously cued rather than uncued location. In the present study, we investigated the extent to which IOR occurs in three-dimensional (3D) scenes comprising pictorial depth information. Peripheral cues and targets appeared on top of 3D rectangular boxes placed on the surface of a textured ground plane in virtual space. When the target appeared at a farther location than the cue, the magnitude of the IOR effect in the 3D condition remained similar to the values found in the two-dimensional (2D) control condition (IOR was depth-blind). When the target appeared at a nearer location than the cue, the magnitude of the IOR effect was significantly attenuated (IOR was depth-specific). The present findings address inconsistencies in the literature on the effect of depth on IOR and support the notion that visuospatial attention exhibits a near-space advantage even in 3D scenes consisting entirely of pictorial depth information.
Attention focused on memory: The episodic flanker effect with letters, words, colors, and pictures
We report 10 experiments exploring the proposition that memory retrieval is perceptual attention turned inward. The experiments adapt the Eriksen and Eriksen perceptual flanker effect to a memory task in which subjects must decide whether a cued item in a probe display appeared in the same position in a memory list. Previous research with this episodic flanker task found distance and compatibility effects like those in the perceptual flanker task, suggesting that the same attentional spotlight is turned inward in memory retrieval. The previous experiments used lists of six consonants. The experiments reported here were designed to generalize the results to a broader range of conditions, from letters to words, colors, and pictures, and from set size 6 to set sizes of 4 and 5. Experiments 1-4 varied distance and set size with lists of four, five, or six letters, words, colors, and pictures, respectively. The distance effect was observed with all materials and all set sizes. Experiments 5-8 varied compatibility by presenting context items in the probe that were either the same as the memory list (and therefore compatible with "yes" responses and incompatible with "no" responses) or different from the memory list (and therefore incompatible with "yes" responses and compatible with "no" responses). We found compatibility effects with all materials and all set sizes. These results support the proposition that memory retrieval is attention turned inward. Turned inward or outward, attention is a general process that applies the same computations to different kinds of materials.
Anisotropies related to representational gravity
Four experiments examined whether representational gravity, in which memory for the location of a previously viewed target is displaced in the direction of implied gravitational attraction, occurs uniformly across a target. Participants viewed stationary, vertically moving, or horizontally moving targets of different sizes and at different heights within the picture plane. After a target vanished, participants indicated the remembered location of the top edge or bottom edge of that target. Significant anisotropies were found, as the remembered location of the top edge was displaced downward, whereas the remembered location of the bottom edge was not displaced or was displaced upward. Anisotropies along the vertical axis were not influenced by whether participants knew prior to target presentation which edge to remember or by whether targets were stationary or moved vertically, although there was a trend for anisotropies along the vertical axis to be reduced when targets moved horizontally. Larger targets and targets higher in the picture plane resulted in larger displacement when targets were stationary, although effects of size and height were diminished when targets were moving. If the top edge and bottom edge of a target are considered analogous to the trailing edge and leading edge of a moving target, respectively, then anisotropies related to representational gravity are similar to anisotropies previously reported for representational momentum for horizontally moving targets (as direction of implied gravitational attraction is downward). The existence of such anisotropies has implications for the representation of space and for the localization of and interaction with stimuli in the environment.
Enhanced salience of edge frequencies in auditory pattern recognition
Within musical scenes or textures, sounds from certain instruments capture attention more prominently than others, hinting at biases in the perception of multisource mixtures. Besides musical factors, these effects might be related to frequency biases in auditory perception. Using an auditory pattern-recognition task, we studied the existence of such frequency biases. Mixtures of pure tone melodies were presented in six frequency bands. Listeners were instructed to assess whether the target melody was part of the mixture or not, with the target melody presented either before or after the mixture. In Experiment 1, the mixture always contained melodies in five out of the six bands. In Experiment 2, the mixture contained three bands that stemmed from the lower or the higher part of the range. As expected, Experiments 1 and 2 both highlighted strong effects of presentation order, with higher accuracies for the target presented before the mixture. Notably, Experiment 1 showed that edge frequencies yielded superior accuracies compared with center frequencies. Experiment 2 corroborated this finding by yielding enhanced accuracies for edge frequencies irrespective of the absolute frequency region. Our results highlight the salience of sound elements located at spectral edges within complex musical scenes. Overall, this implies that neither the high voice superiority effect nor the insensitivity to bass instruments observed by previous research can be explained by absolute frequency biases in auditory perception.