Spatial Cognition and Computation

Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective
Pyers JE, Perniss P and Emmorey K
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically, but must conventionalize whose viewpoint the spatial relation is described from: the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
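To make the conventionalization problem concrete, here is a minimal sketch (Python; the function name and setup are ours, not the authors' stimuli or coding scheme): when a signer and an addressee face each other, a viewpoint-dependent label must be mirrored to move between their egocentric frames, so the two parties must converge on one frame.

# Illustrative only: the signer's egocentric "left" falls on the
# addressee's right when the two face each other.
FLIP_WHEN_FACING = {"left": "right", "right": "left"}

def from_signer_to_addressee(label: str, face_to_face: bool = True) -> str:
    """Map a left/right label produced from the signer's viewpoint
    into the addressee's egocentric frame."""
    return FLIP_WHEN_FACING.get(label, label) if face_to_face else label

assert from_signer_to_addressee("left") == "right"  # mirrored when facing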
Learning to Reach to Locations Encoded from Imaging Displays
Wu B, Klatzky RL and Stetten G
The present study investigated how people learn to correct errors in actions directed toward cognitively encoded spatial locations. Subjects inserted a stylus to reach a hidden target localized by means of ultrasound imaging and portrayed with a scaled graph. As was found previously (Wu et al., 2005), subjects initially underestimated the target location but corrected their responses when given training with feedback. Three experiments were conducted to examine whether the error correction occurred at (1) the mapping from the input to a mental representation of target location; (2) the mapping from the representation of target location to the intended insertion response; or (3) the mapping from intended response to action. Experiments 1 and 3 disconfirmed Mappings 1 and 3, respectively, by showing that training did not alter independent measures of target localization or of aiming. Experiment 2 showed that the output of Mapping 2, the planned response (measured as the initial insertion angle), was corrected over trials, and that the correction magnitude predicted the response to a transfer stimulus with a new represented location.
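As a hedged illustration of the three-mapping account (a sketch with invented numbers, not the authors' model), the logic of Experiment 2 can be expressed as a pipeline in which training tunes only Mapping 2, so the learned correction transfers to a new represented location:

def mapping1_percept_to_representation(displayed_depth_cm: float) -> float:
    # Input -> mental representation of target depth (assumed veridical here).
    return displayed_depth_cm

def mapping2_representation_to_plan(represented_depth_cm: float,
                                    gain: float) -> float:
    # Representation -> planned insertion; `gain` is what training retunes.
    return gain * represented_depth_cm

def mapping3_plan_to_action(planned_depth_cm: float) -> float:
    # Plan -> executed action (assumed unchanged by training).
    return planned_depth_cm

gain = 0.8                     # initial underestimation (hypothetical value)
for target in [10.0] * 5:      # feedback trials at a hypothetical 10 cm target
    executed = mapping3_plan_to_action(
        mapping2_representation_to_plan(
            mapping1_percept_to_representation(target), gain))
    gain += 0.5 * (target - executed) / target  # error-driven correction

# The corrected Mapping 2 predicts the response to a transfer target:
print(mapping2_representation_to_plan(14.0, gain))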
Three dimensional spatial memory and learning in real and virtual environments
Oman CM, Shebilske WL, Richards JT, Tubre TC, Beall AC and Natapoff A
Human orientation and spatial cognition depend in part on our ability to remember sets of visual landmarks and imagine their relationship to us from a different viewpoint. We normally make large body rotations only about a single axis aligned with gravity. However, astronauts who try to recognize environments rotated in 3 dimensions report that their terrestrial ability to imagine the relative orientation of remembered landmarks does not easily generalize. The ability of human subjects to learn to mentally rotate a simple array of six objects around them was studied in 1-G laboratory experiments. Subjects were tested in a cubic chamber (n = 73) and an equivalent virtual environment (n = 24), analogous to the interior of a space station node module. A picture of an object was presented at the center of each wall. Subjects had to memorize the spatial relationships among the six objects and learn to predict the direction to a specific object if their body were in a specified 3D orientation. Percent-correct learning curves and response times were measured. Most subjects achieved high accuracy from a given viewpoint within 20 trials, regardless of roll orientation, and learned a second view direction with equal or greater ease. Performance of the subject group that used a head-mounted display and head tracker was qualitatively similar to that of the second group tested in a physical node simulator. Body position with respect to gravity had a significant but minor effect on performance of each group, suggesting that results may also apply to weightless situations. A correlation was found between task performance measures and conventional paper-and-pencil tests of field independence and 2- and 3-dimensional figure rotation ability.
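The prediction task has a simple geometric core, sketched below (Python/NumPy; the object names, layout, and orientation are hypothetical, not the experimental stimuli): the direction to a remembered object under a new 3D body orientation is the chamber-fixed direction rotated into the new body frame.

import numpy as np

# Chamber-fixed unit vectors to six wall-centered objects (hypothetical).
objects = {
    "clock": np.array([1.0, 0.0, 0.0]),  "plant": np.array([-1.0, 0.0, 0.0]),
    "lamp":  np.array([0.0, 1.0, 0.0]),  "phone": np.array([0.0, -1.0, 0.0]),
    "light": np.array([0.0, 0.0, 1.0]),  "grill": np.array([0.0, 0.0, -1.0]),
}

def rot_z(deg: float) -> np.ndarray:
    """Rotation about the chamber's vertical axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_body_in_world = rot_z(90.0)   # body yawed 90 degrees in the chamber
for name, d_world in objects.items():
    d_body = R_body_in_world.T @ d_world   # world-to-body change of coordinates
    print(name, np.round(d_body, 2))       # egocentric direction to each object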
Transformations and representations supporting spatial perspective taking
Yu AB and Zacks JM
Spatial perspective taking is the ability to reason about spatial relations relative to another's viewpoint. Here, we propose a mechanistic hypothesis that relates mental representations of one's viewpoint to the transformations used for spatial perspective taking. We test this hypothesis using a novel behavioral paradigm that assays patterns of response time and variation in those patterns across people. The results support the hypothesis that people maintain a schematic representation of the space around their body, update that representation to take another's perspective, and thereby reason about the space around the other person's body. This is a powerful computational mechanism that can support imitation, coordination of behavior, and observational learning.
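A minimal sketch of the kind of transformation the hypothesis appeals to (2D for simplicity; coordinates and headings are invented, not the paradigm's stimuli): an object location is re-expressed in another viewer's frame by translating to their position and rotating into their heading.

import math

def to_other_frame(obj_xy, other_xy, heading_deg):
    """Express obj_xy in a frame centered on another viewer, with
    +x along their facing direction and +y toward their left.
    heading_deg is the facing angle, counterclockwise from world +x."""
    dx, dy = obj_xy[0] - other_xy[0], obj_xy[1] - other_xy[1]
    h = math.radians(heading_deg)
    forward = dx * math.cos(h) + dy * math.sin(h)
    left = -dx * math.sin(h) + dy * math.cos(h)
    return forward, left

# An object 5 units north of the origin is dead ahead of a viewer at the
# origin facing north, but squarely to the LEFT of one facing east:
print(to_other_frame((0.0, 5.0), (0.0, 0.0), heading_deg=90.0))  # (5.0, 0.0)
print(to_other_frame((0.0, 5.0), (0.0, 0.0), heading_deg=0.0))   # (0.0, 5.0)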
Which way is the bookstore? A closer look at the judgments of relative directions task
Huffman DJ and Ekstrom AD
We present a detailed analysis of the judgments of relative direction (JRD) task, a widely used assay in human spatial cognition. We conducted three experiments involving virtual navigation interspersed with the JRD task, and included confidence judgments and map drawing as additional metrics. We also present a technique for assessing the similarity of the cognitive representations underlying performance on the JRD and map-drawing tasks. Our results support the construct validity of the JRD task and its connection to allocentric representation. Additionally, we found that chance performance on the JRD task depends on the distribution of the angles of participants' responses, rather than being a constant 90 degrees. Accordingly, we present a method for better determining chance performance.
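One way to implement a distribution-sensitive chance estimate of the kind argued for here (a sketch under our own assumptions, not necessarily the authors' exact procedure) is to permute each participant's response angles against the correct angles, so the chance level reflects that participant's own response distribution:

import random

def angular_error(a: float, b: float) -> float:
    """Absolute angular difference in degrees, in [0, 180]."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def chance_error(responses, correct, n_perm=10_000, seed=0):
    """Mean angular error expected by chance, estimated by shuffling
    this participant's responses against the correct angles."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_perm):
        shuffled = responses[:]
        rng.shuffle(shuffled)
        means.append(sum(angular_error(r, c)
                         for r, c in zip(shuffled, correct)) / len(correct))
    return sum(means) / n_perm

# Hypothetical data: clustered responses shift chance away from 90 degrees.
print(chance_error(responses=[10, 20, 15, 5, 12, 18],
                   correct=[0, 45, 90, 135, 180, 270]))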
Visually Scaling Distance from Memory: Do Visible Midline Boundaries Make a Difference?
Hund AM, Plumert JM and Recker KM
We examined how 4- to 5-year-old children and adults use perceptual structure (visible midline boundaries) to visually scale distance. Participants completed scaling and no-scaling tasks using learning and test mats that were 16 and 64 inches. No boundaries were present in Experiment 1. Children and adults had more difficulty in the scaling task than in the no-scaling task when the test mat was 64 inches but not when it was 16 inches. Experiment 2 was identical except that visible midline boundaries were present. Again, participants had more difficulty in the scaling task than in the no-scaling task when the test mat was 64 inches, suggesting that they used the test mat edges (not the midline boundary) as perceptual anchors when scaling from the learning mat to the test mat.
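The scaling demand itself is simple arithmetic, as this illustrative snippet shows (the learned distance is hypothetical): a location learned on the 16-inch mat must be multiplied by a factor of 64/16 = 4 to be reproduced at the corresponding location on the 64-inch test mat.

learning_mat, test_mat = 16.0, 64.0   # mat sizes in inches, from the abstract
learned_distance_from_edge = 3.5      # inches on the learning mat (invented)
scale_factor = test_mat / learning_mat
print(learned_distance_from_edge * scale_factor)  # 14.0 inches on the test mat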
Test of a Relationship between Spatial Working Memory and Perception of Symmetry Axes in Children 3 to 6 Years of Age
Wu Y and Schutte AR
Children's memory responses to a target location in a homogeneous space change from being biased toward the midline of the space to being biased away from it. According to Dynamic Field Theory (DFT; e.g., Schutte & Spencer, 2009), improvement in the perception of the midline symmetry axis contributes to this transition. Simulations of DFT using a 3-year-old parameter setting showed that memory biases at intermediate target locations were related to the perception of midline. Empirical results indicated that better perception of midline was associated with greater memory biases away from midline at the 20° and 40° targets in 3-year-olds, and with greater biases away at the 60° target in 4- to 6-year-olds. These findings support DFT in that perception of midline is associated with memory biases.
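As a deliberately toy reduction of the DFT account (a point-estimate drift model with invented parameters, not the paper's field simulations), repulsion from a perceived midline axis yields larger biases away from midline for near-midline targets, qualitatively like the pattern reported for 3-year-olds:

import math

def remembered_location(target_deg, midline_repulsion, steps=200, dt=0.05):
    """Drift a remembered target location under repulsion from a perceived
    midline axis at 0 degrees (all parameters invented for illustration)."""
    x = float(target_deg)
    for _ in range(steps):
        # Repulsive force is strongest near (but not on) midline and
        # falls off for far targets.
        force = midline_repulsion * x * math.exp(-(x ** 2) / (2 * 30.0 ** 2))
        x += dt * force
    return x

for target in (20, 40, 60):
    bias = remembered_location(target, midline_repulsion=0.02) - target
    print(f"target {target} deg -> bias away from midline: {bias:+.1f} deg")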
Unraveling the contribution of left-right language on spatial perspective taking
Abarbanell L and Li P
We examine whether acquiring left/right language affects children's ability to take a non-egocentric left-right perspective. In Experiment 1, we tested 10- to 13-year-old Tseltal (Mayan) and Spanish-speaking children from the same community on a task that required them to retrieve a coin they had previously seen hidden in one of four boxes to the left/right/front/back of a toy sheep after the entire array was rotated out of view. Their performance on the left/right boxes correlated positively with their comprehension and use of left-right language. In Experiment 2, we found that training Tseltal-speaking children to apply left-right lexical labels to represent the location of the coin improved performance, but improvement was more robust among a second group of children trained to use gestures instead.
The influence of landmark visualization style on task performance, visual attention, and spatial learning in a real-world navigation task
Kapaj A, Hilton C, Lanini-Maggi S and Fabrikant SI
Depicting landmarks on mobile maps is an increasingly popular countermeasure to the negative effect that navigation aids have on spatial learning: landmarks guide visual attention and facilitate map-to-environment information matching. However, the most effective method of visualizing landmarks on mobile map aids remains an open question. We conducted an outdoor real-world navigation study to evaluate the influence of realistic vs. abstract 3D landmark visualization styles on wayfinders' navigation performance, visual attention, and spatial learning. While navigating with realistic landmarks, low-spatial-ability wayfinders focused more on the landmarks in the environment and demonstrated improved knowledge of directions between landmarks. Our findings emphasize the importance of visual realism when enriching navigation aids with landmarks to guide attention and enhance spatial learning for low-spatial-ability wayfinders.