Maintenance of subcategorical information during speech perception: revisiting misunderstood limitations
Accurate word recognition is facilitated by context. Some relevant context, however, occurs after the word. Rational use of such "right context" would require listeners to have maintained subcategorical information about the word, thus allowing them to consider possible alternatives when they encounter relevant right context. A classic study continues to be widely cited as evidence that subcategorical information maintenance is limited to highly ambiguous percepts and short time spans (Connine et al., 1991). More recent studies, however, using other phonological contrasts, and sometimes other paradigms, have returned mixed results. We identify procedural and analytical issues that provide an explanation for these conflicting results. We address these issues in two reanalyses of previously published results and two new experiments. In all four cases, we find consistent evidence against both limitations reported in Connine et al.'s seminal work, at least within the classic paradigms. Key to our approach is the introduction of an ideal observer framework to derive the normative predictions for human word recognition expected if listeners maintain subcategorical information about preceding speech input and integrate it rationally with subsequent context. We test these predictions in Bayesian mixed-effects analyses, including at the level of individual participants. While we find that the ideal observer fits participants' behavior better than models based on previously proposed limitations, we also find one previously unrecognized aspect of listeners' behavior that is unexpected under all existing models, including the ideal observer.
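The ideal observer logic described above can be sketched as a simple Bayesian update: the listener keeps a graded posterior over word candidates computed from the acoustic cue, and revises it when right context arrives. This is an illustrative toy under assumed values, not the authors' model: the cue dimension (voice onset time), the category means and spread, and the context prior are all hypothetical.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian likelihood density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_b(vot, prior_b=0.5, mu_b=0.0, mu_p=50.0, sigma=15.0):
    """P(/b/-word | acoustic cue): the graded, subcategorical percept the
    listener would need to maintain (all parameter values are illustrative)."""
    lik_b = gauss(vot, mu_b, sigma)
    lik_p = gauss(vot, mu_p, sigma)
    return lik_b * prior_b / (lik_b * prior_b + lik_p * (1 - prior_b))

# An ambiguous token midway between the categories leaves the posterior near 0.5.
p_initial = posterior_b(25.0)

# Later "right context" favoring the /p/-word (a hypothetical prior of 0.2 on
# the /b/-word) is integrated with the maintained subcategorical information,
# not with a discrete category decision.
p_after_context = posterior_b(25.0, prior_b=0.2)
```

Under the limitations attributed to Connine et al. (1991), the update in the last line would instead operate on an already-categorized percept, losing the graded information.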
Understanding words in context: A naturalistic EEG study of children's lexical processing
When listening to speech, adults rely on context to anticipate upcoming words. Evidence for this comes from studies demonstrating that the N400, an event-related potential (ERP) that indexes ease of lexical-semantic processing, is influenced by the predictability of a word in context. We know far less about the role of context in children's speech comprehension. The present study explored lexical processing in adults and 5-10-year-old children as they listened to a story. ERPs time-locked to the onset of every word were recorded. Each content word was coded for frequency, semantic association, and predictability. In both children and adults, N400s reflect word predictability, even when controlling for frequency and semantic association. These findings suggest that both adults and children use top-down constraints from context to anticipate upcoming words when listening to stories.
Lexically-specific syntactic restrictions in second-language speakers
In two structural priming experiments, we investigated the representations of lexically-specific syntactic restrictions of English verbs for highly proficient and immersed second language (L2) speakers of English. We considered the interplay of two possible mechanisms: generalization from the first language (L1) and statistical learning within the L2 (both of abstract structure and of lexically-specific information). In both experiments, L2 speakers with either Germanic or Romance languages as L1 were primed to produce dispreferred double-object structures involving non-alternating dative verbs. Priming occurred from ungrammatical double-object primes involving different non-alternating verbs (Experiment 1) and from grammatical primes involving alternating verbs (Experiment 2), supporting abstract statistical learning within the L2. However, we found no differences between L1-Germanic speakers (who have the double object structure in their L1) and L1-Romance speakers (who do not), inconsistent with the prediction for between-group differences of the L1-generalization account. Additionally, L2 speakers in Experiment 2 showed a lexical boost: There was stronger priming after (dispreferred) non-alternating same-verb double object primes than after (grammatical) alternating different-verb primes. Such lexically-driven persistence was also shown by L1 English speakers (Ivanova et al., 2012a) and may underlie statistical learning of lexically-dependent structural regularities. We conclude that lexically-specific syntactic restrictions in highly proficient and immersed L2 speakers are shaped by statistical learning (both abstract and lexically-specific) within the L2, but not by generalization from the L1.
When Time Shifts the Boundaries: Isolating the Role of Forgetting in Children's Changing Category Representations
In studies of children's categorization, researchers have typically studied how encoding characteristics of exemplars contribute to children's generalization. However, it is unclear whether children's internal cognitive processes alone, independent of new information, may also influence their generalization. Thus, we examined the role that one cognitive process, forgetting, plays in shaping children's category representations by conducting three experiments. In the first two experiments, participants (n = 37, M = 4.02 years; n = 32, M = 4.48 years) saw a novel object labeled by the experimenter and then saw five new objects with between one and five features changed from the learned exemplar. The experimenter asked whether each object was a member of the same category as the exemplar; children saw the five new objects either immediately or after a five-minute delay. Children endorsed category membership at higher rates at immediate test than at delayed test, suggesting that children's category representations became narrower over time. In Experiment 3, we investigated forgetting as a key mechanism underlying the narrowing found in Experiments 1 and 2. We showed participants (n = 34, M = 4.20 years) the same exemplars used in Experiments 1 and 2; then, either immediately or after a five-minute delay, we showed children seven individual object features and asked if each one had been part of the exemplar. Children's accuracy was lower after the delay, showing that they did indeed forget individual features. Taken together, these results show that forgetting plays an important role in changing children's newly-learned categories over time.
Inhibitory control of the dominant language: Reversed language dominance is the tip of the iceberg
Theories of speech production have proposed that in contexts where multiple languages are produced, bilinguals inhibit the dominant language with the goal of making both languages equally accessible. This process often overshoots this goal, leading to a surprising pattern: better performance in the nondominant than in the dominant language, or reversed language dominance effects. However, the reliability of this effect in single-word production studies with cued language switches has been challenged by a recent meta-analysis. Correcting for errors in this analysis, we find that dominance effects are reliably reduced and reversed during language mixing. Reversed dominance has also consistently been reported in the production of connected speech elicited by reading aloud of mixed-language paragraphs. When switching, bilinguals produced translation-equivalent intrusion errors (e.g., saying a word's translation equivalent instead of the intended word) more often when intending to produce words in the dominant language. We show that this dominant-language vulnerability is not exclusive to switching out of the nondominant language and extends to non-switch words, linking connected-speech results to patterns first reported in single-word studies. Reversed language dominance is a robust phenomenon that reflects the tip of the iceberg of inhibitory control of the dominant language in bilingual language production.
A systematic evaluation of factors affecting referring expression choice in passage completion tasks
There is a long-standing controversy around the question of whether referent predictability affects pronominalization: while there are good theoretical reasons for this prediction (e.g., Arnold, 2008), the experimental evidence has been rather mixed. Here we report on three highly powered studies that manipulate a range of factors that have differed between previous studies, in order to determine more exactly under which conditions a predictability effect on pronominalization can be found. We use a constrained as well as a free reference task, and manipulate verb type, antecedent ambiguity, length of NP, and whether the stimuli are presented within a story context or not. Our results identify the story context as the single most important factor for eliciting an effect of predictability on pronoun choice, in line with Rosa and Arnold (2017) and Weatherford and Arnold (2021). We also propose a parametrization of a rational speech act model that reconciles the findings of many of the experiments in the literature.
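The rational speech act model mentioned above can be sketched generically as a literal listener plus a soft-max pragmatic speaker that trades informativity against utterance cost; referent predictability enters as the listener's prior. This is a generic RSA sketch, not the authors' proposed parametrization: the referent labels, literal semantics, cost values, and rationality parameter are illustrative assumptions.

```python
import math

def rsa_speaker(referents, utterances, lit, prior, cost, alpha=1.0):
    """Generic RSA: literal listener L0, then pragmatic speaker S1."""
    # Literal listener: P_L0(r | u) proportional to lit[u][r] * prior[r]
    L0 = {}
    for u in utterances:
        scores = {r: lit[u][r] * prior[r] for r in referents}
        z = sum(scores.values())
        L0[u] = {r: s / z for r, s in scores.items()}
    # Pragmatic speaker: P_S1(u | r) proportional to
    # exp(alpha * (log P_L0(r | u) - cost[u]))
    S1 = {}
    for r in referents:
        util = {u: math.exp(alpha * (math.log(L0[u][r]) - cost[u]))
                for u in utterances if L0[u][r] > 0}
        z = sum(util.values())
        S1[r] = {u: v / z for u, v in util.items()}
    return S1

referents = ["subject", "object"]
utterances = ["pronoun", "name"]
lit = {"pronoun": {"subject": 1.0, "object": 1.0},  # "he" fits either referent
       "name":    {"subject": 1.0, "object": 0.0}}  # the name picks out the subject
cost = {"pronoun": 0.0, "name": 1.0}                # names assumed costlier

# Predictable vs. unpredictable subject referent (hypothetical priors)
S_pred   = rsa_speaker(referents, utterances, lit, {"subject": 0.8, "object": 0.2}, cost)
S_unpred = rsa_speaker(referents, utterances, lit, {"subject": 0.2, "object": 0.8}, cost)
```

With these toy numbers, the speaker pronominalizes the subject referent more often when it is predictable (prior 0.8) than when it is not (prior 0.2), which is the predictability effect at issue.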
Number and Syllabification of Following Consonants Influence Use of Long Versus Short Vowels in English Disyllables
Spelling-to-sound translation in English is particularly complex for vowels. For example, the pronunciations of ‹a› include the long vowel of ‹paper› and ‹sacred› and the short vowel of ‹cactus› and ‹happy›. We examined the factors that are associated with the use of long versus short vowels by conducting analyses of English disyllabic words with single medial consonants and consonant sequences, and three behavioral studies in which a total of 119 university students pronounced nonwords with these structures. The vocabulary analyses show that both the number of medial consonants and their syllabification influence vowel length. Participants were influenced by these aspects of context, some of which are not explicitly taught as a part of reading instruction. Although these results point to implicit statistical learning, participants produced fewer long vowels before single medial consonants than anticipated based on our vocabulary statistics for spelling-to-sound correspondences in disyllabic words. Participants also produced more long vowels before two identical consonant letters than anticipated given these statistics. We consider the reasons for these outcomes, and we also use the behavioral data to test two models of spelling-to-sound translation.
Adjective position and referential efficiency in American Sign Language: Effects of adjective semantics, sign type and age of sign exposure
Previous research has pointed to communicative efficiency as a possible constraint on language structure. Here we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers producing more classifiers than late-exposed signers in some cases. Overall, our results suggest that linguistic, pragmatic and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure and use.
Context-based facilitation of semantic access follows both logarithmic and linear functions of stimulus probability
Stimuli are easier to process when context makes them predictable, but does context-based facilitation arise from preactivation of a limited set of relatively probable upcoming stimuli (with facilitation then linearly related to probability) or, instead, because the system maintains and updates a probability distribution across all items (with facilitation logarithmically related to probability)? We measured the N400, an index of semantic access, to words of varying probability, including unpredictable words. Word predictability was measured using both cloze probabilities and a state-of-the-art machine learning language model (GPT-2). We reanalyzed five datasets (n = 138) to demonstrate and then replicate that context-based facilitation on the N400 is graded, even among unpredictable words. Furthermore, we established that the relationship between word predictability and context-based facilitation combines linear and logarithmic functions. We argue that this composite function reveals properties of the mapping between words and semantic features and how feature- and word-related information is activated on-line.
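The composite linking function described above can be written as a weighted sum of a linear and a logarithmic term in word probability. The following is an illustrative sketch, not the authors' fitted model; the parameters a, b, c are placeholders, and the example shows why the logarithmic component predicts graded facilitation even among very unpredictable words.

```python
import math

def composite_facilitation(p, a=1.0, b=1.0, c=0.0):
    """Hypothetical composite linking function: context-based facilitation as
    a weighted sum of linear and logarithmic functions of word probability p
    (0 < p <= 1, e.g., a smoothed cloze value or a language-model estimate).
    The weights a, b and intercept c are free parameters, not fitted values."""
    return a * p + b * math.log(p) + c

# Among unpredictable words (p near 0) the linear component is nearly flat,
# but the logarithmic component still grades facilitation:
a = b = 1.0
p_low, p_high = 0.001, 0.01
linear_part = a * (p_high - p_low)                   # tiny difference
log_part = b * (math.log(p_high) - math.log(p_low))  # substantial difference
```

Near p = 0 the linear term is almost flat, so any graded facilitation among unpredictable words, as reported above, must come from the logarithmic component.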
Tradeoffs between Item and Order Information in Short-Term Memory
Recently, Guitard et al. (2021) used a two-list procedure and varied the kind of encoding carried out for each list (item or order encoding). They found that the dual-list impairment on an order test was consistently greater when the other list was also encoded for an order test than when the other list was encoded for an item test. They also found a dual-list cost relative to one list for both order and item information. Here we address the bases of these interference costs with a novel task in which, prior to each list presentation, participants are instructed to expect an item fragment completion test, an order reconstruction test, or either type of test. In five experiments, we contrast two competing accounts of item and order processing: a separate-resources hypothesis and a common resource hypothesis. We found an asymmetry, with larger dual-attention costs on order than on item tests, with the effect magnitude changing with task conditions. Our results support a version of the common resource hypothesis in which both item and order processing occur no matter which test is expected, but in which additional processing is divided between item and order codes in a manner that depends on task demands.
The pictures who shall not be named: Empirical support for benefits of preview in the Visual World Paradigm
A common critique of the Visual World Paradigm (VWP) in psycholinguistic studies is that what is designed as a measure of language processes is meaningfully altered by the visual context of the task. This is crucial, particularly in studies of spoken word recognition, where the displayed images are usually seen as just a part of the measure and are not of fundamental interest. Many variants of the VWP allow participants to sample the visual scene before a trial begins. However, this could bias their interpretations of the later speech or even lead to abnormal processing strategies (e.g., comparing the input to only preactivated working memory representations). Prior work has focused only on whether preview duration changes fixation patterns. However, preview could affect a number of processes, such as visual search, that would not challenge the interpretation of the VWP. The present study uses a series of targeted manipulations of the preview period to ask if preview alters looking behavior during a trial, and why. Results show that evidence of incremental processing and phonological competition seen in the VWP is not dependent on preview, and is not enhanced by manipulations that directly encourage phonological prenaming. Moreover, some forms of preview can eliminate nuisance variance deriving from object recognition and visual search demands in order to produce a more sensitive measure of linguistic processing. These results deepen our understanding of how the visual scene interacts with language processing to drive fixation patterns in the VWP, and reinforce the value of the VWP as a tool for measuring real-time language processing. Stimuli, data and analysis scripts are available at https://osf.io/b7q65/.
What Cognates Reveal about Default Language Selection in Bilingual Sentence Production
When producing connected speech, bilinguals often select a default language as the primary force driving the utterance. The present study investigated the cognitive mechanisms underlying default language selection. In three experiments, Spanish-English bilinguals named pictures out of context, or read aloud sentences with a single word replaced by a picture that had either a cognate or a noncognate name. Cognates speeded naming and significantly reduced switching costs. Critically, cognate effects were not modulated by sentence context. However, switch costs were larger in sentence context, which also exhibited significant language dominance effects, asymmetrical switch costs, and asymmetrical cognate facilitation effects, which were absent or symmetrical, respectively, in bare picture naming. These results suggest that default-language selection is driven primarily by boosting activation of the default language, not by proactive inhibition of the nondefault language. However, relaxation of proactive control in production of connected speech leads to greater reliance on reactive control to produce language switches relative to out-of-context naming, a contextually driven dynamic tradeoff in language control mechanisms.
What masked priming effects with abbreviations can tell us about abstract letter identities
Models of visual word recognition share the assumption that lexical access is based on abstract letter identities. The present study re-examined the assumption that this is because information about the visual form of the letter is lost early in the course of activating the abstract letter identities. The main support for this assumption has come from case-independent masked priming effects. Experiment 1 used common English words presented in lowercase as targets in lexical decision, and replicated the oft-reported case-independent identity priming effect (e.g., edge-edge = EDGE-edge). In contrast, Experiment 2, using abbreviations (e.g., DNA, CIA), produced a robust case-dependent identity priming effect (e.g., DNA-DNA < dna-DNA). Experiment 3 used the same abbreviation stimuli as primes in a semantic priming lexical decision experiment. Here the prime case effect was absent, but so was the semantic priming effect (e.g., dna-GENETICS = DNA-GENETICS = LSD-GENETICS). The results question the view that information about the visual form of the letter is lost early. We offer an alternative perspective in which the abstract nature of priming for common words stems from how these words are represented in the reader's lexicon. The implications of these findings for letter and word recognition are discussed.
Word predictability effects are linear, not logarithmic: Implications for probabilistic models of sentence comprehension
During language comprehension, we routinely use information from the prior context to help identify the meaning of individual words. While measures of online processing difficulty, such as reading times, are strongly influenced by contextual predictability, there is disagreement about the mechanisms underlying this lexical predictability effect, with different models predicting different linking functions: linear (Reichle, Rayner & Pollatsek, 2003) or logarithmic (Levy, 2008). To help resolve this debate, we conducted two highly-powered experiments (self-paced reading, N = 216; cross-modal picture naming, N = 36), and a meta-analysis of prior eye-tracking-while-reading studies (total N = 218). We observed a robust linear relationship between lexical predictability and word processing times across all three studies. Beyond their methodological implications, these findings also place important constraints on predictive processing models of language comprehension. In particular, these results directly contradict the empirical predictions of logarithmic (surprisal-based) models, while supporting a linear account of lexical prediction effects in comprehension.
Sensorimotor and interoceptive dimensions in concrete and abstract concepts
Recent theories propose that abstract concepts, compared to concrete ones, might activate interoceptive, social, and linguistic experiences to a larger extent. At the same time, recent research has underlined the importance of investigating how different sub-kinds of abstract concepts are represented. We report a pre-registered experiment, preceded by a pilot study, in which we asked participants to evaluate the difficulty of three kinds of concrete concepts (natural objects, tools, and food concepts) and three kinds of abstract concepts (Philosophical and Spiritual concepts, PS; Physical, Space, Time and Quantity concepts, PSTQ; and Emotional, Mental State and Social concepts, EMSS). While rating the words, participants were assigned to different conditions designed to interfere with conceptual processing: they were required to squeeze a ball (hand motor system activation), to chew gum (mouth motor system activation), to self-estimate their heartbeats (interoception), or to perform a motor articulatory task (inner speech involvement). In a control condition they simply rated the difficulty of the words. Interference should result in increased difficulty ratings. Bayesian analyses reveal that abstract concepts are more grounded in interoceptive experience than concrete ones, that concrete concepts are less grounded in linguistic experience (mouth motor system involvement), and that the experiences on which different kinds of abstract and concrete concepts rely differ widely. For example, within abstract concepts, interoception plays a major role for EMSS and PS concepts, while the ball-squeezing condition interferes more with PSTQ concepts, confirming that PSTQ concepts are the most concrete among abstract concepts and tap into sensorimotor manual experience. Implications of the results for current theories of conceptual representation are discussed.
Rethinking Bilingual Enhancement Effects in Associative Learning of Foreign Language Vocabulary: The Role of Proficiency in the Mediating Language
The present study investigated claims that learning vocabulary in an unfamiliar language is more efficient in bilinguals than in monolinguals and the possible effects of language proficiency and dominance. In Experiment 1, monolingual (n = 48) and bilingual (n = 96) participants learned Japanese words paired with English translations and completed cued-recall and associative-recognition tests. Accuracy did not differ across monolingual and bilingual or language dominance groups. Nevertheless, in bilinguals, higher English proficiency was associated with higher accuracy. In Experiment 2, Japanese-English bilinguals (n = 40) learned Spanish-Japanese word pairs, and higher Japanese proficiency was associated with higher accuracy. Associative strategies were reported at a higher rate in bilingual than in monolingual participants but were not associated with more accurate performance. Careful comparisons of the present and previous results support the conclusion that higher proficiency in the language through which bilinguals learn foreign vocabulary enhances associative memory, but bilingualism itself does not.
Individual differences in learning the regularities between orthography, phonology and semantics predict early reading skills
Statistical views of literacy development maintain that proficient reading requires the assimilation of myriad statistical regularities present in the writing system. Indeed, previous studies have tied statistical learning (SL) abilities to reading skills, establishing the existence of a link between the two. However, some issues are currently left unanswered, including questions regarding the underlying bases for these associations as well as the types of statistical regularities actually assimilated by developing readers. Here we present an alternative approach to study the role of SL in literacy development, focusing on individual differences among beginning readers. Instead of using an artificial task to estimate SL abilities, our approach identifies individual differences in children's reliance on statistical regularities as reflected by actual reading behavior. We specifically focus on individuals' reliance on regularities in the mapping between print and speech versus associations between print and meaning in a word naming task. We present data from 399 children, showing that those whose oral naming performance is impacted more by print-speech regularities and less by associations between print and meaning have better reading skills. These findings suggest that a key route by which SL mechanisms impact developing reading abilities is via their role in the assimilation of sub-lexical regularities between printed and spoken language and, more generally, in detecting regularities that are more reliable than others. We discuss the implications of our findings for both SL and reading theories.
Priming Effects on Subsequent Episodic Memory: Testing Attentional Accounts
Prior work has shown that priming improves subsequent episodic memory, i.e., memory for the context in which an item is presented is improved if that item has been seen previously. We previously attributed this effect of "Priming on Subsequent Episodic Memory" (PSEM) to a sharpening of the perceptual/conceptual representation of an item, which improves its associability with an (arbitrary) background context, by virtue of increasing prediction error (Greve et al., 2017). However, an alternative explanation is that priming reduces the attentional resources needed to process an item, leaving more residual resources to encode its context. We report four experiments that tested this alternative, resource-based hypothesis, based on the assumption that reducing the available attentional resources through a concurrent load would reduce the size of the PSEM. In no experiment was there an interaction between attentional load and priming on mean memory performance, nor a consistent correlation across participants between priming and PSEM, failing to support the resource account. However, formal modelling revealed that a resource account is not, in fact, inconsistent with our data, by confirming that nonlinear (sigmoidal) resource-performance functions can reproduce any interaction with load, and, more strikingly, any pattern of correlation between priming and PSEM. This work not only reinforces the difficulty of refuting attentional resource accounts of memory encoding, but also questions the value of load manipulations more generally.
To catch a Snitch: Brain potentials reveal variability in the functional organization of (fictional) world knowledge during reading
We harnessed the temporal sensitivity of event-related brain potentials (ERPs) alongside individual differences in Harry Potter (HP) knowledge to investigate the extent to which the availability and timing of information relevant for real-time written word processing are influenced by variation in domain knowledge. We manipulated meaningful (category, event) relationships between sentence fragments about HP stories and their sentence-final words. During word-by-word reading, N400 amplitudes to (a) linguistically supported and (b) unsupported but meaningfully related, but not to (c) unsupported, unrelated sentence endings varied with HP domain knowledge. Single-trial analyses revealed that only the N400s to linguistically supported (but not to either type of unsupported) sentence-final words varied as a function of whether individuals knew (or could remember) the correct (supported) ending for each HP "fact." We conclude that the quick availability of information relevant for word understanding in sentences is a function of individuals' knowledge of both specific facts and the domain to which the facts belong. During written sentence processing, as domain knowledge increases, individuals clearly make use of the relevant knowledge in that domain, systematically organized around themes, events, and categories, to the extent they have it.
Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study
Cue-based retrieval theories in sentence processing predict two classes of interference effect: (i) inhibitory interference is predicted when multiple items match a retrieval cue (cue overloading leads to an overall slowdown in reading time); and (ii) facilitatory interference arises when a retrieval target as well as a distractor only partially match the retrieval cues (this partial matching leads to an overall speedup in retrieval time). Inhibitory interference effects are widely observed, but facilitatory interference apparently has an exception: reflexives have been claimed to show no facilitatory interference effects. Because this claim is based on underpowered studies, we conducted a large-sample experiment that investigated both facilitatory and inhibitory interference. In contrast to previous studies, we find facilitatory interference effects in reflexives. We also present a quantitative evaluation of the cue-based retrieval model of Engelmann et al. (2019) with respect to the reflexives data. Data and code are available from: https://osf.io/reavs/.
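The inhibitory half of the two effect classes can be illustrated with an ACT-R style fan calculation of the sort used in cue-based retrieval models: each extra item matching a cue raises that cue's fan, lowering the target's activation and slowing retrieval. This is a minimal sketch, not the Engelmann et al. (2019) model itself; the cue names and parameter values (W, S_max, F) are assumptions for illustration.

```python
import math

def activation(base, cues, fans, W=1.0, S_max=1.5):
    """ACT-R style activation: each matching cue j contributes
    W * (S_max - log(fan_j)), where fan_j counts items matching cue j.
    Parameter values here are illustrative, not fitted."""
    return base + sum(W * (S_max - math.log(fans[c])) for c in cues)

def latency(A, F=0.2):
    """Retrieval latency falls exponentially with activation."""
    return F * math.exp(-A)

# Inhibitory interference: a distractor that also matches one cue raises
# that cue's fan from 1 to 2, lowering target activation -> slower retrieval.
t_no_distractor = latency(activation(0.0, ["subject", "singular"],
                                     {"subject": 1, "singular": 1}))
t_distractor    = latency(activation(0.0, ["subject", "singular"],
                                     {"subject": 1, "singular": 2}))
```

Facilitatory interference additionally requires noisy, probabilistic partial matching, so that a partially matching distractor sometimes wins the retrieval race; that stochastic component is omitted from this sketch.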
Opacity, Transparency, and Morphological Priming: A Study of Prefixed Verbs in Dutch
A basic question for the study of the mental lexicon is whether there are morphological representations and processes that are independent of phonology and semantics. According to a prominent tradition, morphological relatedness requires semantic transparency: semantically transparent words are related in meaning to their stems, while semantically opaque words are not. This study examines the question of morphological relatedness using intra-modal auditory priming by Dutch prefixed verbs. The key conditions involve semantically transparent prefixed primes (e.g., aanbieden 'offer', with the stem bieden, also 'offer') and opaque primes (e.g., verbieden 'forbid'). Results show robust facilitation for both transparent and opaque pairs; phonological (Experiment 1) and semantic (Experiment 2) controls rule out the possibility that these other types of relatedness are responsible for the observed priming effects. The finding of facilitation with opaque primes suggests that morphological processing is independent of semantic and phonological representations. Accordingly, the results are incompatible with theories that make semantic overlap a necessary condition for relatedness, and favor theories in which words may be related in ways that do not require shared meaning. The general discussion considers several specific proposals along these lines, and compares and contrasts questions about morphological relatedness of the type found here with the different but related question of whether there is morphological decomposition of complex forms.
Repeat After Us: Syntactic Alignment is Not Partner-Specific
Conversational partners match each other's speech, a process known as alignment. Such alignment can be partner-specific, when speakers match particular partners' production distributions, or partner-independent, when speakers match aggregated linguistic statistics across their input. However, partner-specificity has only been assessed in situations where it had clear communicative utility and where non-alignment might cause communicative difficulty. Here, we investigate whether speakers align partner-specifically even without a communicative need, and thus whether the mechanism driving alignment is sensitive to communicative and social factors of the linguistic context. In five experiments, participants interacted with two experimenters, each with unique and systematic syntactic preferences (e.g., Experimenter A only produced double object datives and Experimenter B only produced prepositional datives). Across multiple exposure conditions, participants engaged in partner-independent but not partner-specific alignment. Thus, when partner-specificity does not add communicative utility, speakers align to aggregate, partner-independent statistical distributions, supporting a communicatively-modulated mechanism underlying alignment.
Individual differences in subphonemic sensitivity and phonological skills
Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have mostly been assumed to have noisy (or "fuzzy") phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative account posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (i.e., as phonetically distinct variants of a single phonemic category). On both accounts, a mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification.
Syntactic Entrainment: The Repetition of Syntactic Structures in Event Descriptions
Syntactic structures can convey certain (subtle) emergent properties of events. For example, the double-object dative ("the doctor is giving a patient pills") can convey the successful transfer of possession, whereas its syntactic alternative, the prepositional dative ("the doctor is giving pills to a patient"), conveys just a transfer to a location. Four experiments explore how syntactic structures may become associated with particular semantic content - such as these emergent properties of events. Experiment 1 provides evidence that speakers form associations between syntactic structures and particular event depictions. Experiment 2 shows that these associations also hold for different depictions of the same events. Experiments 3 and 4 implicate representations of the semantic features of events in these associations. Taken together, these results reveal an effect we term syntactic entrainment that is well positioned to reflect the recalibration of the strength of the mappings or associations that allow syntactic structures to convey emergent properties of events.
Mapping non-native pitch contours to meaning: Perceptual and experiential factors
Infants show interesting patterns of flexibility and constraint early in word learning. Here, we explore perceptual and experiential factors that drive associative learning of labels that differ in pitch contour. Contrary to a salience-based hypothesis, in Experiment 1 English-learning 14-month-olds failed to map acoustically distinctive level and dipping labels to novel referents, even though they discriminated the labels when no potential referents were present. Conversely, infants readily mapped the less distinctive rising and dipping labels. In Experiment 2, we found that the degree of pitch variation in labels also did not account for learning. Instead, English-learning infants only learned if one of the labels had a rising pitch contour. We argue that experience with hearing and/or producing native language prosody may lead infants to initially over-interpret the role rising pitch plays in differentiating words. Together, our findings suggest that multiple factors contribute to whether specific acoustic forms will function as candidate object labels.
Detecting when timeseries differ: Using the Bootstrapped Differences of Timeseries (BDOTS) to analyze Visual World Paradigm data (and more)
In recent decades, major advances in the language sciences have been built on real-time measures of language and cognitive processing, such as mouse-tracking, event-related potentials, and eye-tracking in the visual world paradigm. These measures yield densely sampled timeseries that can be highly revealing of the dynamics of cognitive processing. Despite these methodological advances, however, existing statistical approaches for timeseries analyses have often lagged behind. Here, we present a new statistical approach, the Bootstrapped Differences of Timeseries (BDOTS), that can estimate the precise time window at which two timeseries differ. BDOTS makes minimal assumptions about the error distribution, uses a custom family-wise error correction, and can be flexibly adapted to a variety of applications. This manuscript presents the theoretical basis of the approach, describes implementational issues (in the associated R package), and illustrates the technique with an analysis of an existing dataset. Pitfalls and hazards are also discussed, along with suggestions for reporting in the literature.
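The core logic of comparing two timeseries via bootstrapping can be illustrated with a simplified sketch. The snippet below is not the authors' R package: `bootstrap_difference_window` is a hypothetical helper, and it uses a plain Bonferroni correction rather than the custom family-wise correction BDOTS actually employs.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_difference_window(group_a, group_b, n_boot=500, alpha=0.05, seed=0):
    """Flag timepoints at which two groups' timeseries reliably differ.

    group_a, group_b: arrays of shape (n_participants, n_timepoints).
    Returns a boolean mask over timepoints. NOTE: this sketch applies a
    plain Bonferroni correction; BDOTS itself uses a less conservative
    custom correction.
    """
    rng = np.random.default_rng(seed)
    n_a, n_t = group_a.shape
    n_b = group_b.shape[0]

    # Bootstrap the group-mean difference at every timepoint by
    # resampling participants with replacement.
    diffs = np.empty((n_boot, n_t))
    for i in range(n_boot):
        diffs[i] = (group_a[rng.integers(0, n_a, n_a)].mean(axis=0)
                    - group_b[rng.integers(0, n_b, n_b)].mean(axis=0))

    # z-like statistic: observed mean difference over its bootstrap SE.
    observed = group_a.mean(axis=0) - group_b.mean(axis=0)
    z = observed / diffs.std(axis=0, ddof=1)

    # Family-wise correction across all timepoints tested.
    crit = norm.ppf(1 - (alpha / 2) / n_t)
    return np.abs(z) > crit
```

On simulated data where two groups diverge halfway through the trial, the contiguous run of flagged timepoints gives the estimated divergence window.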
Individual differences in syntactic processing: Is there evidence for reader-text interactions?
There remains little consensus about whether there exist meaningful individual differences in syntactic processing and, if so, what explains them. We argue that this partially reflects the fact that few psycholinguistic studies of individual differences include multiple constructs, multiple measures per construct, or tests of measurement reliability. Here, we replicated three major syntactic phenomena from the psycholinguistic literature: use of verb distributional statistics, the difficulty of object- versus subject-extracted relative clauses, and the resolution of relative clause attachment ambiguities. We examined whether individual differences in these phenomena could be predicted by language experience or general cognitive abilities (phonological ability, verbal working memory capacity, inhibitory control, perceptual speed). We found correlations between individual differences and offline, but not online, syntactic phenomena. Condition effects on reading time were not consistent, limiting their ability to correlate with other measures. We suggest that this might explain the controversy over individual differences in language processing.
Adults with Poor Reading Skills, Older Adults, and College Students: The Meanings They Understand During Reading Using a Diffusion Model Analysis
When a word is read in a text, the aspects of its meaning that are encoded should be those relevant to the text, not those that are irrelevant. We tested whether older adults, college students, and adults with poor literacy skills accomplish such contextually relevant encoding. Participants read short stories, which were followed by true/false test sentences. Among these were sentences that matched the contextually relevant meaning of a word in a story and sentences that matched a different meaning. We measured the speed and accuracy of responses to the test sentences and used a decision model to separate the information that a reader encodes from the reader's speed/accuracy tradeoff settings. We found that all three groups encoded contextually relevant meanings. The findings illustrate how a decision-making model combined with tests of particular comprehension processes can further our understanding of reading skill.
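The logic of separating encoded information from response caution can be made concrete with a minimal drift-diffusion sketch. Everything here is an illustrative assumption (the function `simulate_ddm`, its parameter values, and the mapping of drift to encoding quality), not the authors' fitted model: drift rate stands in for how well the relevant meaning was encoded, while the decision boundary captures the speed/accuracy tradeoff setting.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=1000, dt=0.001, noise=1.0, seed=0):
    """Simulate a basic drift-diffusion decision process.

    drift    -- quality of the accumulated evidence (here, a stand-in for
                how well the contextually relevant meaning was encoded)
    boundary -- response caution (the speed/accuracy tradeoff setting)
    Returns (mean decision time in seconds, proportion correct).
    """
    rng = np.random.default_rng(seed)
    step_sd = noise * np.sqrt(dt)  # per-step diffusion noise
    times, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        # Accumulate noisy evidence until either boundary is crossed.
        while abs(x) < boundary:
            x += drift * dt + step_sd * rng.standard_normal()
            t += dt
        times.append(t)
        correct.append(x >= boundary)  # upper boundary = correct response
    return float(np.mean(times)), float(np.mean(correct))
```

Raising the boundary slows responses but raises accuracy (a tradeoff setting), whereas raising the drift raises accuracy without slowing responses (better encoding); this asymmetry is what lets a fitted model attribute group differences to one source or the other.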
Language unifies relational coding: The roles of label acquisition and accessibility in making flexible relational judgments
Language likely structures spatial judgments, but how it achieves this remains unclear. We examined the development of relative spatial judgments across verbal and nonverbal tasks in children between the ages of 5 and 10 years. We found that the verbal ability to make some of these judgments preceded the other verbal judgments and all nonverbal judgments. We also found that children's nonverbal performance improved only when the labels were accessible - as opposed to only having been acquired. Our findings further indicate that accessing the correct term was not needed for enhanced performance. The results suggest that accessing language unifies different instantiations of a relation into a single representation.
The company objects keep: Linking referents together during cross-situational word learning
Learning the meanings of words involves not only linking individual words to referents but also building a network of connections among entities in the world, concepts, and words. Previous studies reveal that infants and adults track the statistical co-occurrence of labels and objects across multiple ambiguous training instances to learn words. However, it is less clear whether, given distributional or attentional cues, learners also encode associations amongst the novel objects. We investigated the consequences of two types of cues that highlighted object-object links in a cross-situational word learning task: distributional structure - how frequently the referents of novel words occurred together - and visual context - whether the referents were seen on matching backgrounds. Across three experiments, we found that in addition to learning novel words, adults formed connections between frequently co-occurring objects. These findings indicate that learners exploit statistical regularities to form multiple types of associations during word learning.
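The co-occurrence tracking described above can be sketched in a few lines. The trials, labels, objects, and helper names below are invented for illustration: each trial is ambiguous on its own, but tallying label-object statistics across trials disambiguates the mapping, and the same trial stream also supports the object-object links the study examines.

```python
# Toy cross-situational learning data (labels, objects, and trial
# structure are hypothetical): each trial pairs spoken labels with the
# set of objects present, without marking which label names which object.
trials = [
    (["dax", "blick"], ["ball", "cup"]),
    (["dax", "toma"], ["ball", "shoe"]),
    (["blick", "toma"], ["cup", "shoe"]),
    (["dax", "blick"], ["ball", "cup"]),
]

def label_object_counts(trials):
    """Tally label-object co-occurrences across ambiguous trials."""
    counts = {}
    for labels, objects in trials:
        for label in labels:
            for obj in objects:
                counts[(label, obj)] = counts.get((label, obj), 0) + 1
    return counts

def best_referent(label, counts):
    """Pick the object that co-occurred with the label most often."""
    candidates = {obj: c for (lab, obj), c in counts.items() if lab == label}
    return max(candidates, key=candidates.get)

def object_object_counts(trials):
    """Tally object-object co-occurrences -- the distributional regularity
    that lets learners link referents to each other, not just to labels."""
    pairs = {}
    for _, objects in trials:
        for i, o1 in enumerate(objects):
            for o2 in objects[i + 1:]:
                key = tuple(sorted((o1, o2)))
                pairs[key] = pairs.get(key, 0) + 1
    return pairs
```

In this toy stream, "dax" co-occurs most often with the ball, and the ball and cup appear together more often than other object pairs, mirroring the two kinds of associations the experiments probe.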