Longitudinal trajectories of the neural encoding mechanisms of speech-sound features during the first year of life
Infants quickly recognize the sounds of their native language, perceiving the spectrotemporal acoustic features of speech. However, the underlying neural machinery remains unclear. We used an auditory evoked potential termed the frequency-following response (FFR) to trace the maturation of neural encoding for two speech-sound characteristics: voice pitch and temporal fine structure. Thirty-seven healthy term neonates were tested at birth and retested at the ages of six and twelve months. Results revealed a reduction in the onset latency of neural phase-locking to the stimulus envelope from birth to six months, stabilizing by twelve months. While neural encoding of voice pitch remained consistent across ages, temporal fine structure encoding matured rapidly from birth to six months, without further improvement from six to twelve months. Results highlight the critical importance of the first six months of life in the maturation of neural encoding mechanisms that are crucial for phoneme discrimination during early language acquisition.
Language proficiency is associated with neural representational dimensionality of semantic concepts
Previous studies suggest that semantic concepts are characterized by high-dimensional neural representations and that language proficiency affects semantic processing. However, it is not clear whether language proficiency modulates the dimensional representations of semantic concepts at the neural level. To address this question, the present study adopted principal component analysis (PCA) and representational similarity analysis (RSA) to examine the differences in representational dimensionalities (RDs) and in semantic representations between words in highly proficient (Chinese) and less proficient (English) language. PCA results revealed that language proficiency increased the dimensions of lexical representations in the left inferior frontal gyrus, temporal pole, inferior temporal gyrus, supramarginal gyrus, angular gyrus, and fusiform gyrus. RSA results further showed that these regions represented semantic information and that higher semantic representations were observed in highly proficient language relative to less proficient language. These results suggest that language proficiency is associated with the neural representational dimensionality of semantic concepts.
Delineating Region-Specific contributions and connectivity patterns for semantic association and categorization through ROI and Granger causality analysis
The neural mechanisms supporting semantic association and categorization are examined in this study. Semantic association involves linking concepts through shared themes, events, or scenes, while semantic categorization organizes meanings hierarchically based on defining features. Twenty-three adults participated in an fMRI study performing categorization and association judgment tasks. Results showed stronger activation in the inferior frontal gyrus during association and marginally weaker activation in the posterior middle temporal gyrus (pMTG) during categorization. Granger causality analysis revealed bottom-up connectivity from the visual cortex to the hippocampus during semantic association, whereas semantic categorization exhibited strong reciprocal connections between the pMTG and frontal semantic control regions, together with information flow from the visual association area and hippocampus to the pars triangularis. We propose that demands on semantic retrieval, precision of semantic representation, perceptual experiences and world knowledge result in observable differences between these two semantic relations.
Subject relative clause preference in Basque: ERP evidence
Subject-object processing within relative clause (RC) attachments exhibits cross-linguistic asymmetries influenced by various factors, including filler-gap linear or structural distance, morphological case marking, and subject-first preferences (Lau & Tanaka, 2021). In the Basque language, filler-gap linear distance and morphological case marking have been posited as explanatory factors for the observed object relative clause (ORC) preference in prenominal RCs (Carreiras et al., 2010). However, a recent study by Yetano et al. (2019) identified a behavioral preference for subject relative clause (SRC) constructions in Basque postnominal RCs. To ascertain the primary determinant of RC processing, we used EEG signatures to examine subject-object preferences in temporarily ambiguous Basque postnominal RCs. Analysis of event-related potentials (ERPs) revealed an SRC preference: ORCs elicited augmented negative (LAN: 200-400 ms) and positive (P600: 700-900 ms) components compared to SRCs. Our findings suggest that preferences in RC disambiguation are predominantly shaped by filler-gap linear distance and/or a subject-first bias.
Mapping the basal temporal language network: a SEEG functional connectivity study
The Basal Temporal Language Area (BTLA) is recognized in the epilepsy surgery setting when cortical electrical stimulation (CES) of the ventral temporal cortex (VTC) triggers anomia or paraphasia during naming tasks. Despite acknowledging a ventral language stream, current cognitive language models fail to properly integrate this entity. In this SEEG study, we used cortico-cortical evoked potentials in nine patients with epilepsy to assess and compare the effective connectivity of 73 sites in the left VTC, of which 26 were deemed eloquent for naming after CES (BTLA). The connectivity of eloquent sites supports the existence of a basal temporal language network (BTLN) structured around posterior projectors, while the fusiform gyrus (FG) behaved as an integrator. The BTLN was strongly connected to the amygdala and hippocampus, unlike the non-eloquent sites, except for the anterior FG. These observations support the FG as a multimodal functional hub and add to our understanding of ventral temporal language processing.
Word and morpheme frequency effects in naming Mandarin Chinese compounds: More than a replication
The question of whether compound words are stored in our mental lexicon in a decomposed or full-listing manner prompted Janssen and colleagues (2008) to investigate the representation of compounds using word and morpheme frequency manipulations. Our study replicated theirs using a new set of stimuli from a spoken corpus and incorporating EEG data for a more detailed investigation. Although ERP analyses revealed no word frequency or morpheme frequency effects across conditions, behavioral outcomes indicated that naming Mandarin compounds is not sensitive to word frequency. Instead, the response times revealed a morpheme frequency effect in naming Mandarin compounds, which contrasted with the findings of Janssen and colleagues. These findings challenge the full-listing model and instead support the decompositional model.
Lateralization of activation within the superior temporal gyrus during speech perception in sleeping infants is associated with subsequent language skills in kindergarten: A passive listening task-fMRI study
Brain asymmetries are hypothesized to reduce functional duplication and thus have evolutionary advantages. The goal of this study was to examine whether early brain lateralization contributes to skill development within the speech-language domain. To achieve this goal, 25 infants (2-13 months old) underwent behavioral language examination and fMRI during sleep while listening to forward and backward speech, and then were assessed on various language skills at 55-69 months old. We observed that infant functional lateralization of the superior temporal gyrus (STG) for forward > backward speech was associated with phonological, vocabulary, and expressive language skills 4 to 5 years later. However, we failed to observe that infant language skills or the anatomical lateralization of STG were related to subsequent language skills. Overall, our findings suggest that infant functional lateralization of STG for speech perception may scaffold subsequent language acquisition, supporting the hypothesis that functional hemisphere asymmetries are advantageous.
Revisiting nonword repetition as a clinical marker of developmental language disorder: Evidence from monolingual and bilingual L2 Cantonese
Cross-linguistically, nonword repetition (NWR) tasks have been found to differentiate between typically developing (TD) children and those with Developmental Language Disorder (DLD), even when second-language TD (L2-TD) children are considered. This study examined such group differences in Cantonese. Fifty-seven age-matched children (19 monolingual DLD (MonDLD); 19 monolingual TD (MonTD); and 19 L2-TD) repeated language-specific nonwords with varying lexicality levels and Cantonese-adapted quasi-universal nonwords. At whole-nonword level scoring, on the language-specific, High-Lexicality nonwords, MonDLD scored significantly below MonTD and L2-TD groups which did not differ significantly from each other. At syllable-level scoring, the same pattern of group differentiation was found on quasi-universal nonwords. These findings provide evidence from a typologically distinct and understudied language that NWR tasks can capture significant TD/DLD group differences, even for L2-Cantonese TD children with reduced language experience. Future studies should compare the performance of an L2-DLD group and evaluate the sensitivity and specificity of Cantonese NWR.
Temporary ambiguity and memory for the context of spoken language in adults with moderate-severe traumatic brain injury
Language is processed incrementally, with addressees considering multiple candidate interpretations as speech unfolds, supporting the retention of these candidate interpretations in memory. For example, after interpreting the utterance, "Click on the striped bag", listeners exhibit better memory for non-mentioned items in the context that were temporarily consistent with what was said (e.g., dotted bag), vs. not consistent (e.g., dotted tie), reflecting the encoding of linguistic context in memory. Here, we examine the impact of moderate-severe traumatic brain injury (TBI) on memory for the contexts of language use. Participants with moderate-severe TBI (N=71) and non-injured comparison participants (NC, N=85) interpreted temporarily ambiguous utterances in rich contexts. A subsequent memory test demonstrated that participants with TBI exhibited impaired memory for context items and an attenuated memory advantage for mentioned items compared to NC participants. Nonetheless, participants with TBI showed similar, although attenuated, patterns in memory for temporarily-activated items as NC participants.
Is frontal EEG gamma power a neural correlate of language in toddlerhood? An examination of late talking and expressive language ability
Few studies have examined neural correlates of late talking in toddlers, which could aid in understanding etiology and improving diagnosis of developmental language disorder (DLD). Greater frontal gamma activity has been linked to better language skills, but findings vary by risk for developmental disorders, and this has not been investigated in late talkers. This study examined whether frontal gamma power (30-50 Hz), from baseline-state electroencephalography (EEG), was related to DLD risk (categorical late talking status) and a continuous measure of expressive language in n = 124 toddlers. Frontal gamma power was significantly associated with late talker status when controlling for demographic factors and concurrent receptive language (β = 1.96, McFadden's pseudo-R² = 0.21). Demographic factors and receptive language did not significantly moderate the association between frontal gamma power and late talker status. A continuous measure of expressive language ability was not significantly associated with gamma (r = -0.07). Findings suggest that frontal gamma power may be useful in discriminating between groups of children that differ in DLD risk, but not for expressive language along a continuous spectrum of ability.
Subcortical volume and language proficiency in bilinguals and monolinguals: A structural MRI study
The current study focused on heritage bilinguals, an understudied yet highly prominent bilingual population in the U.S. We combined data from eight MRI studies to examine the relationship between language experience and subcortical gray matter volume in 215 heritage Spanish-English bilinguals and 145 English monolinguals, within and between groups. For bilinguals, higher Spanish (L1) proficiency was related to less volume in the bilateral globus pallidus, whereas higher English (L2) proficiency and earlier English AoA were related to greater volume in the right thalamus, left accumbens, and bilateral globus pallidus. For monolinguals, higher English proficiency was associated with greater volume only in the right pallidum. These results suggest that subcortical gray matter structures are related to the learning of a second language. Future research is encouraged to examine subcortical adaptation in relation to L1 and L2 acquisition from a developmental perspective.
Neural changes in sign language vocabulary learning: Tracking lexical integration with ERP measures
The present study aimed to investigate the neural changes related to the early stages of sign language vocabulary learning. Hearing non-signers were exposed to Catalan Sign Language (LSC) signs in three laboratory learning sessions over the course of a week. Participants completed two priming tasks designed to examine learning-related neural changes by means of N400 responses. In a semantic decision task, participants evaluated whether written Catalan word pairs were semantically related or not. The experimental manipulation included prime-target phonological overlap (or not) of the corresponding LSC sign translations. In an LSC-primed lexical decision task, participants saw pairs of signs and had to determine whether the targets were real LSC signs or not. The experimental design included pairs of signs that were semantically related or unrelated. The results of the LSC lexical decision task showed N400 lexicality and semantic priming effects in the third session. Also in the third session, N400 effects related to the activation of LSC phonology were observed during word processing in the semantic decision task. Overall, our findings suggest rapid neural changes occurring during the initial stages of intensive sign language vocabulary training. The results are discussed in relation to the time course of lexicality and semantic effects, as well as their potential relation to linguistic features of sign languages.
Transcranial direct current stimulation over the left inferior frontal gyrus improves language production and comprehension in post-stroke aphasia: A double-blind randomized controlled study
Transcranial direct current stimulation (tDCS) targeting Broca's area has shown promise for augmenting language production in post-stroke aphasia (PSA). However, previous research has been limited by small sample sizes and inconsistent outcomes. This study employed a double-blind, parallel, randomized, controlled design to evaluate the efficacy of anodal Broca's tDCS, paired with 20-minute speech and language therapy (SLT) focused primarily on expressive language, across 5 daily sessions in 45 chronic PSA patients. Utilizing the Western Aphasia Battery-Revised, which assesses a spectrum of linguistic abilities, we measured changes in both expressive and receptive language skills before and after intervention. The tDCS group demonstrated significant improvements over sham in aphasia quotient, auditory verbal comprehension, and spontaneous speech. Notably, tDCS improved both expressive and receptive domains, whereas sham only benefited expression. These results underscore the broader linguistic benefits of Broca's area stimulation and support the integration of tDCS with SLT to advance aphasia rehabilitation.
Cross-linguistic and acoustic-driven effects on multiscale neural synchrony to stress rhythms
We investigated how neural oscillations encode the hierarchical nature of stress rhythms in speech and how stress processing varies with language experience. By measuring phase synchrony of multilevel EEG-acoustic tracking and intra-brain cross-frequency coupling, we show that the encoding of stress involves different neural signatures (delta rhythms = stress foot rate; theta rhythms = syllable rate), is stronger for amplitude vs. duration stress cues, and induces nested delta-theta coherence mirroring the stress-syllable hierarchy in speech. Only native English, but not Mandarin, speakers exhibited enhanced neural entrainment at the central stress (2 Hz) and syllable (4 Hz) rates intrinsic to natural English. English individuals with superior cortical stress-tracking capabilities also displayed stronger neural hierarchical coherence, highlighting a nuanced interplay between the internal nesting of brain rhythms and external entrainment rooted in language-specific speech rhythms. Our cross-language findings reveal that brain-speech synchronization is not a purely "bottom-up" process but benefits from "top-down" modulation by listeners' language-specific experience.
Native language background affects the perception of duration and pitch
Estonian is a quantity language with both a primary duration cue and a secondary pitch cue, whereas Chinese is a tonal language with a dominant pitch use. Using a mismatch negativity experiment and a behavioral discrimination experiment, we investigated how native language background affects the perception of duration only, pitch only, and duration plus pitch information. Chinese participants perceived duration in Estonian as meaningless acoustic information due to a lack of phonological use of duration in their native language; however, they demonstrated a better pitch discrimination ability than Estonian participants. On the other hand, Estonian participants outperformed Chinese participants in perceiving the non-speech pure tones that resembled the Estonian quantity (i.e., containing both duration and pitch information). Our results indicate that native language background affects the perception of duration and pitch and that such an effect is not specific to processing speech sounds.
The bidirectional influence between emotional language and inhibitory control in Chinese: An ERP study
The bidirectional influence between emotional language and inhibitory processes has been studied in alphabetic languages, highlighting the need for additional investigation in nonalphabetic languages to explore potential cross-linguistic differences. The present ERP study investigated the bidirectional influence in the context of Mandarin, a language with unique linguistic features and neural substrates. In Experiment 1, emotional adjectives preceded the Go/NoGo cue. The ERPs revealed that negative emotional language facilitated inhibitory control. In Experiment 2, with a Go/NoGo cue preceding the emotional language, the study confirmed that inhibitory control facilitated the semantic integration of negative language in Chinese, whereas the inhibited state may not affect deeper refinement of the emotional content. However, no interaction was observed in positive emotional language processing. These results suggest an interaction between inhibitory control and negative emotional language processing in Chinese, supporting the integrative emotion-cognition view.
Transcranial photobiomodulation on the left inferior frontal gyrus enhances Mandarin Chinese L1 and L2 complex sentence processing performances
This study investigated the causal enhancing effect of transcranial photobiomodulation (tPBM) over the left inferior frontal gyrus (LIFG) on syntactically complex Mandarin Chinese first language (L1) and second language (L2) sentence processing performance. Two groups of participants (L1 and L2; thirty per group) were recruited to receive the double-blind, sham-controlled tPBM intervention over the LIFG, followed by the sentence processing task, the verbal working memory (WM) task, and the visual WM task. Results revealed a consistent pattern for both groups: (a) tPBM enhanced sentence processing performance but not verbal WM performance for linear processing of unstructured sequences or visual WM performance; (b) participants with lower sentence processing performance under sham tPBM benefited more from active tPBM. Taken together, the current study substantiates that tPBM enhances L1 and L2 sentence processing and could serve as a promising, cost-effective noninvasive brain stimulation (NIBS) tool for future applications in upregulating the human language faculty.
Neural underpinnings of sentence reading in deaf, native sign language users
The goal of this study was to investigate sentence-level reading circuits in deaf native signers, a unique group of deaf people who are immersed in a fully accessible linguistic environment from birth, and hearing readers. Task-based fMRI, functional connectivity and lateralization analyses were conducted. Both groups exhibited overlapping brain activity in the left-hemispheric perisylvian regions in response to a semantic sentence task. We found increased activity in left occipitotemporal and right frontal and temporal regions in deaf readers. Lateralization analyses did not confirm more rightward asymmetry in deaf individuals. Deaf readers exhibited weaker functional connectivity between inferior frontal and middle temporal gyri and enhanced coupling between temporal and insular cortex. In conclusion, despite the shared functional activity within the semantic reading network across both groups, our results suggest greater reliance on cognitive control processes for deaf readers, possibly resulting in greater effort required to perform the task in this group.
Language and communication functioning in children and adolescents with agenesis of the corpus callosum
The corpus callosum, the largest white matter inter-hemispheric pathway, is involved in language and communication. In a cohort of 15 children and adolescents (8-15 years) with developmental absence of the corpus callosum (AgCC), this study aimed to describe language and everyday communication functioning, and explored the role of anatomical factors, social risk, and non-verbal IQ in these outcomes. Standardised measures of language and everyday communication functioning, intellectual ability and social risk were used. AgCC classification and anterior commissure volume, a potential alternative pathway, were extracted from T1-weighted images. Participants with AgCC showed reduced receptive and expressive language compared with test norms, and high rates of language and communication impairments. Complete AgCC, higher social risk and lower non-verbal IQ were associated with communication difficulties. Anterior commissure volume was not associated with language and communication. Recognising heterogeneity in language and communication functioning enhances our understanding and suggests specific focuses for potential interventions.
Individual differences in visual pattern completion predict adaptation to degraded speech
Recognizing acoustically degraded speech relies on predictive processing whereby incomplete auditory cues are mapped to stored linguistic representations via pattern recognition processes. While listeners vary in their ability to recognize degraded speech, performance improves when a written transcription is presented, allowing completion of the partial sensory pattern to preexisting representations. Building on work characterizing predictive processing as pattern completion, we examined the relationship between domain-general pattern recognition and individual variation in degraded speech learning. Participants completed a visual pattern recognition task to measure individual-level tendency towards pattern completion. Participants were also trained to recognize noise-vocoded speech with written transcriptions and tested on speech recognition pre- and post-training using a retrieval-based transcription task. Listeners significantly improved in recognizing speech after training, and pattern completion on the visual task predicted improvement for novel items. The results implicate pattern completion as a domain-general learning mechanism that can facilitate speech adaptation in challenging contexts.
Native and non-native parsing of adjective placement - An ERP study of Mandarin and English sentence processing
Adjectives in English and Mandarin are typically prenominal, but the corresponding grammatical rules vary in subtle ways. Our event-related potential (ERP) study shows that native speakers of both languages rely on similar processing mechanisms when reading sentences with anomalous noun-adjective order (e.g., the vase *white) in their first language, reflected by a biphasic N400-P600 profile. Only Mandarin native speakers showed an additional N400 on grammatical adjectives (e.g., the white vase), potentially due to atypical word-by-word presentation of lexicalized compounds. English native speakers with advanced Mandarin proficiency were tested in both languages. They processed ungrammatical noun-adjective pairs in English like English monolinguals (N400-P600), but only exhibited an N400 in Mandarin. The absent P600 effect corresponded to their (surprisingly) low proficiency with noun-adjective violations in Mandarin, questioning simple rule transfer from English grammar.
Neural oscillations during predictive sentence processing in young children
The neural correlates of predictive processing in language, critical for efficient sentence comprehension, are well documented in adults. Specifically, adults exhibit alpha power (9-12 Hz) suppression when processing high versus low predictability sentences. This study explores whether young children exhibit similar neural mechanisms. We analyzed EEG data from 29 children aged 3-5 years listening to sentences of varying predictability. Our results revealed significant neural oscillation differences in the 5-12 Hz range between high and low predictability sentences, similar to adult patterns. Crucially, the degree of these differences correlated with children's language abilities. These findings are the first to demonstrate the neural basis of predictive processing in young children and its association with language development.
An electrophysiological investigation of referential communication
A key aspect of linguistic communication involves semantic reference to objects. Here, we investigate neural responses to objects when reference is disrupted, e.g., "The connoisseur tasted *that wine…" vs. "…*that roof…". Without any previous linguistic context or visual gesture, use of the demonstrative determiner "that" renders interpretation at the noun incoherent. This incoherence is not based on knowledge of how the world plausibly works but instead on grammatical rules of reference. Whereas event-related potential (ERP) responses to sentences such as "The connoisseur tasted the wine…" vs. "…the roof…" would result in an N400 effect, it is unclear what to expect for the doubly incoherent "…*that roof…". Results revealed an N400 effect, as expected, preceded by a P200 component (instead of the predicted P600 effect). These independent ERP components in the doubly violated condition support the notion that semantic interpretation can be partitioned into grammatical vs. contextual constructs.
ERP evidence for cross-domain prosodic priming from music to speech
Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries, and participants judged whether the prime and target had the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition), and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS compared to non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.
Production of relative clauses in Cantonese-speaking children with and without Developmental Language Disorder
Developmental Language Disorder (DLD) has been explained either as deriving from an abstract representational deficit or as emerging from difficulties in acquiring and coordinating the multiple interacting cues that guide learning. These competing explanations are often difficult to decide between when tested on European languages. This paper reports an experimental study of relative clause (RC) production in Cantonese-speaking children with and without DLD, which enabled us to test multiple developmental predictions derived from one prominent theory: emergentism. Children with DLD (N = 22; aged 6;6-9;7) were compared with age-matched typically-developing peers (N = 23) and language-matched, typically-developing children (N = 21; aged 4;7-7;6) on a sentence repetition task. Results showed that children's production across multiple RC types was influenced by structural frequency, general semantic complexity, and the linear order of constituents, with the DLD group performing worse than their age-matched and language-matched peers. The results are consistent with the emergentist explanation of DLD.
Evidence for planning and motor subtypes of stuttering based on resting state functional connectivity
We tested the hypothesis, generated from the Gradient Order Directions Into Velocities of Articulators (GODIVA) model, that adults who stutter (AWS) may comprise subtypes based on differing connectivity within the cortico-basal ganglia planning or motor loop. Resting state functional connectivity from 91 AWS and 79 controls was measured for all GODIVA model connections. Based on a principal components analysis, two connections accounted for most of the connectivity variability in AWS: left thalamus - left posterior inferior frontal sulcus (planning loop component) and left supplementary motor area - left ventral premotor cortex (motor loop component). A k-means clustering algorithm using the two connections revealed three clusters of AWS. Cluster 1 was significantly different from controls in both connections; Cluster 2 was significantly different in only the planning loop; and Cluster 3 was significantly different in only the motor loop. These findings suggest the presence of planning and motor subtypes of stuttering.
Geometry in the brain optimized for sign language - A unique role of the anterior superior parietal lobule in deaf signers
Geometry has been identified as a cognitive domain where deaf individuals exhibit relative strength, yet the neural mechanisms underlying geometry processing in this population remain poorly understood. This fMRI study aimed to investigate the neural correlates of geometry processing in deaf and hearing individuals. Twenty-two adult deaf signers and 25 hearing non-signers completed a geometry decision task. We found no group differences in performance, while there were some differences in parietal activation. As expected, the posterior superior parietal lobule (SPL) was recruited for both groups. The anterior SPL was significantly more activated in the deaf group, and the inferior parietal lobule was significantly more deactivated in the hearing group. In conclusion, despite similar performance across groups, there were differences in the recruitment of parietal regions. These differences may reflect inherent differences in brain organization due to different early sensory and linguistic experiences.
Original language versus dubbed movies: Effects on our brain and emotions
Converging evidence suggests that emotions are often dulled in one's foreign language. Here, we paired fMRI with a naturalistic viewing paradigm (i.e., original vs. dubbed versions of sad, fun, and neutral movie clips) to investigate the neural correlates of emotion perception as a function of native (L1) and foreign (L2) language context. Watching emotional clips in L1 (vs. L2) was reflected in activations of anterior temporal cortices involved in semantic cognition, arguably indicating a closer association of emotion concepts with the native language. The processing of fun clips in L1 (vs. L2) was reflected in an enhanced response of the right amygdala, suggesting a deeper emotional experience of positively valenced stimuli in the L1. Of interest, the amygdala response to fun clips correlated positively with participants' proficiency in the L2, indicating that higher L2 competence may reduce emotional processing differences across a bilingual's two languages. Our findings are compatible with the view that language provides a context for the construction of emotions.
Brain representations of lexical ambiguity: Disentangling homonymy, polysemy, and their meanings
In human languages, it is a common phenomenon for a single word to have multiple meanings. This study used fMRI to investigate how the brain processes different types of lexical ambiguity, and how it differentiates the meanings of ambiguous words. We focused on homonyms and polysemous words, which differ in the relatedness among their multiple meanings. Participants (N = 35) performed a prime-target semantic relatedness task, in which a specific meaning of an ambiguous word was primed. Results showed that homonyms elicited greater activation in bilateral dorsal prefrontal and posterior parietal cortices than polysemous words, suggesting that these regions may be more engaged in cognitive control when the meanings of ambiguous words are unrelated. Multivariate pattern analysis further revealed that meanings of homonyms with different syntactic categories were represented differently in the frontal and temporal cortices. The findings highlight the importance of semantic relations and grammatical factors in the brain's representation of lexical ambiguities.
The advantage of the music-enabled brain in accommodating lexical tone variabilities
The perception of multiple-speaker speech is challenging. People with music training generally show more robust and faster tone perception. The present study investigated whether music training can help tonal-language speakers accommodate speech variability in lexical tones. Native Cantonese musicians and nonmusicians were asked to identify Cantonese level tones from multiple speakers. The two groups were equally adept at using context cues to normalize lexical tone variability at the behavioral level. However, an advantage of music training was observed at the cortical level. Time-domain ERP analysis suggested that musicians normalized lexical tone variability much earlier than nonmusicians (N1: 70-175 ms vs. P2: 175-280 ms). An exploratory source analysis further revealed that the two groups probably relied on different cortical regions to normalize lexical tones. Left BA 41 showed stronger involvement in musicians in accommodating tone variability, whereas the right auditory cortex (including BA 41, 42, and 22) was activated to a greater extent in nonmusicians.