Despite an impressive psycholinguistic effort to explore the way in which two or more languages are represented and controlled, controversy surrounds both issues. We argue that problems of representation and control are intimately connected and we propose that data from functional neuroimaging may advance a resolution. Neuroimaging data, we argue, support the notion that the neural representation of a second language converges with the representation of that language learned as a first language and that language production in bilinguals is a dynamic process involving cortical and subcortical structures that make use of inhibition to resolve lexical competition and to select the intended language.
This article describes a computational model, called DIVA, that provides a quantitative framework for understanding the roles of various brain regions involved in speech acquisition and production. An overview of the DIVA model is first provided, along with descriptions of the computations performed in the different brain regions represented in the model. Particular focus is given to the model's speech sound map, which provides a link between the sensory representation of a speech sound and the motor program for that sound. Neurons in this map share with “mirror neurons” described in monkey ventral premotor cortex the key property of being active during both production and perception of specific motor actions. Because the DIVA model is defined both computationally and anatomically, it is well suited to generating precise predictions concerning speech-related brain activation patterns observed during functional imaging experiments. The DIVA model thus provides a well-defined framework for guiding the interpretation of experimental results related to the putative human speech mirror system.
It is a long-standing debate in the field of speech communication whether speech perception involves auditory or multisensory representations and processing, independent of any procedural knowledge about the production of speech units, or whether it is instead based on a recoding of the sensory input in terms of articulatory gestures, as posited in the Motor Theory of Speech Perception. The discovery of mirror neurons over the last 15 years has strongly renewed interest in motor theories. However, while these neurophysiological data clearly reinforce the plausibility of a role for motor properties in perception, they could, in our view, lead to an incorrect de-emphasis of the role of perceptual shaping, which is crucial in speech communication. The so-called Perception-for-Action-Control Theory (PACT) aims to define a theoretical framework connecting, in a principled way, perceptual shaping and motor procedural knowledge in speech multisensory processing in the human brain. In this paper, the theory is presented in detail. We describe how the theory fits with behavioural and linguistic data, concerning first vowel systems in human languages, and second the perceptual organization of the speech scene. Finally, a neuro-computational framework is presented in connection with recent data on the possible functional role of the motor system in speech perception.
Given evidence of its neuroprotective effects, it is timely to understand the impact of bilingualism on brain structure in healthy aging and in cognitive decline. Plastic changes induced by bilingualism have been reported in young adults in the left inferior parietal lobule (LIPL) and its right counterpart (RIPL) (Mechelli et al., 2004). Moreover, both age of second language (L2) acquisition and L2 proficiency correlated with increased grey matter (GM) in the LIPL/RIPL. However, it is unknown whether such findings replicate in older bilinguals. We examined this question in an aging bilingual population from Hong Kong. Results from our voxel-based morphometry study show that elderly bilinguals, relative to a matched monolingual control group, also have increased GM volumes in the inferior parietal lobules, underscoring the neuroprotective effect of bilingualism. However, unlike in younger adults, age of L2 acquisition did not predict GM volumes. Instead, the LIPL and RIPL appear differentially sensitive to the effects of L2 proficiency and L2 exposure, with the LIPL more sensitive to the former and the RIPL more sensitive to the latter. Our data also suggest that such differences may be more prominent for speakers of languages that are linguistically closer, as in Cantonese-Mandarin bilinguals compared with Cantonese-English bilinguals.
Studies of speech motor control are described that support a theoretical framework in which the fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and supports the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Findings from feedback modification studies indicate that fluently produced sound sequences are encoded as feedforward commands, while feedback control serves to correct mismatches between expected and produced sensory consequences.
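The interplay of feedforward commands and feedback correction described above can be illustrated with a toy simulation (a minimal sketch, not the authors' model; the target value, perturbation size, and gains are invented for the demo):

```python
# Toy sketch of feedforward/feedback speech motor control: the produced
# auditory outcome is a motor command plus a constant perturbation; feedback
# control corrects part of the within-trial error, and the error also updates
# the feedforward command for the next trial.

def simulate_trials(target=500.0, perturb=40.0, n_trials=20,
                    fb_gain=0.5, ff_rate=0.8):
    feedforward = target          # initial feedforward command (arbitrary Hz units)
    errors = []
    for _ in range(n_trials):
        produced = feedforward + perturb          # auditory consequence of the command
        error = target - produced                 # mismatch detected via auditory feedback
        corrected = produced + fb_gain * error    # partial online feedback correction
        errors.append(abs(target - corrected))    # residual error after correction
        feedforward += ff_rate * error            # feedforward update across trials
    return errors

errors = simulate_trials()
# Across trials the feedforward command absorbs the perturbation,
# so the residual error shrinks toward zero.
print(errors[0], errors[-1])
```

The design choice mirrors the text: feedback control handles the immediate mismatch, while repeated exposure folds the correction into the feedforward command.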
A great deal of research has examined behavioral performance changes associated with second language learning. But what changes take place in the brain as learning progresses? How can we identify differences in brain changes that reflect learning success? To answer these questions, we conducted a functional magnetic resonance imaging (fMRI) study to examine the neural activity associated with second language word learning. Participants were 39 native English speakers who had no prior knowledge of Chinese or any other tonal language and were trained on a novel tonal vocabulary over six weeks of training. Functional MRI scans as well as behavioral performance measures were obtained from these learners at two time points (pre- and post-training). We performed region-of-interest (ROI) and connectivity analyses to identify effective connectivity changes associated with success in second language word learning. We compared a learner group with a control group, and also examined the differences between successful and less successful learners within the learner group across the two time points. Our results indicated that (1) after training, learners and non-learners rely on different patterns of brain networks to process the tonal and lexical information of target L2 words; (2) within the learner group, successful learners showed significant differences from less successful learners in language-related regions; and (3) successful learners showed a more coherent and integrated multi-path brain network than less successful learners. These results suggest that second language experience shapes neural changes over short-term training, and that analyses of these neural changes also reflect individual differences in learning success.
There has been a virtual explosion of studies published in cognitive neuroscience, primarily due to the increased accessibility of neuroimaging methods, which has led to different approaches to interpretation. This review seeks to synthesize both developmental approaches and more recent views that consider neuroimaging. The ways in which the Neuronal Recycling, Neural Reuse, and Language as Shaped by the Brain perspectives seek to clarify the brain bases of cognition will be addressed. Neuroconstructivism, an additional explanatory framework that seeks to bind brain and cognition to development, will also be presented. Despite sharing similar goals, these four approaches to understanding how the brain is related to cognition have generally been considered separately. However, we propose that all four perspectives argue for a form of Emergentism, in which combinations of smaller elements can lead to a greater whole. This discussion seeks to provide a synthesis of these approaches that itself leads to the emergence of a theory. We term this new synthesis Neurocomputational Emergentism (or Neuromergentism for short).
Code-switching, the interchangeable use of two languages, is a hallmark of bilingual language processing. Although code-switching occurs most often in spoken communication, studies examining the neural mechanisms of code-switching typically present code-switched materials visually, using reading paradigms. The present study examined intra-sentential code-switching in the auditory modality in Spanish-English bilinguals, using Event-Related Potential (ERP) and Time Frequency Representation (TFR) analyses. Specifically, this study examined whether listening to code-switched sentences is associated with lexical-semantic integration (indexed by an N400 effect) or sentence-level reanalysis (indexed by an LPC effect), and the extent to which the neural patterns associated with listening to code-switched speech are modulated by switching direction (from the dominant language to the weaker language, or vice versa). ERP results showed that listening to a switch from the dominant to the weaker language elicits N400 and LPC effects, while TFR results showed a power decrease in the upper beta frequency band. In contrast, listening to a switch from the weaker to the dominant language elicited only an N400 effect, while TFR results showed a power increase in the alpha frequency band. The findings indicate that the cognitive processes involved in listening to intra-sentential code-switches vary by switching direction. More specifically, we propose that listening to dominant-to-weaker language switches engages lexical processes in addition to sentence-level reanalysis to integrate the weaker language into the sentence frame, whereas listening to weaker-to-dominant switches engages lexical-semantic integration accompanied by inhibition processes (i.e., listeners inhibit their dominant language as the sentence unfolds in their weaker language, and this inhibition must be released upon hearing a switch into the dominant language).
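The band-power changes reported above (beta decrease, alpha increase) rest on estimating spectral power within a frequency band. A minimal sketch of that estimation, using an FFT on simulated signals (the sampling rate, band edges, and amplitudes are arbitrary choices for the demo, not the study's pipeline):

```python
import numpy as np

def band_power(signal, srate, fmin, fmax):
    """Mean FFT power within a frequency band [fmin, fmax] in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / srate)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= fmin) & (freqs <= fmax)
    return spectrum[mask].mean()

srate = 250                                # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / srate)
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.1, t.size)
weak = 0.5 * np.sin(2 * np.pi * 10 * t) + noise    # weak 10 Hz (alpha) rhythm
strong = 2.0 * np.sin(2 * np.pi * 10 * t) + noise  # stronger alpha rhythm

alpha_weak = band_power(weak, srate, 8, 12)
alpha_strong = band_power(strong, srate, 8, 12)
# An "alpha power increase" corresponds to a larger band-power estimate.
print(alpha_strong > alpha_weak)
```

Full TFR analyses additionally resolve power over time (e.g., with wavelets), but the band-limited power comparison is the core quantity being contrasted across switch directions.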
This study tested semantic and grammatical processing of native- and foreign-accented speech. Monolinguals with little experience of foreign-accented speech listened to sentences spoken by foreign-accented and native-accented speakers while their brain activity was recorded using EEG/ERPs. We also gathered behavioral measures of sentence comprehension, language attitudes, and accent perception. Behavioral results showed that listeners were highly accurate in comprehending both native- and foreign-accented sentences. ERP results showed that grammatical and semantic violations elicited different neural responses in native- versus foreign-accented speech. Native-accented speech elicited a frontal negativity (Nref) for grammatical violations and a robust N400 for semantic violations. In foreign-accented speech, however, only semantic (not grammatical) violations elicited an ERP effect, a late negativity. Closer inspection of listeners who did and did not correctly identify the foreign accent revealed that those who identified the accent showed ERP responses to both grammatical and semantic errors: an N400-like effect for grammatical errors and a late negativity for semantic errors. In contrast, listeners who did not correctly identify the foreign accent showed no ERP response to grammatical errors in the foreign-accented condition, but did show a late negativity to semantic errors. These findings provide novel insights into the effects of listener experience and foreign-accented speaker identity on the neural correlates of language processing.
Mild traumatic brain injury (mTBI) represents a condition whose cognitive and behavioral sequelae are often underestimated, even though it can exert a profound impact on patients' everyday lives. The present study aimed to analyze the features of narrative discourse impairment in a group of adults with mTBI. Ten non-aphasic speakers with mTBI (GCS > 13) and 13 neurologically intact participants were recruited for the experiment. Their cognitive, linguistic and narrative skills were thoroughly assessed. The mTBI group exhibited normal phonological, lexical and grammatical skills. However, their narratives were characterized by frequent interruptions of ongoing utterances, derailments and extraneous utterances that at times made their discourse vague and ambiguous. They produced more errors of global coherence [F(1, 21) = 24.242; p < .001; η² = .536] and fewer Lexical Information Units [F(1, 21) = 7.068; p = .015; η² = .252]. Errors of global coherence correlated negatively with non-perseverative errors on the WCST (r = −.755; p < .012). These macrolinguistic problems made their narrative samples less informative than those produced by the control group. Such disturbances may reflect a deficit at the interface between cognitive and linguistic processing rather than a specific linguistic disturbance. These findings suggest that even persons with mild forms of TBI may experience linguistic disturbances that hamper the quality of their everyday lives.
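The group comparisons above report F statistics with an eta-squared effect size. A minimal sketch of how both are computed for a two-group, one-way design (the data below are invented for illustration, not the study's values):

```python
# One-way (two-group) ANOVA with eta-squared effect size,
# computed from between- and within-group sums of squares.

def one_way_anova(group_a, group_b):
    all_vals = group_a + group_b
    grand = sum(all_vals) / len(all_vals)
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    ss_between = (len(group_a) * (mean_a - grand) ** 2
                  + len(group_b) * (mean_b - grand) ** 2)
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    df_between = 1                     # two groups -> 1 df
    df_within = len(all_vals) - 2
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)  # proportion of variance explained
    return f_stat, eta_sq

# Hypothetical global-coherence error counts for patients vs. controls.
patients = [8, 9, 7, 10, 8, 9]
controls = [3, 4, 2, 3, 4, 3]
f_stat, eta_sq = one_way_anova(patients, controls)
print(round(f_stat, 2), round(eta_sq, 3))
```

As a sanity check on the reported values: for a one-way design, η² = F / (F + df_error), and 24.242 / (24.242 + 21) ≈ .536, matching the abstract.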
Phonological awareness is widely recognized as an important component of L2 reading. It is also considered a primarily metalinguistic skill, unaffected by the individual's L2 proficiency or by L1-L2 linguistic distance. The current paper takes a different perspective on L2 phonological awareness. It argues that L2 phonological awareness is affected by L2 language-specific factors, and that these factors may be equally implicated in L2 phonological awareness as the metalinguistic insight that words may be broken down into phonological units – often considered the hallmark of the phonological awareness construct. In support of this claim, we discuss two types of evidence. The first concerns significant differences between phonological awareness in L1 and L2, as well as a significant correlation between L2 oral language proficiency and phonological awareness in L2. The second concerns linguistic distance and the effect on L2 phonological awareness of phonological differences between L1 and L2. Both pieces of evidence are used to promote the argument that it is important to view phonological awareness in L2 as a two-dimensional construct encompassing a metalinguistic component, which is language-independent, and a linguistic component, which is language-specific and reflects phonological representations in L2.
According to the simple view of reading (SVR), reading comprehension is the product of word decoding and listening comprehension. Against this background, we examined the additional role of early lexical quality in the prediction of reading comprehension, either directly or indirectly via word decoding or listening comprehension. Following a longitudinal design, 566 children learning to read Dutch as L1 and 463 children learning to read Dutch as L2 in the Netherlands were tested on indicators of lexical quality (LQ) in kindergarten (speech decoding, morphological knowledge and vocabulary); on word decoding and listening comprehension in first grade; and on reading comprehension in second grade. The results showed L2 learners to consistently lag behind L1 readers on all measures except word decoding. Both word decoding and listening comprehension predicted later reading comprehension for not only L1 but also L2 learners. However, later reading comprehension was also predicted directly by the children's early morphological and vocabulary knowledge, and indirectly by speech decoding and morphological knowledge via word decoding, and by morphological and vocabulary knowledge via listening comprehension. These results show the beginning reading achievement of both L1 and L2 learners to be largely predicted by the quality of their early lexicons.
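The SVR's multiplicative claim can be stated as a one-line formula. The scores below are hypothetical proportions between 0 and 1, purely to illustrate the structural point that a deficit in either component caps comprehension:

```python
# Simple view of reading: reading comprehension (RC) is modeled as the
# product of word decoding (D) and listening comprehension (LC).

def svr_predict(decoding, listening):
    return decoding * listening

balanced = svr_predict(0.8, 0.8)          # both components adequate
weak_listening = svr_predict(0.95, 0.4)   # strong decoding cannot compensate
print(balanced, weak_listening)
```

Because the relation is a product rather than a sum, excellent decoding cannot offset weak listening comprehension; this is what motivates testing whether early lexical quality adds predictive power beyond the two classic components.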
Which types of nerve-cell circuits enable humans to use and understand meaningful signs and words? Philosophers were the first to point out that the arbitrary links between signs and their meanings differ fundamentally between semantic word types. Neuroscience provided evidence that semantic kinds do indeed matter: brain diseases affect specific semantic categories and leave others relatively intact. Patterns of precisely timed brain activation in specific areas of cortex reflect the comprehension of words with specific semantic features. The classic referential link between words and the objects they are used to speak about can be understood as a result of associative learning driven by correlated neuronal activity in perisylvian language areas and sensory areas, especially higher visual but also olfactory, somatosensory and auditory areas. However, the meaning of words used to speak about actions calls for a different account, one grounded in the links between language and the motor system. In fact, after learning, the action system is sparked when such words and utterances are being used or understood, and, correspondingly, functional changes in the brain’s motor system influence the recognition of action-related expressions. These results show that language is “woven into action” at the level of the brain. Further aspects of meaning and their brain basis are also considered, including emotional-affective, abstract and combinatorial aspects of meaning. All of these aspects and the corresponding neuronal circuit types interact in the processing of the meaning of words and sentences in the human mind and brain. ► A neurobiological model is offered that specifies three components of the human brain’s semantic system. ► An action-perception system embodying word and sentence understanding in category-specific sensorimotor circuit activation involving a range of cortical areas. ► An affective-emotional system supporting comprehension by activity in limbic circuits.
► A combinatorial system joining together linguistic representations according to their co-occurrence in sentence strings through combinatorial neuronal assemblies in left perisylvian cortex. ► Brain imaging data, neuropsychological evidence and neurocomputational simulation studies are discussed in light of this model.
Contemporary research papers have highlighted the issue of lesion-aphasia discordance with reference to the classic ‘associationist’ model of Wernicke-Lichtheim. The objective of the present study was to explore the frequency, pattern and evolution of lesion-discordant aphasia following first-ever acute stroke in Bengali-speaking subjects. The Bengali version of the Western Aphasia Battery, a validated scale, was used for language assessment. Lesion localization was performed using magnetic resonance imaging (MRI, 3T) for ischemic stroke (if not contraindicated) and computed tomography (CT) for hemorrhagic stroke. Among 515 screened cases of first-ever acute stroke, 208 presented with aphasia. Language assessment was performed between 7 and 14 days post-stroke in all study subjects and was repeated between 90 and 100 days in patients available for follow-up. Ischemic stroke cases with a contraindication for MRI underwent CT imaging. Discordance between lesion and aphasic phenotype was determined only for right-handed subjects with cortical involvement (isolated or in combination with sub-cortical white matter) in the left hemisphere. Appropriate statistical tests were used to analyze the collected data. Lesion-aphasia discordance was found in 20 out of 134 patients with aphasia who were dextral and had cortical involvement in the left hemisphere (14.92%). The patterns of discordance observed were: posterior lesion with Broca's aphasia (4; 20%); posterior lesion with global aphasia (8; 40%); anterior lesion with global aphasia (4; 20%); and posterior lesion with mixed transcortical aphasia (4; 20%). On univariate analysis, the factors significantly associated with lesion-aphasia discordance were hemorrhagic stroke (p < 0.001), posterior perisylvian location (p = 0.002), and higher education (p = 0.048). After adjusting for all other variables, hemorrhagic stroke was found to have a strong association with lesion-aphasia discordance (p = 0.001, OR = 11.764, 95% CI 2.83–50.0).
Discordant cases were more likely than concordant cases to recover or to change to a milder aphasia type (p = 0.007, OR = 11.393, 95% CI 1.960–66.231), after adjusting for all other variables including initial severity of aphasia (p = 0.006, OR = 8.388, 95% CI 1.816–38.749). Lesion-aphasia discordance following acute stroke is thus not uncommon among Bengali-speaking subjects. In the discordant group, a preponderance of non-fluent aphasia was observed. Discordance occurred more frequently after hemorrhagic stroke, and subjects with lesion-discordant aphasia showed better recovery during the early post-stroke phase.
Native listeners process and understand homophones such as la locution ‘the phrase’ vs. l'allocution ‘the speech’, both pronounced [lalɔkysjɔ̃], without much semantic ambiguity in connected speech. Yet behavioral experiments show that disambiguation is only partial under intra-speaker variability in the absence of semantic context. To investigate the electrophysiological correlates of the perception of non-contrastive subphonemic features in French homophonous sequences, we examined the Mismatch Negativity (MMN) event-related potential using a multi-token oddball paradigm. Stimuli were taken from multiple natural productions of nominal homophonous utterances. In the first experiment we used the first syllables; in the second, the whole utterances. The homophonous sequences elicited an MMN response in both experiments. This suggests that the non-contrastive acoustic features that differentiate homophones, such as pitch and duration, are robust enough, despite intra-speaker variability, to allow listeners to automatically extract the regularities associated with each utterance. This ability of the perceptual system might contribute to the correct segmentation and comprehension of ambiguous utterances.
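The oddball logic behind the MMN can be sketched in a few lines: responses are averaged separately for frequent "standards" and rare "deviants", and the MMN is the deviant-minus-standard difference. All amplitudes and noise levels below are fabricated toy values, not the study's data:

```python
import random

random.seed(1)

TRUE_PEAK = {"standard": -1.0, "deviant": -3.5}  # toy peak amplitudes (microvolts)

def make_sequence(n, p_deviant=0.15):
    """Oddball sequence: rare deviants embedded among frequent standards."""
    return ["deviant" if random.random() < p_deviant else "standard"
            for _ in range(n)]

def record_trial(stim):
    # single-trial peak = true evoked amplitude + measurement noise
    return TRUE_PEAK[stim] + random.gauss(0.0, 0.5)

seq = make_sequence(400)
trials = {"standard": [], "deviant": []}
for stim in seq:
    trials[stim].append(record_trial(stim))

# Averaging suppresses trial noise; the difference isolates the deviance response.
avg = {k: sum(v) / len(v) for k, v in trials.items()}
mmn = avg["deviant"] - avg["standard"]  # deviant-minus-standard difference
print(round(mmn, 2))
```

In a multi-token design like the one described above, each category additionally contains several natural productions, so the averaged difference reflects regularities that survive intra-speaker variability.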
Tracking and updating emotional information in daily language use is essential for successful comprehension and communication. Using an event-related potential technique, we investigated how the updating of emotional information is influenced by changes in topic, using two-pair conversational discourses. The first pair established a topic and a kind of emotional information. The second pair either maintained or changed the topic of the first pair. The description of the topic within the second pair contained a critical word that either maintained or shifted the emotional valence of the first pair. Event-related potentials (ERPs) were recorded for both the topic words and the emotion words. We found that topic-shifted words elicited a larger P2, a larger N400, and a larger late positive component (LPC) than topic-maintained words. More importantly, emotion updating elicited an enhanced sustained N400 in the topic-maintained discourses, whereas it induced a pronounced LPC in the topic-shifted discourses. These results suggest that a topic shift captures additional cognitive resources for its own processing and for new substructure building, which in turn affects the processing of emotion updating. Our findings demonstrate an active use of topic information in guiding emotion updating during natural language comprehension.
Semantic unification during sentence comprehension has been associated with amplitude changes of the N400 in event-related potential (ERP) studies and with activation of the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To examine the brain processes involved in semantic unification more closely, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and we integrated trial-by-trial variation in both N400 and BOLD changes, going beyond the condition-level BOLD differences measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Analyzed separately, the ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated N400 amplitude and cortical activation. The integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with left IFG activation selective to condition-level and bilateral SMG activation selective to trial-level semantic unification load. By employing integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
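The trial-by-trial integration idea amounts to regressing single-trial BOLD estimates on single-trial N400 amplitudes and reading off the coupling slope. A minimal simulated sketch (the effect size, noise level, and trial count are invented; this is not the study's analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 120
# Single-trial N400 amplitudes (arbitrary units) and BOLD estimates in a
# region whose activity covaries with the N400, plus independent noise.
n400 = rng.normal(-2.0, 1.0, n_trials)
bold = 0.6 * n400 + rng.normal(0.0, 0.5, n_trials)

# Ordinary least squares with an intercept: bold ~ b0 + b1 * n400.
# The fitted slope b1 indexes the EEG-informed trial-level coupling.
X = np.column_stack([np.ones(n_trials), n400])
b0, b1 = np.linalg.lstsq(X, bold, rcond=None)[0]
print(round(b1, 2))  # recovered slope, close to the simulated 0.6
```

A condition-level analysis would instead average trials within each cloze-probability condition before comparing, which is exactly the distinction the abstract draws between IFG and SMG sensitivity.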
Atypical brain lateralization patterns in the processing of both human faces (reduced right lateralization) and alphabetic scripts (reduced left lateralization) have been found in autism spectrum disorder (ASD). Yet whether Chinese children with ASD show similarly atypical lateralization in processing faces and language is largely unknown. The aim of the present study was to examine this issue with the N170, an event-related potential (ERP) component sensitive to faces and visual words. Twenty Chinese children with ASD and 18 typically developing children participated in the study. ERPs were recorded while participants were presented with Chinese characters and faces. Results showed a significant right-lateralization of the N170 in control children for both faces and characters, whereas there was no lateralization of the N170 in children with ASD for either faces or characters. The results suggest that Chinese children with ASD exhibit atypical lateralization in processing both faces and written words. The reduced right lateralization for faces in Chinese children with ASD relative to the control group parallels the lateralization deficits reported in Western studies. However, the reduced right lateralization for Chinese characters in ASD differs from the deficit pattern of lateralization reported for alphabetic scripts.
Classifiers are essential elements between numerals and nouns in Mandarin (e.g. “one-CL-elephant”, where CL marks the classifier slot), but whether they serve a semantic or a functional/morphosyntactic role in relation to the accompanying noun has been hotly debated in linguistics. Previous ERP research consistently supported the semantic view with findings of N400 effects; however, the apparent meaning clash in the mismatched classifier-noun pairings used in these studies might have left morphosyntactic processing undetected. We created two violation conditions to explore classifier-noun agreement: incongruent GE-noun combinations (replacing a specific classifier with the meaning-devoid general classifier GE) and outright grammatical errors (omitting a required classifier). With congruent combinations as the baseline, GE-noun combinations elicited a negativity effect strikingly similar to that induced by the grammatical violation condition in both phrases (Experiment 1) and sentences (Experiment 2), indicating the involvement of morphosyntactic processing in classifier-noun agreement. The finding suggests that there is a middle ground in the linguistic debate over the nature of classifier selection in relation to nouns.
There are nearly 6,500 languages in the world, and they vary greatly with regard to both lexical and grammatical semantics. Hence, an early stage of utterance planning involves "thinking for speaking"—i.e., shaping the thoughts to be expressed so they fit the idiosyncratic meanings of the symbolic units that happen to be available in the target language. This paper has three main sections that cover three distinct types of crosslinguistic semantic diversity. Each type is initially elaborated with examples, and then its implications for the neurobiology of speech production are considered. The first type is exemplified by huge crosslinguistic differences in many domains of meaning, including colors, body parts, household containers, events of cutting and breaking, and topological spatial relations. When such differences are viewed from the perspective of contemporary neurocognitive theories which assume that most concrete concepts are subserved by both modal (i.e., sensory, motor, and affective) and transmodal (i.e., integrative) cortical systems, they imply that speakers must access language-specific semantic structures at multiple levels of representation in the brain. The second type involves languages with whole sets of words that systematically encode two or more components of meaning. For instance, in the roughly 53 Athabaskan languages there are no generic verbs like give, carry, or throw; instead, there are entire sets of 9–13 verbs for these kinds of actions, with each set making the same distinctions between the types of objects that are given, carried, or thrown, such as animate objects, round objects, stick-like objects, flat objects, etc. This regular conflation of [action + object] in Athabaskan verb meanings predicts that speakers frequently co-activate both action-related and object-related cortical regions in a functionally integrated fashion.
The third type of crosslinguistic variation involves not only the particular dimensions of experience that speakers are forced to track for grammatical purposes, but also the precise contrasts that they must make along those dimensions, with examples including systems of nominal classification, tense, and evidentiality. It is still not known exactly where or how the meanings of grammatically necessary closed-class morphemes are implemented in the brain, but it is quite clear that whatever the neural substrates of these meanings turn out to be, they are strongly influenced by crosslinguistic differences. In all, by focusing on three separate forms of crosslinguistic semantic diversity, this paper reinforces Levelt's (1989) point that "messages must be tuned to the target language," and it also shows that this point continues to have significant consequences for neurolinguistic research on speech production.