From 06/11/2015 to 06/12/2015
Auditori Poblenou
Organized by Center for Brain and Cognition
* SEPEX conference
TITLE: Breaking bilingual education rules
For decades, bilingual schools have tried to avoid language-mixing in the context of a given subject, while promoting code-switching across subjects. This "one subject - one language" rule emerged as an educational response to the scientifically ungrounded assumption that language-mixing could be detrimental to concept acquisition and consolidation. However, recent studies suggest that language-mixing could be beneficial for learning, and that the simultaneous use of bilinguals' two languages within the context of a given academic subject does not yield any specific detriment to learners' performance. This issue will be discussed in depth in the current talk by presenting evidence from a series of behavioral studies testing monolingual and bilingual concept acquisition in adults and children with different degrees of proficiency in their two languages.
TITLE: Recent developments on determiner production across languages
Determiners (English examples are “the”, “a/an”, “this”, “my”) have been identified as an interesting test bench for psycholinguistic processing hypotheses. Determiner production, or agreement, can be constrained by a variety of linguistic information (e.g. gender, number, phonology, case, etc.), and these constraints vary from one language to another. I will present recent research in which we have further explored the processes underlying determiner agreement production across languages and in bilinguals.
TITLE: Learning a new language: from first exposure to the proficient speaker
Learning a new language (L2) starts with the first exposure and takes years until a more or less stable state of variable proficiency is reached. I will present a series of experiments covering L2 learning from the first minutes of contact, over the first weeks and months, to processing in proficient speakers. Different aspects of the new language are learned at different stages, and for all aspects there are good and not-so-good learners. Some of the data link learning success to neural correlates. Other data simply show how fast some things are learned (e.g. L2 phonotactics), and how the first language may help learners acquire compatible aspects of the L2 (e.g. common syntactic structures), may make it difficult to learn incompatible aspects (e.g. gender), and may itself be altered to make incompatible aspects compatible (e.g. semantics).
TITLE: Neural bases of language learning and expertise
I will describe our work on language and the brain in healthy adults, where we have shown that individual differences in foreign speech sound learning are accompanied by both functional and structural brain differences. I will also describe results of structural imaging studies in phonetics experts, which provide evidence for experience-dependent structural plasticity, but also for brain structural features that likely pre-date the expertise training. Lastly, I will present functional imaging results on a higher-level, executive multilingual task: simultaneous interpretation. Taken together, our findings suggest that both pre-existing, possibly innate factors and environmental influences (learning) play a role in determining the neural bases of language skills at low to high levels of the language processing hierarchy, with different relative contributions in different brain areas.
TITLE: Adaptive listening: the case of foreign-accented speech
Understanding a second language (L2) speaker is often perceived to be more difficult than understanding a native (L1) speaker. Speech from L2 speakers typically deviates from the standard pronunciation of a target language, i.e., it is foreign-accented, and the deviations can easily obstruct the complex processes of comprehension. Recent research, however, has shown that we can rapidly overcome initial processing difficulties and adapt to foreign-accented speech, both when listening to our native language and when listening to a second language. In this talk I will discuss the underlying mechanisms and boundaries of this adaptation process and present data from a series of experiments on English interdental fricatives. Interdental fricatives are difficult for many L2 speakers of English, and learners often replace English “th” with other consonants, with different substitution preferences across accents. The results suggest an interesting difference in the use of knowledge about segmental variations, with L2 listeners being able to use this knowledge for immediate form activation but not for meaning activation.
TITLE: Self-monitoring in a second language
Speakers sometimes produce speech errors, but fortunately they also have a self-monitoring system that can detect and correct such errors. Theories of self-monitoring differ in whether they assume monitoring is based on language production or language comprehension mechanisms, as well as in the precise nature of the production or comprehension representations involved in this process. Here we ask whether and how monitoring in a second language differs from that in the first language. To address this question, we report an experiment in which speakers describe a network of line drawings in synchrony with a red dot that traverses the network at a particular speed (Oomen & Postma, 2001). This experiment replicated Oomen and Postma’s finding that speakers interrupt and correct themselves more quickly when speech is faster. We generalized this finding to production in a second language and showed that when speech rate was identical for each language (and therefore relatively slow in the first language and relatively fast in the second language), interruption and correction times were very similar for each language. The findings suggest that monitoring in a second language differs quantitatively but not qualitatively from monitoring in the first language.
TITLE: Predicting what and when to attend in language learning
Learning a first or second language involves detecting and extracting the systematic structure of a complex and rapid sequence of sounds. Words and rules are therefore temporal sequences of predictive elements. In accordance with this, a potential learning mechanism appears to be an attentional tuning suited to the temporal characteristics of the information being acquired. This temporal tuning is sensitive to the presence of co-occurring elements. Two different types of attention are engaged: a faster, more automatic capture of attention by auditory cues that are predictive of forthcoming information, and a more controlled attention that leads to more conscious knowledge of the acquired information. In this talk I will review behavioral and electrophysiological evidence supporting the different mechanisms of temporal tuning of attention during language learning and their neuroanatomical substrates.
TITLE: A neural assembly based view on words in the bilingual brain
In this talk I will propose a tentative framework of how words in two languages could be organized in the cerebral cortex according to neural assembly theory. Neural assembly theory, of which Hebbian-based learning models are perhaps the most representative, assumes that cortical cells which fire synchronously in response to a particular mental event wire together, forming a distributed functional unit (assembly) that represents that mental event as a whole. Extrapolating this neurobiological principle to language, the cerebral footprint of a word is thought to be engendered by widely distributed cell assemblies in which the different linguistic constituents (e.g., semantic, lexical, phonological and articulatory properties) are grouped together in action, perception and domain-general (integrating) brain systems that become activated in parallel. First, I will discuss some recent evidence supporting the notion of neural assemblies representing words in language. Next, I will propose how this view may be generalized to bilingualism. In short, I suggest that words in the two languages of a bilingual, parallel to monolingual language processing, are represented as wholes in distributed neural circuits that ignite in parallel, where overlapping distributions between the word assemblies in a bilingual's first and second language denote similarity (generalization) and distinct local topographies between the L1 and L2 assemblies denote specificity (individuation). To conclude, I will discuss how this framework can generate precise and novel predictions about important bilingual topics such as cross-language automaticity and control, the representation and dynamics of cognates, and where in the bilingual system language membership can be situated.
TITLE: Sources of variability in bilingual language control
Proactive and reactive attentional processes have been proposed as candidate mechanisms for language control in bilingual language selection (Morales, Yudes, Gómez-Ariza & Bajo, 2014). In fact, bilinguals’ superiority in some cognitive tasks has been associated with the use of language control mechanisms that are triggered to prevent interference from the unintended language (e.g. Bialystok, Craik & Luk, 2012). In the present investigation, we provide data suggesting that in many situations language control is achieved by means of inhibitory mechanisms (reactive control) that suppress activation of the non-target language. However, using procedures that make it possible to assess both activation and inhibition of the non-intended language (negative priming with interlingual homographs, repeated naming and recall, etc.), we also provide data indicating that these inhibitory effects are not always evident and that their presence depends on the activation of the non-intended language and on the bilinguals’ language experience. Thus, in agreement with recent proposals (Green & Abutalebi, 2013), our data suggest that factors such as L2 fluency, immersion in L2 and training in translation influence the processes involved in language selection. Finally, we also provide evidence that these differences in language control among bilinguals also generalise to the type of executive functions that are enhanced by the bilingual experience.
TITLE: The development of four signatures of adult speech perception
Human language relies on a unique speech code that allows adult speakers to create a virtually infinite number of words by combining consonants and vowels in a lawful manner. This faculty is at the core of language productivity. Adult speech perception is characterized by the representation of abstract categories, which are manifested by at least four observable signatures: (1) categorical perception of consonants; (2) great difficulty in perceiving nonnative speech sounds (native perception); (3) resolution of the lack-of-invariance problem; (4) greater reliance on consonants than on vowels for identifying words. Categorical perception is known to be observable shortly after birth, whereas native perception emerges between the ages of 6 and 12 months. In this talk, I will investigate the development of the last two signatures in infancy. Using eye-tracking and pupillometry techniques, I will show that the consonantal bias emerges along a similar timeline as native perception, whereas the lack-of-invariance problem may be solved earlier. I will discuss how studies of bilingual infants may bring answers to some open questions.