Listening to speech in your native language is easy. Recognizing the words spoken in conversation is generally an automatic and smooth everyday process in the first language (L1), and performance is surprisingly robust even in noisy or otherwise less-than-ideal conditions. But anyone who has attempted to follow a conversation in a second language (L2) knows how demanding this can be, even when you know all the words: picking individual words out of the speech stream is difficult even for reasonably clear speech. Charles and Trenkic (2015) reported that international university students missed about 30% of the words they heard during lectures.
For bilinguals, both the perceptual processing of L2 speech sounds and the stored representations of the words themselves are influenced by the L1. For example, two words such as “lake” and “rake” may sound the same and may be stored as one word (one homophonous pronunciation /leik/ for two concepts) for Japanese learners of English, because the /r/-/l/ distinction, absent from their L1, is difficult for them to perceive and represent. One consequence is the difficulty of knowing which word to activate when hearing /leik/; another is the difficulty of learning to pronounce the two words differently.
Yet many questions remain as to how bilinguals store the phonological form of words (their pronunciation) in the corresponding lexical entry in long-term memory, and how these representations change over time. Our lab has obtained evidence for dissociations between perception and lexical storage, suggesting that even after perception of a difficult phonological dimension improves, modifying the lexical representations that rely on it remains hard. In other words, even after a Japanese learner learns to distinguish /r/ from /l/, their stored representations of words such as “lake” and “rake” may remain unchanged.
In this talk I will outline research conducted in my lab to understand the phonological structure of the bilingual mental lexicon, how words are stored, and whether (and how) bilinguals are able to update previously inaccurate lexical representations.
Abstract:
The left ventral occipitotemporal cortex, also known as the visual word form area, plays a key role in reading. Recent evidence suggests that it is also involved in several levels of speech processing, from phoneme analysis to sentence listening. Yet little is known about the mechanisms underlying this cross-modal activation or about the communication between this area and the spoken language system. In this talk, we will introduce our ongoing research, which addresses these issues from a network perspective by 1) applying graph theory to fMRI data, and 2) examining the temporal dynamics of the communication between areas of the spoken and written language systems using an intracranial EEG protocol.
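For readers unfamiliar with the network perspective, the sketch below illustrates the general idea; it is a minimal illustration, not the lab's actual pipeline, and the input file name, the correlation threshold, and the node index for the visual word form area are all assumptions. A functional connectivity graph is built from region-averaged fMRI time series, and graph-theoretic measures then describe how a given area sits within the network.

```python
import numpy as np
import networkx as nx

# Hypothetical input: region-averaged fMRI time series,
# shape (n_timepoints, n_regions); the file name is an assumption.
ts = np.load("roi_timeseries.npy")

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(ts.T)
np.fill_diagonal(fc, 0.0)

# Keep only the strongest connections (arbitrary example threshold).
adj = (fc > 0.3).astype(int)
g = nx.from_numpy_array(adj)

# Graph-theoretic summaries of how one region (e.g., the left ventral
# occipitotemporal cortex; the index below is a placeholder) sits in the network.
vwfa = 42
print("degree:", g.degree[vwfa])
print("clustering:", nx.clustering(g, vwfa))
print("betweenness:", nx.betweenness_centrality(g)[vwfa])
```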
Interpreting machine learning in hearing, communication and language sciences: why, how, and the current challenges
Programme:
12h/12h30 – Etienne Thoret (Post-doc ILCB, PRISM, LIS) – Deciphering the acoustical bases of hearing by interpreting biomimetic deep-neural-networks (20 min + 10 min)
12h30/13h – Philippe Blache (LPL) – Is language processing incremental? A comparison between Transformer and RNN-based language models and their ability to model human language processing (see the surprisal sketch after the programme). (20 min + 10 min)
13h/13h30 – Ronan Sicre (LIS) – Visual interpretability of deep neural networks: a brief overview. (20 min + 10 min)
13h30/13h45 – Adrià Torrens (University of Ostrava) – Building a grammar for gradient linguistic evaluative expressions: Do machine learning, neural networks, and deep learning help? (10 min + 5 min)
13h45/14h30 – Discussion (45 min)
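As a concrete illustration of the Transformer/RNN comparison at stake in Philippe Blache's talk, here is a minimal sketch (my illustration, not the speaker's code) of how word-by-word surprisal is typically extracted from a pretrained causal language model; the model choice (GPT-2) and the example sentence are assumptions. The same function applies unchanged to an RNN language model, and the two surprisal profiles can then be regressed against human processing measures such as reading times.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any autoregressive model works here; GPT-2 is just a convenient example.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisals(sentence: str):
    """Per-token surprisal (-log2 p) under the model, computed incrementally."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # logits[:, i] predicts token i+1, so shift to align predictions with targets.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    bits = -logprobs[torch.arange(targets.size(0)), targets] / torch.log(torch.tensor(2.0))
    return list(zip(tok.convert_ids_to_tokens(targets.tolist()), bits.tolist()))

for token, s in surprisals("The cat sat on the mat."):
    print(f"{token:>10s}  {s:6.2f} bits")
```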
What we learn and when we learn it: the interaction of maturation and experience in music and language
Virginia B Penhune
Department of Psychology, Concordia University
Laboratory for Motor Control and Neural Plasticity
https://www.concordia.ca/artsci/psychology/research/penhune-lab.html
The impact of training or experience is not the same at all points in development. Children who learn to play a musical instrument or speak a second language early in life are often more proficient as adults. In the domain of music, a wealth of anecdotal evidence suggests that early training is important for musical skill; however, there has been little evidence directly demonstrating the impact of age of start. To address this question, work in my laboratory has compared early-trained (<7 years) and late-trained (>7 years) adult and child musicians, showing differences in both behavior and brain structure. More recently, we have compared early- and late-trained musicians with simultaneous and sequential bilinguals, showing differential effects of age of start in the arcuate fasciculus. I will discuss these findings in the context of our understanding of the interaction between normative development and specific experience, and describe a model of gene-environment interactions that integrates the contribution of age of start.