Salle des voûtes – Campus Saint Charles – 3 place Victor Hugo – 13003 Marseille
Stéphanie Ries
Associate Professor, San Diego State University
Why we should care about the dorso-medial prefrontal cortex in language production
12h – Talk: Stéphanie Ries
13h – Lunch
Confirm your attendance (mandatory) by registering on the ILCB website
Abstract: The dorso-medial prefrontal cortex (dmPFC), including the supplementary and pre-supplementary motor areas as well as the dorsal anterior cingulate cortex, has been a region of interest in studies of cognitive control and other areas of neuroscience for many years. Yet traditional models of language production do not typically include this brain region, and its potential role in language and speech production has only recently been investigated. In this talk, I will review evidence from several research groups (including but not limited to my own) using fMRI, brain stimulation, and scalp and intracranial EEG in “monolinguals” and bilinguals with and without neurological damage, which suggests that the dmPFC may play a bigger role in language production than previously thought. In particular, it appears to be involved in a response selection mechanism that may take place separately from lexical selection, in addition to speech monitoring. The implications of these findings for traditional cognitive models of language production will be discussed.
(1) Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona
(2) Institute of Neurosciences, University of Barcelona
(3) Institut de Recerca Sant Joan de Déu (IRSJD), Esplugues de Llobregat, Barcelona
Neural encoding of speech sounds in neonates and infants: developmental trajectory and modulating factors
Listening to speech in your native language is easy. Recognizing the words spoken in conversation is generally an automatic and smooth everyday process in the first language (L1). Even in noisy or otherwise less-than-ideal conditions, performance is surprisingly robust. But anyone who has attempted to follow a conversation in a second language (L2) knows how demanding this can be, even when you know all the words. Even with reasonably clear speech, identifying individual words in the speech stream is difficult: Charles and Trenkic (2015) reported that international university students missed about 30% of the words they heard during lectures.
For bilinguals, the perceptual processing of L2 speech sounds and the stored representations of the words themselves are influenced by the L1. For example, two words such as “lake” and “rake” may sound the same to Japanese learners of English and may be stored as one word (one homophonous pronunciation, /leik/, for two concepts), because the /r/-/l/ distinction is difficult for them to perceive and represent, as it does not exist in their L1. One consequence is the difficulty of knowing which word to activate upon hearing /leik/; another is the difficulty of learning to pronounce the two words differently.
Yet many questions remain as to how bilinguals store the phonological form of words (their pronunciation) in the corresponding lexical entry in long-term memory, and how these representations change over time. Our lab has obtained evidence for dissociations between perception and lexical storage, suggesting that even after perception of a difficult phonological dimension improves, modifying lexical representations that rely on this dimension remains hard. In other words, even after Japanese learners learn to distinguish /r/ from /l/, their representations of such words may remain unchanged.
In this talk, I will outline research conducted in my lab to understand the phonological structure of the bilingual mental lexicon: how words are stored, and whether (and how) bilinguals are able to update previously inaccurate lexical representations.