ILCB Talk Zoom

Danielle Matthews
(Dpt of Psychology, The University of Sheffield)

From prelinguistic communication to word use in typically hearing and deaf infants

The involvement of left ventral occipitotemporal cortex in speech processing

Shuai Wang, Postdoc ILCB/LPL

The left ventral occipitotemporal cortex, also known as the visual word form area, plays a key role in reading. Recent evidence suggests that it is also involved in several levels of speech processing, from phoneme analysis to sentence listening. Yet little is known about the mechanisms underlying this cross-modal activation or about the communication between this area and the spoken language system. In this talk, we will introduce our ongoing research, which addresses these issues from a network perspective by 1) applying graph theory to fMRI data, and 2) examining the temporal dynamics of communication between areas of the spoken and written language systems using an intracranial EEG protocol.
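The abstract does not describe the pipeline, but the graph-theory step it mentions typically means turning inter-regional fMRI correlations into a network and computing summary measures. A minimal sketch, using synthetic signals and `networkx` (region count, threshold, and measures are illustrative assumptions, not the speaker's method):

```python
# Hypothetical sketch: graph-theoretic analysis of a functional
# connectivity matrix built from (here, synthetic) fMRI time series.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Synthetic "BOLD" data: 200 time points for 10 regions of interest.
signals = rng.standard_normal((200, 10))
conn = np.corrcoef(signals.T)          # 10 x 10 correlation matrix
np.fill_diagonal(conn, 0.0)            # ignore self-connections

# Keep only edges above an (arbitrary) threshold, then build the graph.
adj = (np.abs(conn) > 0.1).astype(int)
G = nx.from_numpy_array(adj)

# Standard measures used to characterize network organization.
degree = dict(G.degree())
clustering = nx.average_clustering(G)
print(len(G.nodes), clustering)
```

In real analyses the correlation matrix would come from preprocessed BOLD signals per brain parcel, and thresholding choices strongly affect the resulting graph metrics.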

ILCB Events

ILCB Round Table: Interpreting machine learning in hearing, communication and language sciences


Interpreting machine learning in hearing, communication and language sciences: why, how, and the current challenges

Program:

12h/12h30 – Etienne Thoret (Postdoc ILCB, PRISM, LIS) – Deciphering the acoustical bases of hearing by interpreting biomimetic deep neural networks (20 min + 10 min)
12h30/13h – Philippe Blache (LPL) – Is language processing incremental? A comparison between Transformer- and RNN-based language models and their ability to model human language processing (20 min + 10 min)
13h/13h30 – Ronan Sicre (LIS) – Visual interpretability of deep neural networks: a brief overview (20 min + 10 min)
13h30/13h45 – Adrià Torrens (University of Ostrava) – Building a grammar for gradient linguistic evaluative expressions: do machine learning, neural networks, and deep learning help? (10 min + 5 min)
13h45/14h30 – Discussion (45 minutes)

More information: Events | ILCB


ILCB Lunchtalk : Virginia B. Penhune

What we learn and when we learn it: the interaction of maturation and experience in music and language

Virginia B Penhune
Department of Psychology, Concordia University
Laboratory for Motor Control and Neural Plasticity

The impact of training or experience is not the same at all points in development. Children who learn to play a musical instrument or speak a second language early in life are often more proficient as adults. In the domain of music, a wealth of anecdotal evidence suggests that early training is important for musical skill; however, there has been little evidence directly demonstrating the impact of age of start. To address this question, work in my laboratory has compared behavior and brain structure in early- (<7) and late-trained (>7) adult and child musicians, showing differences in both. More recently, we have compared early- and late-trained musicians with simultaneous and sequential bilinguals, showing differential effects of age of start in the arcuate fasciculus. I will discuss these findings in the context of our understanding of the interaction between normative development and specific experience, and describe a model of gene-environment interactions that integrates the contribution of age of start.

Website:

ILCB Lunchtalk : Abdellah Fourtassi

Using Data Science to Study Children’s Cognitive Development

Abdellah Fourtassi

Following the seminal work of Piaget, the traditional approach in cognitive development has focused on studying the structure of children's knowledge in controlled situations (e.g., laboratory experiments). While this approach allows for precise inferences about how children behave in certain tasks, it cannot provide an understanding of the social context within which knowledge emerges. It has been known, at least since Vygotsky, that children acquire new skills and concepts with the help of more competent members of society, who scaffold children's learning and allow them to attain skills just beyond their current abilities. Indeed, much of children's abstract knowledge about the world, it has been argued, is mediated through discussions with their parents and caregivers.

In this talk, I explain how recent advances in data science, especially in natural language processing (NLP), allow us to 1) characterize what information parents present to children through language, and how, and 2) make precise predictions about the way children can use this information in controlled designs. NLP can thus create a fruitful synergy between controlled and naturalistic research methods. More generally, I argue that a complete theory of cognitive development requires interdisciplinary research across computer science and psychology.
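As a purely illustrative example of the kind of NLP measure that can be applied to transcripts of caregiver speech (the sample utterances and the choice of measure are assumptions, not the speaker's data or method), here is a type-token ratio, a simple index of lexical diversity:

```python
# Illustrative only: type-token ratio over toy caregiver utterances.
from collections import Counter

utterances = [
    "look at the doggy",
    "the doggy is running",
    "do you see the ball",
]

# Flatten utterances into lowercase word tokens.
tokens = [w for u in utterances for w in u.lower().split()]
counts = Counter(tokens)

# Distinct word types divided by total tokens: 10 types / 13 tokens.
type_token_ratio = len(counts) / len(tokens)
print(round(type_token_ratio, 2))   # → 0.77
```

Real analyses of child-directed speech would use large transcript corpora and richer measures (syntactic complexity, contingency of parent responses), but the computational pattern is the same: turn naturalistic language into quantities that support precise predictions.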

Website:

ILCB Lunchtalk : Robert Zatorre

Musicians at the cocktail party: Neural correlates of bottom-up and top-down mechanisms

Robert Zatorre
Montreal Neurological Institute
McGill University

Segregating sound mixtures makes demands on multiple cognitive and neural mechanisms that musical training may enhance or exploit. In a series of studies, we have documented this music-related enhancement behaviorally in the context of speech in noise, and also in a selective-attention context with competing speech streams. Using functional MRI, we observed that musicians' enhanced speech-in-noise perception was associated with better decoding of speech in auditory areas at high signal-to-noise ratios (SNR), whereas under low-SNR conditions the enhancement was associated with decoding in frontal and motor cortical regions. We interpret this finding as indicating a shift from bottom-up to top-down mechanisms depending on the quality of the input, with musicians being better able to deploy either mechanism as a function of the conditions. We then used MEG to examine the neural representation of competing speech streams via decoding of the neural signature (amplitude envelope) of attended vs. unattended items. The behavioral advantage associated with musical training was related to an enhanced ability to represent both streams in auditory cortex, consistent with musicians' capacity to follow multiple sound streams in music. These cognitive neuroscience approaches help us develop better models to explain why musicians are good at cocktail parties (apart from their reputed drinking abilities).
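The decoding logic described above can be sketched in a toy form: the attended stream is identified as the speech envelope that correlates best with the recorded neural signal. This is a minimal illustration with simulated signals, not the authors' MEG pipeline (signal model, noise level, and seed are all assumptions):

```python
# Toy attention-decoding sketch: pick the speech envelope that best
# matches a simulated neural signal.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Two slow "speech envelopes", modeled here as random walks.
attended = rng.standard_normal(n).cumsum()
unattended = rng.standard_normal(n).cumsum()

# Neural signal: assumed to track the attended envelope, plus noise.
neural = attended + 0.5 * rng.standard_normal(n)

def corr(x, y):
    """Pearson correlation between two 1-D signals."""
    return float(np.corrcoef(x, y)[0, 1])

scores = {"A": corr(neural, attended), "B": corr(neural, unattended)}
decoded = max(scores, key=scores.get)
print(decoded)   # the attended stream ("A") wins by correlation
```

Actual studies reconstruct the envelope from MEG/EEG with learned linear filters before this comparison step; the correlate-and-compare decision at the end is the part sketched here.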

Website:

ILCB Lunchtalk : Suzanne Dikker


Suzanne Dikker
New York University, Department of Psychology & Center for Neural Science

• 12.00 Suzanne Dikker, PhD
• 13.00 Lunch

Confirm attendance (mandatory) by sending an email to

Brains in Harmony: the role of brain-to-brain synchrony in naturalistic social interactions

Neuroscience research has produced tremendous insight into how the human brain supports dynamic social interactions. Still, laboratory-generated findings do not always straightforwardly generalize to real-world environments. To fill this gap, I collaborate with scientists, artists, and educators to take neuroscience out of the laboratory, into schools, museums, and underserved neighborhoods. We consistently find a relationship between brain-to-brain synchrony and successful social interaction. For example, empathy, joint action, and social motivation predict synchrony in dyadic interactions, and synchrony among high schoolers is related to classroom social dynamics and student engagement. Taken together, our multidisciplinary approach may provide a new avenue for investigating social interactions outside the laboratory.