How does our brain process prosody?: Shedding light on the involvement of Heschl’s gyrus

A long-standing international collaboration between James S. German (LPL/AMU) and Bharath Chandrasekaran (University of Pittsburgh) has resulted in a multidisciplinary study just published in Nature Communications:

Reference: G. Nike Gnanateja, Kyle Rupp, Fernando Llanos, Jasmine Hect, James S. German, Tobias Teichert, Taylor J. Abel, Bharath Chandrasekaran. Cortical processing of discrete prosodic patterns in continuous speech. Nature Communications, 2025, 16(1), 1947.

Full-text article: https://www.nature.com/articles/s41467-025-56779-w
Link to HAL database: https://hal.science/hal-04973949v1
EurekAlert news release: https://www.eurekalert.org/news-releases/1075245

Abstract:
For years, scientists thought that all aspects of prosody were essentially processed in the superior temporal gyrus, a brain region known for speech perception. Through the analysis of intracortical recordings in humans and non-human primates (macaques), this study reveals that Heschl's gyrus is a cortical region crucial to prosody perception, processing melodic accents as abstract phonological units. These findings inform neurolinguistic models of prosody processing by expanding the role of Heschl's gyrus in speech processing beyond the low-level representations suggested so far. They also have important theoretical implications for linguistics since, in line with what autosegmental-metrical theory proposes, they show that melodic accents are discrete linguistic categories.

 

Figures 1 and 2 (credits: the authors)

Listening to or watching each other speak

Marc Sato, CNRS researcher at LPL, has just published an article in the journal Cortex on the distinct influences of motor and visual predictive processes on auditory cortical processing during speech production and perception.

 Reference: Marc Sato. Motor and visual influences on auditory neural processing during speaking and listening. Cortex, 2022, 152, 21-35 (https://doi.org/10.1016/j.cortex.2022.03.013)

The full text of the article is available via the DOI link above or through the AMU search interface.

 

Photo credits: Antoine Doinel

How does the brain process visual information associated with speech sounds?

We are pleased to announce the publication of the latest article by Chotiga Pattamadilok and Marc Sato, CNRS researchers at LPL, entitled “How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing” in the journal Brain and Language.

Reference:
Chotiga Pattamadilok, Marc Sato. How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing. Brain and Language, 2022, 225, 105058 (https://doi.org/10.1016/j.bandl.2021.105058).

Full text on open science database HAL: https://hal.archives-ouvertes.fr/hal-03472191v2

Contact: chotiga.pattamadilok@lpl-aix.fr
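
The article's title refers to supra-additive ERP responses. In multisensory research, supra-additivity is usually assessed by checking whether the bimodal (audiovisual) response exceeds the sum of the unimodal (auditory-only and visual-only) responses. The sketch below only illustrates that general criterion on made-up amplitude values; it is not the authors' analysis pipeline, and the participant count, conditions and time window are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical mean ERP amplitudes (µV) in a chosen time window,
# one value per participant and per condition (illustrative data only).
rng = np.random.default_rng(0)
n_participants = 20
audio_only = rng.normal(3.0, 1.0, n_participants)   # A
visual_only = rng.normal(1.0, 0.8, n_participants)  # V
audiovisual = rng.normal(4.8, 1.2, n_participants)  # AV

# Supra-additivity criterion: the AV response exceeds the additive A + V prediction.
additive_prediction = audio_only + visual_only
difference = audiovisual - additive_prediction

# Paired test of AV against the additive (A + V) prediction.
t_stat, p_value = stats.ttest_rel(audiovisual, additive_prediction)
print(f"mean AV - (A + V) = {difference.mean():.2f} µV, "
      f"t({n_participants - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```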

Can we predict what is happening in the brain while we are speaking?

Youssef Hmamouche (LPL post-doc) and Laurent Prévot (AMU professor and director of the LPL), in collaboration with Magalie Ochs (LIS) and Thierry Chaminade (INS), have just published an article about the BrainPredict tool, which aims to predict and visualize brain activity during human-human or human-robot conversations. The first experiments were carried out with 24 adult participants engaging in natural conversations lasting approximately 30 minutes. These first promising results open the way to future studies that integrate, for example, other sociolinguistic parameters or aspects linked to certain language pathologies.
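
Very schematically, the goal of BrainPredict, predicting the activity of a brain region from features of an ongoing conversation, can be framed as a supervised regression problem. The sketch below only illustrates that framing on synthetic data; it is not the published tool, and the feature set, sampling rate and model are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical per-time-frame conversational features (e.g. speech activity,
# prosodic and lexical descriptors) and a brain-region time course to predict.
rng = np.random.default_rng(42)
n_frames, n_features = 1800, 6  # ~30 min of conversation at 1 frame/s (assumption)
X = rng.normal(size=(n_frames, n_features))
true_weights = rng.normal(size=n_features)
y = X @ true_weights + rng.normal(scale=0.5, size=n_frames)  # synthetic brain signal

# Fit a simple linear model on one part of the conversation and
# evaluate how well it predicts the held-out part (temporal order preserved).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"held-out R^2 = {r2_score(y_test, model.predict(X_test)):.2f}")
```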

European project COBRA – Call for 15 PhD projects now open!

As part of the European COBRA project, a call for applications for 15 doctoral contracts is now open. Applications must be submitted before March 31, 2020 via the website http://conversationalbrains.eu.

COBRA (Conversational Brains) is a project carried out within the framework of the European Marie Skłodowska-Curie Innovative Training Networks programme. It brings together 14 partners in 10 countries and territories (France, Great Britain, Italy, Slovakia, Belgium, Germany, Sweden, the Netherlands, Finland, Hong Kong), including 10 academic and 4 industrial partners. COBRA is a continuation of the European MULTI project previously carried out by the LPL and is closely linked to the ILCB Institute. It aims to develop research and advanced training on the relationship between the brain and language, in human-human and human-machine conversational interactions, and across a wide variety of languages. COBRA is coordinated by Noël Nguyen.