Half a century of speech prosody research in Aix-en-Provence

In its "Bookshop" section, CNRS SHS highlights the latest book by Daniel Hirst, Emeritus Research Director at the LPL, published last June by Springer: "Speech Prosody: From Acoustics to Interpretation".

Co-founder of SProSIG in 2000, an international group for the study of speech prosody under the umbrella of ISCA and IPA, and organizer of the first international congress on prosody in 2002 (which became "Speech Prosody"), the author presents a personal vision of speech prosody in general, and more specifically of the various themes in which he has been interested for several decades. Topics covered include the acoustic description of prosody, its transcription, the relationship between lexical and non-lexical prosody, the nature of prosodic structure, the phonology of prosody, the modeling of speech rhythm and melody, and the central question of the varied and sometimes rather mysterious ways in which prosody contributes to the interpretation of utterances. In his final chapter, Daniel Hirst outlines the directions he believes will be most productive and fruitful for future research into speech prosody.

Reference: Daniel Hirst. Speech Prosody: From Acoustics to Interpretation. Springer Verlag, Berlin, June 2024. https://link.springer.com/book/9783642407710

 

Credits: Springer Verlag

Difficulties in learning specialty vocabulary at school: the case of opaque verbs

We are pleased to announce the publication of the latest article by Núria Gala and Marie-Noëlle Roubaud, two AMU senior lecturers at the LPL, in collaboration with Ludivine Javourey-Drevet of the SCALab (Villeneuve d'Ascq), in the journal “Lexique”:

Reference: Núria Gala, Marie-Noëlle Roubaud, Ludivine Javourey-Drevet. La difficulté d'apprentissage du vocabulaire de spécialité à l'école: le cas des verbes opaques. Lexique, July 2024, 34. ⟨hal-04580153⟩

Full-text article (in French): https://www.peren-revues.fr/lexique/1727

Abstract:
This work aims to shed light on the lexical knowledge that middle-school learners have of the vocabulary found in domain-specific texts. We analyse a series of opaque verbs (polysemous verbs that are frequent in history and science textbooks) and assess the lexical knowledge of 219 children from grades 4 and 5 (aged 9 to 11) in different schools in France. We also describe the strategies learners used to complete the proposed task: writing a sentence with a given verb presented out of context.

 

Image credits: Drazen Zigic on Freepik

 

New article published in “Language, Cognition and Neuroscience”

In an electroencephalographic (EEG) study, two CNRS researchers at the LPL have shown that, for native French speakers, a difference in the accentual pattern of an isolated word has no impact on its recognition. This result suggests that, for these speakers, the acoustic cues related to accentuation are treated as noise and filtered out of the speech signal during word recognition.

Reference: Dufour, Sophie & Michelas, Amandine (2024). Does a mismatch on the accentual pattern of French words affect the magnitude of the repetition priming effect? An ERP investigation. Language, Cognition and Neuroscience, 1-8.

The article on HAL: https://hal.science/hal-04601476
(Currently embargoed)


Credits: Pixabay / GDJ

 

A neuro-cognitive model of comprehension based on prediction and unification

We are pleased to announce the publication of the latest article by Philippe Blache, a CNRS researcher at the LPL, in the journal Frontiers in Human Neuroscience, which presents the language model he also discussed at the conference held at the Collège de France last February:

Reference: Philippe Blache. A neuro-cognitive model of comprehension based on prediction and unification. Frontiers in Human Neuroscience, 2024, 18.  

Full text article: https://doi.org/10.3389/fnhum.2024.1356541

 

Credits: Ph. Blache

Phonological Decoding and Morpho-Orthographic Decomposition: Complementary Routes During Learning to Read

We are pleased to announce the publication of a new article in the Journal of Experimental Child Psychology by Brice Brossette (first author), in collaboration with other researchers:

Reference: Brice Brossette, Élise Lefèvre, Elisabeth Beyersmann, Eddy Cavalli, Jonathan Grainger, Bernard Lété. Phonological Decoding and Morpho-Orthographic Decomposition: Complementary Routes During Learning to Read. Journal of Experimental Child Psychology, 2024, 242. ⟨10.31234/osf.io/qeynj⟩ ⟨hal-04421017v2⟩

Full text article: https://hal.science/hal-04421017v2

Brice Brossette is a postdoctoral fellow at the LPL working on the AMPIRIC-funded DREAM project, which aims to better understand the factors involved in children's exposure to the written word during the first year of primary school, in order to develop a programme of personalised recommendations for learning to read. He coordinates this project together with Stéphanie Ducrot (LPL/CNRS).

 

Photo by Michał Parzuchowski on Unsplash

What role does neurofibromatosis type 1 play in learning to read?

We are pleased to announce the recent publication of the latest article by Marie Vernet (neuropsychologist and former doctoral student at the LPL), Stéphanie Ducrot (CNRS researcher at the LPL) and Yves Chaix (Tonic, CHU Toulouse), which presents a systematic review of visual-processing deficits in neurofibromatosis type 1 and their impact on learning to read.

It follows the study “The determinants of saccade targeting strategy in neurodevelopmental disorders: The influence of suboptimal reading experience”, published in 2023 in the journal Vision Research.

Reference: Marie Vernet, Stéphanie Ducrot, Yves Chaix. A systematic review on visual-processing deficits in Neurofibromatosis type 1: what possible impact on learning to read? Developmental Neuropsychology, 2024.

Publisher's website: https://www.tandfonline.com/doi/full/10.1080/87565641.2024.2326151

Full text article: https://hal.science/hal-04504105

 

Credits: The authors

Music as a therapeutic tool in early childhood

Clément François (CNRS researcher, LPL) and Solène Pichon (nursery nurse, Dijon University Hospital) have just co-authored a chapter in the book "Musique, sciences et santé", part of the Nouveaux chemins de la santé collection published by Dunod and edited by Gérard Mick (neurologist, Voiron Hospital) and Emmanuel Bigand (professor of cognitive psychology, LEAD, Dijon):

Reference: Clément François, Solène Pichon. Music as a therapeutic tool in early childhood. In: E. Bigand & G. Mick (eds.), Musique, sciences et santé, Dunod, Nouveaux chemins de la santé collection, to be published, ISBN 9782100800261. ⟨hal-04367008⟩

Link to the full text (in French): https://amu.hal.science/LPL-AIX/hal-04367008v1

A more detailed version of this text will be included in an Oxford Handbook that Giulia Danielou and Clément François are currently preparing for publication in 2025.

 

Credits: Photo by jasmin82 on Pixabay / Illustration: C. François and S. Pichon

Why do we perceive the same sounds in the same way?

Noël Nguyen, Leonardo Lancia and Lena Huttner from the LPL, in collaboration with researchers from GIPSA-Lab and LPNC, have just published the first Registered Report in Glossa Psycholinguistics, an online Fair Open Access journal:

Reference: Nguyen, N., Lancia, L., Huttner, L., Schwartz, J., & Diard, J. (2024). Listeners' convergence towards an artificial agent in a joint phoneme categorization task. Glossa Psycholinguistics, 3(1). http://dx.doi.org/10.5070/G6011165

Abstract and full text: https://escholarship.org/uc/item/0dg0g4kn

 

Credits: The authors

New publication on multimodal and interactional humor

We are pleased to announce the publication of a new reference work on multimodal and interactional humor, coordinated by Béatrice Priego-Valverde, lecturer at AMU and member of the LPL.

Link to the publisher's website: https://www.degruyter.com/document/doi/10.1515/9783110983128/html

Abstract:
The central question explored in this volume is: How is humor multimodally produced, perceived, responded to, and negotiated? To this end, it offers a panorama of linguistic research on multimodal and interactional humor, based on different theoretical frameworks, corpora, and methodologies. Humor is considered as an activity that is interactionally achieved, regardless of whether the interaction in which it is embedded is face-to-face, computer-mediated, with a human or a robot, oral or written. The aim is to analyze both the linguistic resources of the participants (such as their lexicon, prosody, gestures, gazes, or smiles) and the semiotic resources that social networks and instant messaging platforms offer them (such as memes, gifs, or emojis).

SMAD: LPL software for measuring smile intensity

We are pleased to announce the publication of the article "Automatic tool to annotate smile intensities in conversational face-to-face interactions" by Stéphane Rauzy (CNRS research engineer) and Mary Amoyal (former LPL doctoral student) in the journal Gesture.

It can be downloaded free of charge from the HAL platform: https://hal.science/hal-04194987/

Reference: Stéphane Rauzy, Mary Amoyal. Automatic tool to annotate smile intensities in conversational face-to-face interactions. Gesture, September 2023. ⟨10.1075/gest.22012.rau⟩ ⟨hal-04194987⟩

Abstract:
This study presents an automatic tool for tracing smile intensities throughout a video recording of conversational face-to-face interactions. The output is a sequence of adjusted time intervals labelled according to the Smiling Intensity Scale (Gironzetti, Attardo, and Pickering, 2016), a 5-level scale ranging from neutral facial expression to laughing smile. The underlying statistical model, detailed in the study, is trained on a manually annotated corpus of conversations featuring spontaneous facial expressions. The tool can be used to advantage for annotating smiles in interaction, and the results are twofold. First, the evaluation shows an observed agreement of 68% between manual and automatic annotations. Second, manually correcting the labels and interval boundaries of the automatic output reduces annotation time by a factor of 10 compared with manually annotating smile intensities from scratch. The annotation engine relies on the state-of-the-art OpenFace toolbox to track the face and measure the intensities of the facial Action Units of interest throughout the video. The documentation and scripts of the tool, the SMAD software, can be downloaded from the HMAD open-source project page: https://github.com/srauzy/HMAD.
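For readers curious about the kind of data involved, the short Python sketch below illustrates, under simplifying assumptions, how per-frame Action Unit intensities exported by OpenFace (such as the lip-corner puller AU12 and the cheek raiser AU06) could be mapped onto a rough 5-level smile scale. The file name "face.csv" and the thresholds are assumptions made for this example only; they are not the trained statistical model used by SMAD.

# Minimal illustrative sketch (not the SMAD model): map per-frame OpenFace
# Action Unit intensities to a rough 5-level smile scale.
import pandas as pd

def rough_smile_level(au12, au06):
    """Map lip-corner puller (AU12) and cheek raiser (AU06) intensities
    (OpenFace reports them on a 0-5 scale) to illustrative levels 0-4."""
    if au12 < 0.5:
        return 0                      # level 0: neutral facial expression
    if au12 < 1.5:
        return 1                      # low-intensity smile
    if au12 < 2.5:
        return 2                      # moderate smile
    return 4 if au06 > 2.0 else 3     # strong smile; cheek raising pushes towards laughing smile

df = pd.read_csv("face.csv")          # hypothetical OpenFace output for one speaker's video
df.columns = df.columns.str.strip()   # some OpenFace versions prefix column names with a space
df["smile_level"] = [rough_smile_level(a12, a06)
                     for a12, a06 in zip(df["AU12_r"], df["AU06_r"])]
print(df[["timestamp", "smile_level"]].head())

In the actual tool, the mapping from Action Unit trajectories to Smiling Intensity Scale levels is learned from the manually annotated corpus rather than hard-coded as above.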

 

Photo credits: S. Rauzy & M. Amoyal