SPPAS honored at the CNRS Institute for Humanities and Social Science (INSHS)

The INSHS has just published a nice article about the SPPAS software, which received an award in early February at the Open Science European Conference (see announcement below).

Online article (in French): Le logiciel SPPAS récompensé lors de la remise des Prix science ouverte du logiciel libre de la recherche | INSHS (cnrs.fr)

More information: SPPAS software rewarded at the "Open science prize for free research software" - Laboratoire Parole et Langage (lpl-aix.fr)

How does the brain process visual information associated with speech sounds?

We are pleased to announce the publication of the latest article by Chotiga Pattamadilok and Marc Sato, CNRS researchers at LPL, entitled “How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing” in the journal Brain and Language.

Reference:
Chotiga Pattamadilok, Marc Sato. How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing. Brain and Language, Elsevier, 2022, 225. ⟨10.1016/j.bandl.2021.105058⟩ ⟨hal-03472191v2⟩

Full text on open science database HAL: https://hal.archives-ouvertes.fr/hal-03472191v2

Contact: chotiga.pattamadilok@lpl-aix.fr

Best Paper Award: Giulia Rambelli, PhD student at the LPL

Giulia Rambelli, PhD student at LPL under the supervision of Philippe Blache and Alessandro Lenci (Pisa), has just received the “Best Paper Award” for the scientific article entitled “Comparing Probabilistic, Distributional and Transformer-Based Models on Logical Metonymy Interpretation”, of which she is first author alongside P. Blache, E. Chersoni, A. Lenci and C.-R. Huang.

The award was presented last Friday at the AACL-IJCNLP conference (Suzhou, China), held online December 4-7. Congratulations, Giulia!

Abstract:

In linguistics and cognitive science, logical metonymies are defined as type clashes between an event-selecting verb and an entity-denoting noun (e.g. The editor finished the article), which are typically interpreted by inferring a hidden event (e.g. reading) on the basis of contextual cues. This paper tackles the problem of logical metonymy interpretation, that is, the retrieval of the covert event via computational methods. We compare different types of models, including the probabilistic and the distributional ones previously introduced in the literature on the topic. For the first time, we also test on this task some of the recent Transformer-based models, such as BERT, RoBERTa, XLNet, and GPT-2. Our results show a complex scenario, in which the best Transformer-based models and some traditional distributional models perform very similarly. However, the low performance on some of the testing datasets suggests that logical metonymy is still a challenging phenomenon for computational modeling.

_____________

Update February 2nd, 2021

Giulia Rambelli's distinction highlighted in the AMU Newsletter

In the latest issue of its newsletter, Aix-Marseille University devoted a brief item to Giulia Rambelli, PhD student at LPL, and to the distinction she received at the AACL-IJCNLP conference last December.

Link to the brief: http://url.univ-amu.fr/lettreamu_janvier21_n85 (p. 23)

See article on www.lpl-aix.fr: Best Paper Award : Giulia Rambelli, PhD student at the LPL - Laboratoire Parole et Langage (lpl-aix.fr)

Can we predict what is happening in the brain while we are speaking?

Youssef Hmamouche (LPL post-doc) and Laurent Prévot (AMU professor and director of the LPL) - in collaboration with Magalie Ochs (LIS) and Thierry Chaminade (INS) - have just published an article about the BrainPredict tool, which aims to predict and visualize brain activity during human-human or human-robot conversations. The first experiments were carried out with 24 adult participants engaged in natural conversations lasting approximately 30 minutes each. These promising first results open the way for future studies that integrate, for example, other sociolinguistic parameters or aspects linked to certain language pathologies.

From lip- to script-reading: new ANR contract awarded

The LPL is pleased to announce that the project "From lip- to script-reading: An integrative view of audio-visual associations in language processing" (AVA), submitted by Chotiga Pattamadilok, LPL researcher, was selected in the latest ANR PRC call.

Summary:
Early exposure to speech and to speakers’ articulatory gestures is the basis of language acquisition and a hallmark of audiovisual association learning. Is this initial ability to associate speech sounds with visual inputs a precursor of infants’ later reading ability? Answering this question requires a good understanding of the cognitive/neural bases of both language abilities and of whether they interact within the language system. Studies comparing task performance and the spatio-temporal dynamics of brain activity associated with these abilities will be conducted. At the theoretical level, the outcome should lead to a unified framework explaining how multi-modal inputs jointly contribute to forming a coherent language representation. At the practical level, the new perspective of a link between the developmental trajectories of “lip-reading” and “script-reading” should contribute to language learning and facilitate early detection and remediation of reading deficits.

Partners:
Laboratoire Parole et Langage, Aix-Marseille Univ. (coordinator)
Laboratoire D'Etude des Mécanismes Cognitifs, Univ. Lyon 2
Laboratoire de Psychologie et NeuroCognition, Univ. Grenoble Alpes
SFR Santé Lyon-Est, Univ. Lyon 1