24 November 2023

S2S & LSD inter-team seminar

Inter-team seminar

Organised by the S2S and LSD research teams

Friday 24 November 2023, 9:30 to 12:30, conference room B011 at LPL

Programme:

9:30-10:30: Angèle Brunellière (SCALab, Université de Lille): The adaptation of linguistic representations following a social interaction

Abstract: During a social interaction, interlocutors build shared concepts in order to reach mutual understanding more easily. It remains to be understood how shared concepts built during a social interaction are integrated at the representational level, and whether they affect the interlocutors' pre-existing linguistic representations. In this seminar, I will present a set of laboratory studies in which we investigate whether new information shared during a social interaction can lead to an adaptation of phonological and semantic representations after the interaction, using behavioural and electrophysiological measures across a variety of tasks (e.g., discrimination, recognition, reading). Together, this work provides a better understanding of the links between memory and social interaction by characterising the dynamic nature of the linguistic representations stored in long-term memory and by considering the notion of shared concepts at the representational level.

10:30-11:00: coffee break

11:00-11:20: James German (LPL): Implicit social cues influence the interpretation of intonation

Abstract: Individuals may have significant experience with more than one variety of their native language. If the "non-native" varieties are saliently linked to specific social identities, then an individual's production or perception can be biased towards a particular variety through contextual cues that exemplify the associated identity. Here we explore whether the interpretation of intonational patterns by Singapore English (SgE) listeners is influenced by implicit social cues linked to either Singaporean or American cultural identity. Crucially, the relationship between accentuation and pronoun reference is robust in American English (AmE) but weak in SgE. This relationship depends on the fact that in AmE, the distribution of pitch accents is governed by information structure, while SgE prosody is edge-based, i.e., prominence is determined primarily by phrasing. Consequently, the prominence of a pronoun depends on its position in a phrase and is not related to information structure. Together, these facts predict that SgE listeners are less sensitive to prominence in computing pronoun reference than AmE listeners. Nevertheless, most SgE individuals have substantial contact with AmE, suggesting that their system may adapt based on the regional identity of the speaker and other cues. In our study, SgE listeners heard sentences that varied in the accentual status of the object pronoun and then chose from paraphrases reflecting different interpretations of the pronoun. Two groups of participants were exposed before and during the experiment to either a "Singaporean" cue or an "American" cue in the form of cover images from popular television series, presented in such a way that the cues were not linked to the speaker's identity. Our hypothesis was that if implicit socio-contextual cues bias listeners toward specific systems, then listeners should show more sensitivity to accentual status in the American condition than in the Singaporean condition. The results confirmed our hypothesis and point towards an exemplar basis for the representation of the intonation-meaning interface.

11:25-11:45: Emilia Kerr (LPL), Benjamin Morillon (INS), Kristof Strijkers (LPL): Does prediction drive neural alignment in conversation?

Abstract: Recent studies on neural alignment in language (i.e., brain-to-brain synchronisation between interlocutors) have shown that successful communication relies on the synchronisation of the same brain regions in both speakers. However, more explicit mechanistic links between neural alignment and specific linguistic functions of the communicative signal remain to be established. This project relies on the hypothesis that the degree of neural synchronisation between interlocutors depends on the degree of predictive processing: the more predictability between speaker and listener, the more their brain responses will align and display similar oscillatory dynamics (Pickering & Gambi, 2018). We are testing this hypothesis by isolating word semantics (e.g., animal vs. tool word category) in an experimental set-up where (a) prediction effects are tested at the behavioural level, and (b) the brain activity (EEG) of two interlocutors engaging in simple conversations is recorded simultaneously and analysed in an event-related fashion (i.e., at the level of individual words rather than the whole communicative signal). Experiment 1 presents a novel interactional task: participants play an association game in which speaker A names a picture (either an animal or a tool) and speaker B must respond with a semantically related word. Importantly, the predictability of the upcoming object is manipulated: prior to picture naming, participants hear either a highly predictable or a non-predictable sentence up to the final word, which speaker A then completes by naming an object. Data have been collected from 20 dyads, and analyses of speech onsets showed a significant reduction in response latencies in the predictable condition, for both speaker A and speaker B. This demonstrates that semantic predictions influence dyadic interaction.
In Experiment 2 (currently being analysed), participants play the same association game but without predictive priming: speaker A sees a picture and names it, and speaker B replies with an association. The relevant question now is whether we can find meaning-specific brain-to-brain synchronisation between the brain regions associated with tools vs. animals, the defining dimension along which participants perform the task. Importantly, tools and animals have well-known cortical dissociations in the brain (e.g., Grisoni et al., 2021). Moreover, while we have no control over the exact words an interlocutor will produce, we do control the semantic categories of the words, which allows us to explore whether we can find brain-to-brain synchrony for specific word meanings (rather than for 'language' in general). The analysis methods we are currently implementing include Riemannian geometry-based EEG decoding and source localisation. Experiment 3, also a dual-EEG set-up, will test the hypothesis that this co-activation will be more synchronised when semantic predictions have primed the target word.

Grisoni, L., Tomasello, R., & Pulvermüller, F. (2021). Correlated brain indexes of semantic prediction and prediction error: Brain localization and category specificity. Cerebral Cortex, 31(3), 1553-1568.
Pickering, M. J., & Gambi, C. (2018). Predicting while comprehending language: A theory and review. Psychological Bulletin, 144(10), 1002–1044.

11:50-12:10: Auriane Boudin, Roxane Bertrand, Philippe Blache, Stéphane Rauzy (LPL): Feedback in all its forms

Abstract: Long relegated to the status of a secondary element in conversation, feedback is now recognised as playing an essential role in interactions. However, many questions remain. This presentation addresses the concept of feedback and explores its role in interactions under both normal and degraded conditions.

12:15-12:30: closing / general discussion

From 12:30: Bagels offered by the LPL
