Seminar of Jorina Brysbaert

A corpus analysis of contrastive subjects in different registers of French: from discourse and syntax to prosody

Jorina Brysbaert

(Research Foundation – Flanders & KU Leuven)

Seminar “Multimodal Interaction Through Screens”

Friday June 17th 2022

Seminar

Multimodal Interaction Through Screens (IMPEC)

9.30-4.00, LPL, conference room B011

We will welcome Serge Bouchardon (UTC Compiègne) for a presentation on interactive stories.

Please contact christelle.COMBE@univ-amu.fr for any questions about this hybrid event.


ILCB seminar: Bharath Chandrasekaran

ILCB seminar

Bharath Chandrasekaran

Neural systems underlying auditory categorization 

Abstract: My program of research uses a systems neuroscience approach to study the computations, maturational constraints, and plasticity underlying behaviorally relevant auditory signals like speech. Speech signals are multidimensional, acoustically variable, and temporally ephemeral. A significant computational challenge in speech perception (and more broadly, audition) is categorization, that is, mapping continuous, multidimensional, and variable acoustic signals into discrete, behavioral equivalence classes. Despite the enormity of this computational challenge, native speech perception is rapid and automatic. In contrast, learning novel speech categories is effortful. In this talk, I elucidate mechanisms underlying how novel speech categories are acquired and represented in the mature brain. I will demonstrate that (1) neural representations of novel speech categories can arise in the associative auditory cortex within a few hundred training trials of sound-to-category training, (2) pre-attentive signal reconstruction in the early auditory system is subject to experience-dependent plasticity, and (3) the robustness of structural and functional connectivity within a sound-to-reward cortico-striatal stream relates to learning outcome. Finally, I will discuss ongoing experiments that leverage neurobiology to design optimal behavioral training and targeted neuromodulation interventions.
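The central computational step described above, mapping continuous, variable acoustic cues onto discrete equivalence classes, can be made concrete with a deliberately simple toy classifier. The sketch below uses invented two-dimensional "acoustic" cues and an off-the-shelf logistic regression; it is purely illustrative and is not the neurocomputational approach used in this research.

```python
# Toy sketch of auditory categorization: continuous, variable acoustic
# cues (two hypothetical dimensions, e.g. F0 and duration) are mapped
# onto discrete category labels. Illustrative only; not the model used
# in the research described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two hypothetical speech categories, each a noisy cloud in cue space.
n = 200
cat_a = rng.normal(loc=[120.0, 0.10], scale=[15.0, 0.02], size=(n, 2))  # lower F0, shorter
cat_b = rng.normal(loc=[180.0, 0.16], scale=[15.0, 0.02], size=(n, 2))  # higher F0, longer
X = np.vstack([cat_a, cat_b])
y = np.array([0] * n + [1] * n)

# "Sound-to-category training": learn a decision boundary over the cues.
clf = LogisticRegression().fit(X, y)

# A novel, acoustically variable token is mapped to a discrete class.
token = np.array([[150.0, 0.13]])
print(clf.predict(token), clf.predict_proba(token))
```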

About the speaker: Dr. Chandrasekaran is a Professor and Vice Chair of Research in the Department of Communication Sciences and Disorders at the University of Pittsburgh. He earned his Ph.D. in Integrative Neuroscience from Purdue University in 2008 and completed a postdoctoral fellowship at Northwestern University before joining the University of Texas at Austin in 2010. He received the Regents’ Outstanding Teaching Award in 2014, the Editor’s Award for best research article in the Journal of Speech, Language, and Hearing Research, the Psychonomic Society Early Career Award in 2016, and the Society for the Neurobiology of Language Early Career Award in 2018. Dr. Chandrasekaran has served as Editor-in-Chief of the Journal of Speech, Language, and Hearing Research (Speech). Over the last two decades, his lab has leveraged cutting-edge multimodal neuroimaging methods and computational modeling approaches to develop a sophisticated understanding of how sounds are represented and categorized in the human brain. His approach is highly collaborative and interdisciplinary, integrating the fields of communication sciences and disorders, neuroscience, linguistics, psychology, engineering, and otolaryngology. His laboratory is currently supported by funding from the National Institutes of Health (NIH) and the National Science Foundation (NSF).

Seminar Polina Ukhova

Friday 6 May 2022

Seminar

Polina Ukhova

(ATER AMU-LPL)

2 p.m., LPL, room B011

Dynamiques du parler jeune : le cas d'étudiants russes et français

Abstract:

This corpus-based study examines the language of young Russian and French speakers aged 18 to 23 enrolled in "arts and languages" faculties at the University of Poitiers and the University of Yaroslavl. To date, a great deal of work has been devoted to the French of the banlieues, but very few linguists have examined the non-standard practices specific to young university students.

The working corpus used in this study comprises four sub-corpora, (a) spoken (105 hours of recordings of conversation in natural settings, as well as public conversation from "youth" radio programmes) and (b) written (34,355 words, consisting of occurrences collected from social media), and constitutes a bilingual resource of comparable corpora in French and Russian.

This work offers a multidimensional contrastive analysis of spontaneous language data (spoken and written), falls within the domain of the semantics/syntax/pragmatics interface, and comprises three strands:

First, it provides an analysis of the linguistic means (lexical, morpho-semantic and graphic) used to express emotive-evaluative content, a fundamental feature of spontaneous discourse among young people.

Second, it proposes a typology of lexical creation processes that systematizes the semantic-structural and syntactic-pragmatic features most characteristic of the language of young French and Russian speakers (in particular, the use of discourse markers, their combinations, frequency and substitutability).

Third, the parallels between the spoken and written utterances in the corpus show that digital technology has created new conditions of communication, which in turn has given rise to new communicative strategies. Today, exchanges have a multimodal character that is most evident among young people, native users of "digital reality". The analysis of face-to-face and computer-mediated discourse shows that the "digital" and the "non-digital" are in permanent co-construction and cannot be separated: the linguistic tools mobilized in the two environments are the same, and they cast the traditional spoken/written opposition in a new light.

Moreover, the contrastive dimension of the study shows that, while certain features remain specific to each of the two populations, since young people obviously carry different national cultures, the impact of social media is clearly visible: it creates a global cultural space in which young people from different countries have come to share a collective imaginary and jointly create common codes that enable a new kind of multicultural exchange.

Systus team seminar: About the “Grande Grammaire du Français”

SYSTUS team seminar

About the “Grande Grammaire du Français” (only in French)

Friday 8 April 2022 – 10.30-12.30 – Room B011

10.30-11.30 Anne Abeillé, Université de Paris

« La grande grammaire du français : un livre et un projet hors norme »

Book Presentation: http://www.llf.cnrs.fr/ggf

Online access: grandegrammairedufrançais.com

11.30-12.00 Frédéric Sabio, Marie-Noëlle Roubaud, AMU, LPL (Systus team)

« Sur les rapports entre données linguistiques et description syntaxique : l’exemple des relatives en où en français parlé »

12.00-12.30 Discussion about systems and uses

REaDY team seminar: speech production and perception

Thursday, May 5th, 2022

Open discussion with Prof. F. Guenther around presentations by LPL members

14h-14h40: Noël Nguyen & Kristof Strijkers: Phonetic and semantic convergence in speech communication

14h40-15h10: Serge Pinto: Studying speech motor control from its impairment: the cases of hypo- and hyperkinetic dysarthrias

15h10-15h30: Coffee break

15h30-16h10: Elin Runnqvist, Lydia Dorokhova & Snezana Todorović: Action monitoring from tongue movements to words

16h10-16h40: Anne-Sophie Dubarry: Exploring the variability of neurophysiological data during language processing


Friday, May 6th, 2022, 10h30-12h00

Keynote by Prof. F. Guenther (Director, Speech Neuroscience Lab, Boston University)

Neurocomputational modeling of speech production

Speech production is a highly complex sensorimotor task involving tightly coordinated processing in the frontal, temporal, and parietal lobes of the cerebral cortex. To better understand these processes, our laboratory has designed, experimentally tested, and iteratively refined a neural network model, called the DIVA model, whose components correspond to the brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After the imitation phase, the model can produce learned phonemes and syllables by generating movements of an articulatory synthesizer. An extended version of the model, called GODIVA, addresses the neural circuitry underlying the buffering and sequencing of phonological units in multi-syllabic utterances. Because the model’s components correspond to neural populations and are given precise anatomical locations, activity in the model’s neurons can be compared directly to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during normal and perturbed speech. Furthermore, “damaged” versions of the model are being used to investigate several communication disorders, including stuttering, apraxia of speech, and hypokinetic dysarthria.
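To make the feedforward-plus-feedback logic of this class of models concrete, here is a drastically reduced, one-dimensional sketch; every quantity is an invented stand-in, and the real DIVA model operates over high-dimensional articulatory, auditory, and somatosensory maps tied to specific brain regions.

```python
# Drastically simplified, hypothetical sketch of the feedforward +
# sensory-feedback idea behind DIVA-style models -- not the DIVA model
# itself.

def produce(target, ff_command, fb_gain=0.4, steps=20):
    """Drive a 1-D 'articulator' toward an auditory target.

    target     -- desired auditory outcome (e.g. a formant value, Hz)
    ff_command -- learned feedforward motor command (imperfect here)
    fb_gain    -- weight given to the auditory-error correction
    """
    command = ff_command
    output = command
    for _ in range(steps):
        output = command            # simulated auditory consequence
        error = target - output     # auditory error signal
        command += fb_gain * error  # online feedback correction
    return output

# An imperfect feedforward command is corrected online by auditory
# feedback, loosely analogous to compensation under perturbed speech.
print(produce(target=500.0, ff_command=430.0))  # converges near 500.0
```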


Interactions team seminar: Geert Brône

Geert Brône

(KU Leuven – MIDI Research Group – METLab)

Multimodal strategies in the production and reception of irony in face-to-face interaction


Abstract:

Eye gaze has been described as a powerful instrument in social interaction, serving a multitude of functions and displaying particular patterns in relation to speech, gesture and other semiotic resources. Recently developed data collection techniques, including mobile eye-tracking systems, allow us to generate fine-grained information on the gaze orientation of multiple participants simultaneously while they are engaged in spontaneous face-to-face interactions. In this talk, I will zoom in on one set of studies from our lab that illustrates how mobile eye-tracking data may be used for both qualitative and quantitative explorations into the workings of the ‘gaze machinery’ in (inter)action. More specifically, I will discuss the complex cognitive-pragmatic phenomenon of irony in interaction. The intrinsically layered nature of irony requires a form of negotiation between speakers and their addressees, in which eye-gaze behaviour (along with other nonverbal resources) seems to play a relevant role. A comparison of both speaker and addressee gaze patterns in ironic vs. non-ironic sequences in spontaneous interactions reveals interesting patterns that can be attributed to increased grounding activity between the participants.
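As a purely hypothetical illustration of the kind of quantitative comparison described, contrasting speaker and addressee gaze across ironic and non-ironic sequences, the sketch below aggregates invented fixation data; the column names and values are made up, and real mobile eye-tracking exports would differ in format and granularity.

```python
# Minimal sketch of comparing gaze behaviour across ironic vs.
# non-ironic sequences. All data and column names are invented for
# illustration only.
import pandas as pd

fixations = pd.DataFrame({
    "sequence_type": ["ironic", "ironic", "non_ironic", "non_ironic"],
    "role": ["speaker", "addressee", "speaker", "addressee"],
    "gaze_at_partner_ms": [4200, 5100, 2900, 3300],
    "sequence_duration_ms": [9000, 9000, 9000, 9000],
})

# Proportion of each sequence spent gazing at the interlocutor,
# broken down by condition and participant role.
fixations["gaze_prop"] = (
    fixations["gaze_at_partner_ms"] / fixations["sequence_duration_ms"]
)
print(fixations.groupby(["sequence_type", "role"])["gaze_prop"].mean())
```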

POP team seminar: Marc D. Pell

Seminar

Marc D. Pell

(McGill University, Montréal, Canada)

Prosody as a beacon for understanding a speaker’s stance and intentions

Speech prosody plays an important interpersonal function in human communication, supplying critical details for understanding a speaker’s stance towards their utterance and other people involved in the interaction. For example, listeners use prosody to infer the speaker’s certainty or commitment when making particular speech acts, to decipher whether the speaker holds a positive or negative attitude towards aspects of the communication situation, or simply to recognize that the speaker means to elicit empathy and support by how they vocally express their utterance. In this talk, I will look at some concrete examples of how speech prosody serves as an early “beacon” for understanding the disposition and intended meanings of a speaker during on-line speech processing (based on how listeners interpret irony, requests, and complaints). Behavioural and electrophysiological evidence will be considered.