A corpus analysis of contrastive subjects in different registers of French: from discourse and syntax to prosody
(Research Foundation – Flanders & KU Leuven)
Friday, June 17th, 2022
9.30-4.00, LPL, conference room B011
We will welcome Serge Bouchardon (UTC Compiègne) for a presentation on "Interactive Stories".
Please contact christelle.COMBE@univ-amu.fr for any questions about this hybrid event.
Link to the program
(Pittsburgh Hearing Research Center, University of Pittsburgh)
Abstract: My program of research uses a systems neuroscience approach to study the computations, maturational constraints, and plasticity underlying behaviorally relevant auditory signals like speech. Speech signals are multidimensional, acoustically variable, and temporally ephemeral. A significant computational challenge in speech perception (and more broadly, audition) is categorization, that is, mapping continuous, multidimensional, and variable acoustic signals into discrete, behavioral equivalence classes. Despite the enormity of this computational challenge, native speech perception is rapid and automatic. In contrast, learning novel speech categories is effortful. In this talk, I elucidate mechanisms underlying how novel speech categories are acquired and represented in the mature brain. I will demonstrate that (1) neural representations of novel speech categories can arise in the associative auditory cortex within a few hundred training trials of sound-to-category training, (2) pre-attentive signal reconstruction in the early auditory system is subject to experience-dependent plasticity, and (3) the robustness of structural and functional connectivity within a sound-to-reward cortico-striatal stream relates to learning outcome. Finally, I will discuss ongoing experiments that leverage neurobiology to design optimal behavioral training and targeted neuromodulation interventions.
About the speaker: Dr. Chandrasekaran is a Professor and Vice Chair of Research in the Department of Communication Sciences and Disorders at the University of Pittsburgh. He earned his Ph.D. in Integrative Neuroscience from Purdue University in 2008 and completed a postdoctoral fellowship at Northwestern University before joining the University of Texas at Austin in 2010. He is the recipient of the Regents’ Outstanding Teaching Award in 2014, the Editor’s Award for best research article in the Journal of Speech, Language, and Hearing Research, the Psychonomics Early Career Award in 2016, and the Society for the Neurobiology of Language Early Career Award in 2018. Dr. Chandrasekaran has served as the Editor-in-Chief of the Journal of Speech, Language, and Hearing Research (Speech). Over the last two decades, his lab has leveraged cutting-edge multimodal neuroimaging methods and computational modeling approaches to develop a sophisticated understanding of how sounds are represented and categorized in the human brain. His approach is highly collaborative and interdisciplinary, integrating across the fields of communication sciences and disorders, neuroscience, linguistics, psychology, engineering, and otolaryngology. His laboratory is currently supported by funding from the National Institutes of Health (NIH) and the National Science Foundation (NSF).
San Diego State University
Location: Conference room B011 at the LPL, 5 avenue Pasteur, Aix-en-Provence
Date: 24/05 at 11 a.m. (Zoom link will be available soon)
Web page: https://www.zedsehyr.com/publications.html
Monday, February 28, 2022
ILCB / CoCoDev seminar
(Donders Institute for Brain, Cognition, and Behavior / Max Planck Institute for Psycholinguistics)
Extending the language architecture: Evidence from multimodal language use, processing and acquisition
Online from 12 p.m. (noon)
Infos & Zoom link: https://cocodev1.gitlab.io/website/seminars/
Friday, April 8, 2022 – 10.30-12.30 – Room B011
« La grande grammaire du français »: an extraordinary book and project
Book Presentation: http://www.llf.cnrs.fr/ggf
Online access: grandegrammairedufrancais.com
"On the relationship between linguistic data and syntactic description: the example of où relative clauses in spoken French"
Thursday, May 5th, 2022
Open discussions with Prof. F. Guenther on presentations by LPL members
14h00-14h40: Noël Nguyen & Kristof Strijkers: Phonetic and semantic convergence in speech communication
14h40-15h10: Serge Pinto: Studying speech motor control from its impairment: the cases of hypo- and hyperkinetic dysarthrias
15h10-15h30: Coffee break
15h30-16h10: Elin Runnqvist, Lydia Dorokhova & Snezana Todorović: Action monitoring from tongue movements to words
16h10-16h40: Anne-Sophie Dubarry: Exploring the variability of neurophysiological data during language processing
Friday, May 6th, 2022, 10h30-12h00
Keynote by Prof. F. Guenther (Director, Speech Neuroscience Lab, Boston University)
Neurocomputational modeling of speech production
Speech production is a highly complex sensorimotor task involving tightly coordinated processing in the frontal, temporal, and parietal lobes of the cerebral cortex. To better understand these processes, our laboratory has designed, experimentally tested, and iteratively refined a neural network model, called the DIVA model, whose components correspond to the brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After the imitation phase, the model can produce learned phonemes and syllables by generating movements of an articulatory synthesizer. An extended version of the model, called GODIVA, addresses the neural circuitry underlying the buffering and sequencing of phonological units in multi-syllabic utterances. Because the model’s components correspond to neural populations and are given precise anatomical locations, activity in the model’s neurons can be compared directly to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on the acquisition of speaking skills, articulatory kinematics, and brain activity during normal and perturbed speech. Furthermore, “damaged” versions of the model are being used to investigate several communication disorders, including stuttering, apraxia of speech, and hypokinetic dysarthria.
(KU Leuven – MIDI Research Group – METLab)
Eye gaze has been described as a powerful instrument in social interaction, serving a multitude of functions and displaying particular patterns in relation to speech, gesture and other semiotic resources. Recently developed data collection techniques, including mobile eye-tracking systems, allow us to generate fine-grained information on the gaze orientation of multiple participants simultaneously while they are engaged in spontaneous face-to-face interactions. In this talk, I will zoom in on one set of studies from our lab that provides an illustration of how mobile eye-tracking data may be used for both qualitative and quantitative explorations into the working of the ‘gaze machinery’ in (inter)action. More specifically, I will discuss the complex cognitive-pragmatic phenomenon of irony in interaction. The intrinsic layered nature of irony requires a form of negotiation between speakers and their addressees, in which eye gaze behaviour (along with other nonverbal resources) seems to play a relevant role. A comparison of both speaker and addressee gaze patterns in ironic vs. non-ironic sequences in spontaneous interactions reveals interesting patterns that can be attributed to an increased grounding activity between the participants.
(McGill University, Montréal, Canada)
Speech prosody plays an important interpersonal function in human communication, supplying critical details for understanding a speaker’s stance towards their utterance and other people involved in the interaction. For example, listeners use prosody to infer the speaker’s certainty or commitment when making particular speech acts, to decipher whether the speaker holds a positive or negative attitude towards aspects of the communication situation, or simply to recognize that the speaker means to elicit empathy and support by how they vocally express their utterance. In this talk, I will look at some concrete examples of how speech prosody serves as an early “beacon” for understanding the disposition and intended meanings of a speaker during on-line speech processing (based on how listeners interpret irony, requests, and complaints). Behavioural and electrophysiological evidence will be considered.