Systus team seminar: About the “Grande Grammaire du Français”

SYSTUS team seminar

About the “Grande Grammaire du Français” (only in French)

Friday 8 April 2022 – 10.30-12.30 – Room B 011

10.30-11.30 Anne Abeillé, Université de Paris

« La grande grammaire du français : un livre et un projet hors norme »

[The Grande Grammaire du Français: an exceptional book and project]

Book Presentation: http://www.llf.cnrs.fr/ggf

Online access: grandegrammairedufrançais.com

11.30-12.00 Frédéric Sabio, Marie-Noëlle Roubaud, AMU, LPL (Systus team)

« Sur les rapports entre données linguistiques et description syntaxique : l’exemple des relatives en où en français parlé »

[On the relationship between linguistic data and syntactic description: the example of où relatives in spoken French]

12.00-12.30 Discussion about systems and uses

REaDY team seminar: speech production and perception

Thursday, May 5th, 2022

Open discussions with Prof. F. Guenther around LPL members’ presentations

14h-14h40: Noël Nguyen & Kristof Strijkers: Phonetic and semantic convergence in speech communication

14h40-15h10: Serge Pinto: Studying speech motor control from its impairment: the cases of hypo- and hyperkinetic dysarthrias

15h10-15h30: Coffee break

15h30-16h10: Elin Runnqvist, Lydia Dorokhova & Snezana Todorović: Action monitoring from tongue movements to words

16h10-16h40: Anne-Sophie Dubarry: Exploring the variability of neurophysiological data during language processing

 

Friday, May 6th, 2022, 10h30-12h00

Keynote by Prof. F. Guenther (Director, Speech Neuroscience Lab, Boston University)

Neurocomputational modeling of speech production

Speech production is a highly complex sensorimotor task involving tightly coordinated processing in the frontal, temporal, and parietal lobes of the cerebral cortex. To better understand these processes, our laboratory has designed, experimentally tested, and iteratively refined a neural network model, called the DIVA model, whose components correspond to the brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After the imitation phase, the model can produce learned phonemes and syllables by generating movements of an articulatory synthesizer. An extended version of the model, called GODIVA, addresses the neural circuitry underlying the buffering and sequencing of phonological units in multi-syllabic utterances. Because the model’s components correspond to neural populations and are given precise anatomical locations, activity in the model’s neurons can be compared directly to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during normal and perturbed speech. Furthermore, “damaged” versions of the model are being used to investigate several communication disorders, including stuttering, apraxia of speech, and hypokinetic dysarthria.
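
Purely as an illustration (not part of the announcement, and not the actual DIVA implementation), the general idea described above can be sketched as a toy controller: a “babbling” phase that learns an inverse mapping from auditory to articulatory coordinates, followed by an “imitation” phase that uses auditory feedback to home in on a heard target. All names, dimensions, and the linear “vocal tract” below are assumptions made only for this minimal Python sketch.

import numpy as np

# Toy sketch only: a linear "vocal tract" and a learned inverse mapping,
# loosely in the spirit of the babbling/imitation phases described above.
rng = np.random.default_rng(0)

N_ARTIC = 8   # articulatory dimensions (assumed toy value)
N_AUD = 3     # auditory dimensions, e.g. three formants (assumed toy value)

PLANT = rng.normal(size=(N_AUD, N_ARTIC))   # unknown articulatory-to-auditory map

def speak(articulators):
    """Auditory consequence of an articulatory configuration (toy linear plant)."""
    return PLANT @ articulators

# "Babbling": random gestures paired with their auditory consequences are used
# to estimate an inverse mapping (auditory space -> articulatory space).
gestures = rng.normal(size=(500, N_ARTIC))
sounds = gestures @ PLANT.T
inverse_map, *_ = np.linalg.lstsq(sounds, gestures, rcond=None)

# "Imitation": iteratively refine a feedforward command for a heard auditory
# target, using the auditory error as a feedback control signal.
target = rng.normal(size=N_AUD)
command = np.zeros(N_ARTIC)
for _ in range(20):
    auditory_error = target - speak(command)
    command += 0.5 * (auditory_error @ inverse_map)

print("residual auditory error:", np.linalg.norm(target - speak(command)))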

 

Interactions team seminar: Geert Brône

Geert Brône

(KU Leuven – MIDI Research Group – METLab)

Multimodal strategies in the production and reception of irony in face-to-face interaction

 

Abstract:

Eye gaze has been described as a powerful instrument in social interaction, serving a multitude of functions and displaying particular patterns in relation to speech, gesture and other semiotic resources. Recently developed data collection techniques, including mobile eye-tracking systems, allow us to generate fine-grained information on the gaze orientation of multiple participants simultaneously while they are engaged in spontaneous face-to-face interactions. In this talk, I will zoom in on one set of studies from our lab that illustrates how mobile eye-tracking data may be used for both qualitative and quantitative explorations into the workings of the ‘gaze machinery’ in (inter)action. More specifically, I will discuss the complex cognitive-pragmatic phenomenon of irony in interaction. The intrinsically layered nature of irony requires a form of negotiation between speakers and their addressees, in which eye gaze behaviour (along with other nonverbal resources) seems to play a relevant role. A comparison of speaker and addressee gaze patterns in ironic vs. non-ironic sequences in spontaneous interactions reveals interesting patterns that can be attributed to increased grounding activity between the participants.

POP team seminar: Marc D. Pell

Seminar

Marc D. Pell

(McGill University, Montréal, Canada)

Prosody as a beacon for understanding a speaker’s stance and intentions

Speech prosody plays an important interpersonal function in human communication, supplying critical details for understanding a speaker’s stance towards their utterance and other people involved in the interaction. For example, listeners use prosody to infer the speaker’s certainty or commitment when making particular speech acts, to decipher whether the speaker holds a positive or negative attitude towards aspects of the communication situation, or simply to recognize that the speaker means to elicit empathy and support by how they vocally express their utterance. In this talk, I will look at some concrete examples of how speech prosody serves as an early “beacon” for understanding the disposition and intended meanings of a speaker during on-line speech processing (based on how listeners interpret irony, requests, and complaints). Behavioural and electrophysiological evidence will be considered.

 

Seminar Marco Cappellini

Seminar

Marco Cappellini  (LPL-AMU)

Thursday 16 December, 5.30-6.30 p.m., online via Zoom

Ce que l'autonomie peut apporter à la citoyenneté numérique. Une proposition de matrice de séquences pédagogiques (in French)

[What autonomy can bring to digital citizenship. A proposal for a matrix of educational sequences]

Contact: Catherine David

Language contact & Field linguistics seminar of the Master in Language Science (AMU)

Faire le dictionnaire d’une langue minorisée: défis linguistiques, lexicographiques et sociolinguistiques

[Making the dictionary of a minoritized language: linguistic, lexicographic and sociolinguistic challenges]

Seminar of the Master in Language Science (AMU), specialization Language contact & Field linguistics (LCT)

Friday 10 December 2021, from 9.00 a.m. to 6 p.m. online via Zoom

Program (in French): https://www.lpl-aix.fr/wp-content/uploads/2021/12/Programme-Séminaire-2021.pdf

Webpage of LCT (in French): https://thelitex.hypotheses.org/lct

Seminar Alexander Martin

Seminar

Alexander Martin

(Laboratoire de Linguistique Formelle, Université de Paris)

Wednesday 8 December 2021

3.00-4.00 p.m. LPL, conference room B011

Studying constraints on language change: a synchronic approach

 

Abstract:

Languages evolve under a wide range of different pressures, but biases in the ways languages are learned and transmitted can explain why certain patterns are so recurrent cross-linguistically. In this talk, I will present experimental evidence aimed at shedding light on the underpinnings of a couple of cross-linguistic regularities. Specifically, I will review a project on learning biases favouring phonetically motivated (aka “natural”) rules, focussing on the typologically frequent rule of vowel harmony compared to the formally similar but unattested rule of vowel disharmony (Martin & Peperkamp, 2020; Martin & White, 2021). I will then discuss the so-called suffixing preference and show evidence that typological regularities may not always find their basis in cognitive constraints (Martin & Culbertson, 2020). I will then turn to a project looking at the link between individual-level perception and production in language contact by considering the emergence of the phoneme /g/ in European Dutch (Martin et al., in revision), and propose how the methodology used in that project can be expanded to study the time course of contact-induced change. I will briefly sum up by proposing a dual approach to the study of the mechanisms underlying language change that considers biases situated both in the individual and in interaction.