Listening to speech in your native language is easy. Recognizing the words spoken in conversation is generally an automatic and smooth everyday process in the first language (L1), and performance remains surprisingly robust even in noisy or otherwise less-than-ideal conditions. But anyone who has attempted to follow a conversation in a second language (L2) knows how demanding this can be, even when they know all the words. Even with reasonably clear speech, identifying individual words in the speech stream is difficult: Charles and Trenkic (2015) reported that international university students missed about 30% of the words they heard during lectures.
For bilinguals, the perceptual processing of L2 speech sounds and the stored representations of the words themselves are influenced by the L1. For example, two words such as "lake" and "rake" may sound the same and may be stored as one word (i.e., one homophonous pronunciation /leik/ for two concepts) for Japanese learners of English, because the /r/-/l/ distinction, absent from their L1, is difficult for them to perceive and represent. One consequence is the difficulty of knowing which word to activate when hearing /leik/, but also the difficulty of learning to pronounce the two words differently.
Yet many questions remain as to how bilinguals store the phonological form of words (their pronunciation) in the corresponding lexical entry in long-term memory, and how these representations change over time. Our lab has obtained evidence for dissociations between perception and lexical storage, suggesting that even after perception of a difficult phonological dimension improves, modifying lexical representations that rely on this dimension remains hard. In other words, even after Japanese learners learn to distinguish /r/ from /l/, their stored representations of words such as "lake" and "rake" may remain unchanged.
In this talk I will outline research conducted in my lab to understand the phonological structure of the bilingual mental lexicon, how words are stored, and whether (and how) bilinguals are able to update previously inaccurate lexical representations.
ILCB Lunch Talk, April 26, 2019
Station Marine d’Endoume – 13007 Marseille
Véronique Izard: Integrative Neuroscience and Cognition Center, CNRS & Université Paris Descartes
Antje S. Meyer: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
• 11h Véronique Izard: In search of the cognitive foundations of Euclidean geometry
• 12h Antje S. Meyer: Towards processing theories of conversation
• 13h Lunch
Confirm attendance (mandatory) by sending an email to lunchtalks@ilcb.fr
In search of the cognitive foundations of Euclidean geometry
Euclidean geometry has historically been regarded as the most “natural” geometry. Taking inspiration from the flourishing field of numerical cognition, in recent years I have been looking for the cognitive foundations of geometry: Do children, infants, and people without formal education in geometry have access to intuitive concepts that bear some of the content of Euclidean concepts? Results have been mixed. In particular, we found that angle, a central tenet of Euclidean geometry, is not intuitive for children. These results call into question the status of Euclidean geometry as a natural geometry.
Towards processing theories of conversation
Most experimental research into spoken language has focused either on speaking or on listening. However, these processes should also be studied together, not only because they naturally co-occur in conversation and likely affect each other, but also because an integrated research approach can lead to novel insights into the architecture of the cognitive system supporting language use. I will provide an overview of a research program on speaking and listening in dyadic contexts. The starting point is the model of turn-taking in conversation proposed by Levinson and Torreira (2015). Though based exclusively on observational data, the model makes strong processing predictions. A key claim is that speakers begin to plan their utterances as early as possible during their interlocutor’s turn, in order to be prepared to respond quickly. Experimental evidence showed that speakers indeed begin to plan their utterances before the end of the preceding turn but, contrary to this prediction, not necessarily as early as possible. Rather than following a fixed rule (“plan as early as possible”), they appear to be quite flexible in their utterance planning. Current work aims to uncover the factors that limit this flexibility. It appears that, in addition to the social and pragmatic factors that define the speaker’s processing goals, capacity limitations arising in different components of the cognitive system play an important role. I will end by discussing how speakers might achieve smooth turn-taking without intensive linguistic dual-tasking.