Seminar: Hyong Sil Cho

Hyong Sil CHO (Speech into Language and Data) will give a seminar on 29 November, from 12:00 to 14:00 in room B011, on the theme "Linguistic knowledge in voice assistant technology", followed by a discussion of career prospects in industry.

This seminar is open to all and is aimed in particular at Master's students (SCL, MASCO, ...), PhD students and post-doctoral researchers (LPL/AMU, COBRA project, ...).

Abstract

Today, most electronic devices are equipped with a voice assistant. Smartphones understand our speech, and cars speak to us in charming human voices. Such technology generally works through a sequence of processing steps.

In order to make the whole process successful, the contribution of highly qualified linguists is essential. For example, expertise in the phonetics and phonology of a particular language is crucial for building a high-quality acoustic model of that language. In the same way, it would be extremely difficult to create any decent semantic model of a language without solid knowledge of its morpho-syntax and semantics.
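
As a rough illustration of where this kind of expertise plugs in, here is a minimal sketch of a generic voice-assistant pipeline (speech recognition, language understanding, response, speech synthesis). All stage names, function names and toy "models" below are illustrative assumptions, not the speaker's actual system.

```python
# Minimal sketch of a generic voice-assistant pipeline (illustrative only):
# speech recognition -> language understanding -> response -> speech synthesis.
# All names and the toy "models" below are assumptions, not the speaker's system.

from dataclasses import dataclass


@dataclass
class Turn:
    audio_in: str      # stand-in for the raw audio signal
    text: str = ""     # output of speech recognition
    intent: str = ""   # output of language understanding
    reply: str = ""    # text to be spoken back


def recognize(turn: Turn, lexicon: dict) -> Turn:
    """Acoustic model + pronunciation lexicon: where expertise in the
    phonetics and phonology of the target language is needed."""
    turn.text = lexicon.get(turn.audio_in, "<unrecognized>")
    return turn


def understand(turn: Turn) -> Turn:
    """Toy semantic model: where morpho-syntactic and semantic knowledge
    of the language comes in."""
    turn.intent = "weather_query" if "weather" in turn.text else "fallback"
    turn.reply = "It is sunny." if turn.intent == "weather_query" else "Sorry?"
    return turn


def synthesize(turn: Turn) -> str:
    """Text-to-speech: return (a stand-in for) the audio of the reply."""
    return f"[audio] {turn.reply}"


# One simulated exchange with the assistant.
lexicon = {"wots-thuh-wether": "what's the weather today"}
turn = understand(recognize(Turn(audio_in="wots-thuh-wether"), lexicon))
print(turn.text, "->", turn.intent, "->", synthesize(turn))
```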

In this lecture, the basic mechanism of voice assistant technology and the contribution of linguists to that technology will be illustrated with appropriate examples. Beyond linguistic knowledge, we will also discuss the various other qualities of a competent language expert in the field of speech technology.

 

Short Bio

Hyong Sil CHO holds a PhD in Linguistics, with a specialization in phonetics completed under the supervision of Daniel Hirst at the LPL, and an MBA in big data and business analytics. She is a Σ!Eureka Independent Technical Expert and a member of the European AI Alliance.

Born in Korea, she studied in France and has worked in Belgium, Portugal, China and Germany. After this nomadic life, she now lives in the Netherlands but still works with teams in various countries.

Since 1999, she has contributed to a number of projects in language and speech technology, including electronic dictionary editing and various R&D projects in speech synthesis and automatic speech recognition. In 2004 she began working in TTS technology at Scansoft Belgium. In 2008 she joined the Microsoft Language Development Centre as a language expert for French and Korean, where she served as Language Experts Team Project Manager for lexicon projects from September 2011 to September 2016.

In October 2016, she founded a company named SiLnD (Speech into Language and Data). Since then, SiLnD has been working with world-leading companies in information technology, the automotive industry and artificial intelligence.

CoCoDev Seminar: Philip Huebner

CoCoDev / ILCB Seminar

Philip Huebner

(University of Illinois, Urbana-Champaign)

Friday 12 November 2021

16:00, online via Zoom

BabyBERTa: Learning More Grammar With Small-Scale Child-Directed Language

Information & registration (required)

Seminar: Aurélie Pistono

LPL Seminar

15 November at 14:00, in room B011

Aurélie Pistono

(Dept. of Experimental Psychology, Ghent University)

From young participants to pathological aging: what do disfluencies reflect?

Systus Team Seminar

Theme: "Around the periphery"

The Systus team seminar will take place this Friday, 11 June, in "hybrid" mode: in person in room B011 for those who wish, and remotely for the others. If you would like to attend in person (LPL members), within the current capacity limit of 26 people, please contact Frédéric Sabio, co-head of the team: frederic.sabio@univ-amu.fr.

14:00-14:50 – J. Deulofeu – Invited speaker (AMU, LIF laboratory)

The status of "peripheral" and the limits of grammatical organization in French

14:50-15:30 – D. Lewis & S. Herment, L. Leonarduzzi, C. Portes, L. Prévot, F. Sabio, G. Turcsan (Systus team)

Left and right peripheries in French and English

15:30-16:10 – C. Aslanov (Systus team)

Tocharian A (Agni), periphery of the periphery of the Indo-European languages

16:10-16:50 – M. Gasquet-Cyrus (Systus team)

"Peripheral" languages and varieties: theoretical and ideological questions

Contact: Sophie Herment / Frédéric Sabio

Systus team page

POP Team Seminar

LPL Seminar - POP team
Monday 15 March 2021 at 15:30, online

15:30 – 16:30: Amelia Pettirossi (Laboratoire de Phonétique et Phonologie, Paris) – Dysphonia among female primary school teachers: perception and representations


16:30 – 17:00: Alexia Mattei (LPL) and Annabelle Capel (Hôpital La Conception, Marseille) – Voice professionals: an adapted vocal assessment. The example of teachers

Interactions Team Seminar

LPL Seminar - Interactions team

Friday 19 February 2021, 10:30 – 12:00, online

10:30 – 11:30: Simona Pekarek Doehler, Université de Neuchâtel

Routinization of a grammar-for-interaction: the developmental trajectories of 'je sais pas' and 'comment on dit' in a second language

11:30 – 12:00: Marco Cappellini, LPL

Alignment of scaffolding strategies in a teletandem

REaDY Team Seminar (Représentations et Dynamiques)

10:30-11:10: Chotiga Pattamadilok

From lip- to script-reading: An integrative view of Audio-Visual Associations in language processing (AVA)

During the talk, I will present the general idea of our new ANR project, which explores the relationship between the two main forms of audio-visual association in language processing: the association between speech and articulatory gestures, and the association between speech and orthography. Given their distinct properties, these natural and artificial audio-visual associations have been treated as two cognitive processes explained by different theoretical models. The present project adopts a novel perspective that seeks to establish the missing link between them. The aim is to elaborate a unified framework explaining how these different inputs jointly contribute to forming coherent language representations. A new study that we conducted to address this issue will be presented.

11:10-11:50: Amie Fairs

Can we successfully carry out speech production experiments online?

In this age of COVID, more and more psychological experiments need to be carried out online so that data can still be collected. While much research has shown that typical language comprehension tasks, such as lexical decision, can be run online, to our knowledge there are no online language production studies. Anecdotally, many language production researchers are skeptical about whether online production data are reliable. In this experiment we sought to carry out a typical production study – picture naming – online, and to determine (a) whether we could replicate the well-known production effect of word frequency, (b) whether the response patterns were similar to those of a lab-based experiment, and (c) whether online-related parameters, such as internet speed, would affect response times or error rates. Preliminary data analysis suggests that we can replicate the word frequency effect, although the response distributions and error rates differ from lab-based experiments. While this analysis is preliminary, it suggests that online production studies are valuable and yield effects of similar size to those found in the lab. In addition, in the course of running this experiment we have learnt a lot of practical information useful for online production studies, which I will discuss.
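
As a rough illustration of the kind of check described above (does word frequency predict picture-naming latencies?), here is a minimal sketch on simulated data; the variable names and the simple least-squares fit are assumptions for illustration, not the authors' actual analysis, which would typically rely on more elaborate (e.g., mixed-effects) models.

```python
# Minimal sketch: estimating a word frequency effect on picture-naming RTs.
# Data are simulated for illustration; the real analysis would be more elaborate.

import numpy as np

rng = np.random.default_rng(0)

# Simulated items: higher-frequency words tend to be named faster.
log_freq = rng.uniform(1.0, 5.0, size=200)               # log word frequency
rt = 900 - 60 * log_freq + rng.normal(0, 80, size=200)   # naming latency (ms)

# Simple least-squares fit: a negative slope indicates the frequency effect.
slope, intercept = np.polyfit(log_freq, rt, deg=1)
print(f"Estimated frequency effect: {slope:.1f} ms per log-frequency unit")

# A lab/online comparison could contrast such slopes, as well as RT
# distributions and error rates, between the two testing settings.
```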