(Pittsburgh Hearing Research Center, University of Pittsburgh)
Neural systems underlying auditory categorization
Abstract: My program of research uses a systems neuroscience approach to study the computations, maturational constraints, and plasticity underlying the perception of behaviorally relevant auditory signals like speech. Speech signals are multidimensional, acoustically variable, and temporally ephemeral. A significant computational challenge in speech perception (and, more broadly, in audition) is categorization: mapping continuous, multidimensional, and variable acoustic signals onto discrete behavioral equivalence classes. Despite the enormity of this computational challenge, native speech perception is rapid and automatic. In contrast, learning novel speech categories is effortful. In this talk, I elucidate mechanisms underlying how novel speech categories are acquired and represented in the mature brain. I will demonstrate that (1) neural representations of novel speech categories can arise in the associative auditory cortex within a few hundred trials of sound-to-category training, (2) pre-attentive signal reconstruction in the early auditory system is subject to experience-dependent plasticity, and (3) the robustness of structural and functional connectivity within a sound-to-reward cortico-striatal stream relates to learning outcomes. Finally, I will discuss ongoing experiments that leverage neurobiology to design optimal behavioral training and targeted neuromodulation interventions.
About the speaker: Dr. Chandrasekaran is a Professor and Vice Chair of Research in the Department of Communication Sciences and Disorders at the University of Pittsburgh. He earned his Ph.D. in Integrative Neuroscience from Purdue University in 2008 and completed a postdoctoral fellowship at Northwestern University before joining the University of Texas at Austin in 2010. He is the recipient of the Regents' Outstanding Teaching Award in 2014, the Editor's Award for best research article in the Journal of Speech, Language, and Hearing Research, the Psychonomics Early Career Award in 2016, and the Society for the Neurobiology of Language Early Career Award in 2018. Dr. Chandrasekaran has served as the Editor-in-Chief of the Journal of Speech, Language, and Hearing Research (Speech). Over the last two decades, his lab has leveraged cutting-edge multimodal neuroimaging methods and computational modeling approaches to develop a sophisticated understanding of how sounds are represented and categorized in the human brain. His approach is highly collaborative and interdisciplinary, integrating across the fields of communication sciences and disorders, neuroscience, linguistics, psychology, engineering, and otolaryngology. His laboratory is currently supported by funding from the National Institutes of Health (NIH) and the National Science Foundation (NSF).