CBRU Seminar series

Advances in the Neurocognition of Word Learning

Learning the meaning of new words is an intriguing task that continues to attract interest from a broad spectrum of disciplines, including education, linguistics, psychology, and neuroscience. This widespread interest is likely grounded in the multifaceted nature of the perceptual and cognitive functions involved: to rapidly acquire new words, learners need to engage phonetic discrimination skills, speech segmentation abilities, and verbal and associative memory functions. In my talk, I will present a series of EEG experiments that provide new insights into the neural machinery underlying speech segmentation (Experiment 1), associative word learning (Experiment 2), and prediction-based processes during associative word learning (Experiment 3). In Experiment 1, we examined whether, during speech segmentation, neural synchronization to the relevant speech units (syllables and words) operates similarly under statistical learning and prosodic bootstrapping conditions. In Experiment 2, we used a novel associative word learning paradigm to disentangle learning-specific and unspecific ERP manifestations along the anterior-posterior topographical axis. We also examined the performance-dependent neural underpinnings of associative word learning and addressed the relationships between word learning performance, verbal memory capacity, auditory attention functions, phonetic discrimination skills, and musicality. Finally, in Experiment 3 we compared neural indices of word pre-activation during associative word learning between a learning condition with maximal prediction likelihood and a non-learning control condition with high prediction error.
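
For readers unfamiliar with statistical learning accounts of speech segmentation, the short Python sketch below is a purely didactic illustration (it is not the analysis used in Experiment 1, and the syllable stream and "words" are invented): it shows how dips in the transitional probabilities between adjacent syllables can mark word boundaries in a continuous speech stream.

# Toy illustration of the statistical-learning cue to word boundaries (not the
# analysis used in Experiment 1): listeners are assumed to track transitional
# probabilities (TPs) between adjacent syllables; TPs are high within words and
# drop at word boundaries. The "words" below are invented for illustration.
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Return P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Continuous stream built by randomly concatenating three trisyllabic "words".
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

# Within-word transitions (e.g. tu->pi) have TP = 1.0, whereas transitions that
# span a word boundary (e.g. ro->go) hover around 1/3, signalling the boundary.
for pair, tp in sorted(transitional_probabilities(stream).items(), key=lambda kv: -kv[1]):
    print(pair, round(tp, 2))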

Playing music in the scanner: Neural bases of piano performance

Over the past 30 years, research on the neurocognition of music has yielded considerable insight into how the brain perceives music. Yet our knowledge of the neural mechanisms of music production remains sparse, and particularly little is known about how we make music together. The present talk will focus on the audio-motor mechanisms that allow duetting pianists to flexibly anticipate and adapt to their partner’s performance, tested with 3T fMRI. The data suggest (A) that pianists activate motor knowledge of the other’s actions in cortical and cerebellar motor regions, which (B) fine-tunes the detection of temporal discrepancies between duo partners in the cerebellum, and (C) correlates with their readiness to adapt to the partner’s actions. Altogether, these results highlight the relevance of cortico-cerebellar loops for audio-motor integration during joint action, extending the framework of ‘internal models’ from solo to ensemble performance.
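
As a minimal, hypothetical sketch of the ‘internal model’ idea applied to duet timing (this is not the model tested in the fMRI study), the code below simulates a pianist who predicts the partner’s next note onset and corrects a fraction of the observed asynchrony, i.e. a simple linear phase-correction scheme; all parameter names and values (inter-onset interval, correction gain alpha, timing jitter) are invented for illustration.

# Hypothetical sketch of an internal forward model for duet timing (not the
# model tested in the fMRI study): the simulated pianist predicts the partner's
# next note onset and corrects a fraction (alpha) of the observed asynchrony.
import random

def simulate_duet(n_notes=16, ioi=0.5, alpha=0.5, jitter_sd=0.02):
    """Return the note-by-note asynchronies (in seconds) between the two pianists."""
    my_onset, partner_onset = 0.0, 0.0
    asynchronies = []
    for _ in range(n_notes):
        # Forward model: predicted partner onset, based on the nominal tempo.
        predicted_partner = partner_onset + ioi
        # The partner actually plays with some timing variability.
        partner_onset = predicted_partner + random.gauss(0.0, jitter_sd)
        # Temporal discrepancy between my planned onset and the partner's onset.
        asynchrony = (my_onset + ioi) - partner_onset
        # Adaptation: shift my next onset by a fraction of that discrepancy.
        my_onset = my_onset + ioi - alpha * asynchrony
        asynchronies.append(asynchrony)
    return asynchronies

# With alpha > 0 the asynchronies stay small; with alpha = 0 (no adaptation)
# they drift as the partner's timing noise accumulates.
print([round(a, 3) for a in simulate_duet(alpha=0.5)])
print([round(a, 3) for a in simulate_duet(alpha=0.0)])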

More about the speaker

Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt/Main, Germany.

Precision-weighting of auditory prediction errors: Empirical support from two musical studies

Perceiving and keeping track of auditory changes in the external environment is a core process in crucial daily-life abilities, from noticing threats in the world to the perception and appreciation of music. According to the theory of predictive coding (PC), this process entails a continuous optimisation of an internal model of the sound environment through unpredicted events, instantiated in the brain by “neural prediction error signals”. An accurate internal model can generate more precise predictions of the upcoming sensory input, promoting a more rapid reaction to associated environmental changes. A core assumption of PC is that the brain weights neural prediction errors by their reliability: in noisy (or perceptually complex) environments, prediction errors are less reliable and are down-weighted, thus contributing less to the revision of the internal model. This process is thought to be implemented by changes in the synaptic gain (or sensitivity) of superficial pyramidal cells in the auditory cortex. In this talk, I will present two studies, an EEG experiment with rhythmic patterns (Lumaca et al., 2019) and an fMRI experiment with melodic patterns (Lumaca, Dietz, et al., 2020), that empirically support the precision-weighting hypothesis in the domain of music.
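
As a purely didactic illustration of precision-weighting (this is not the computational model fitted in the two cited studies), the sketch below implements a single Gaussian belief update in which the prediction error is scaled by the relative precision of the sensory input; all numerical values are invented.

# Toy sketch of precision-weighting (not the model used in the cited studies):
# the internal model's estimate is updated by the prediction error, scaled by
# the precision (inverse variance) of the sensory input relative to the total.

def precision_weighted_update(prior_mean, prior_var, observation, obs_var):
    """One Bayesian update of a Gaussian belief about a sound feature."""
    prediction_error = observation - prior_mean
    # Learning rate = sensory precision relative to total precision
    # (equivalently, prior variance relative to total variance).
    weight = prior_var / (prior_var + obs_var)
    posterior_mean = prior_mean + weight * prediction_error
    posterior_var = (prior_var * obs_var) / (prior_var + obs_var)
    return posterior_mean, posterior_var

# Same prediction error (a 460 Hz deviant against an expected 440 Hz), but the
# update is smaller when the input is noisy (low precision), mirroring the
# down-weighting of unreliable prediction errors.
print(precision_weighted_update(440.0, 25.0, 460.0, obs_var=5.0))    # clean input
print(precision_weighted_update(440.0, 25.0, 460.0, obs_var=100.0))  # noisy input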

The role of dopaminergic and reward-related circuits in language learning and music memory

In a series of behavioral and neuroimaging experiments, we showed that humans, even in the absence of explicit feedback, can experience pleasure from language learning itself. Specifically, learning new words from context (i.e., while reading) triggered an intrinsic reward signal that modulated long-term memory for the newly learned words via activation of the dopaminergic midbrain. Using a pharmacological intervention, we recently showed that dopamine indeed plays a causal role in this process. We now extend these results to the music domain, showing that long-term memory for newly learned songs depends on the rewarding value of the song itself and on dopaminergic transmission.

More about the speaker

The Ripolles Lab in the Music and Audio Research Laboratory, Department of Psychology, New York University, USA.