CBRU Seminar series

Comprehending speech: from syllables to narratives

In everyday situations, one typical use of speech (and of language in general) is to convey narratives. Here, narrative denotes an unbroken block of text, at least a few (possibly many) sentences long, linked together by a common topic (e.g., when one recounts what one did yesterday, or gives a scientific talk). The common theme of the talk is how we process and represent this common topic, the “context”, which holds together the words and sentences of the narrative.

In the first two experiments, speech was manipulated in a two-speaker cocktail-party situation so that either the narrative was intact, or the words or syllables were scrambled within the sentences of narratives ca. 5 minutes long (with Hungarian phonotactic rules and sentence prosody retained), thus creating either “gobbledegook” (syllable-scrambled speech) or “word salad” (word-scrambled speech). Listeners performed a detection task (pressing a response key to coughs) on the designated speech stream. The manipulated speech could appear in both speech streams, in the target stream only, or in the distractor stream only. EEG was analyzed for target- and distractor-related ERPs and for functional networks. The results are interpreted within the framework of linguistic predictability.
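
For illustration, here is a minimal text-level sketch of the word-scrambling manipulation in Python. This is hypothetical: the actual stimuli were spoken Hungarian narratives, resynthesized with phonotactics and sentence prosody retained, not written text.

```python
import random

def word_salad(sentence: str, rng: random.Random) -> str:
    """Shuffle the words within one sentence; scrambling never crosses
    sentence boundaries, as in the word-scrambled condition."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

# Toy narrative: each sentence is scrambled independently.
rng = random.Random(42)
narrative = ["the cat sat on the mat", "it purred all afternoon"]
print([word_salad(s, rng) for s in narrative])
```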

In the third study, participants listened to four different narratives, each ca. 5 minutes long, while performing a numeral-detection task and, in parallel, a memory task. EEG was analyzed to assess whether there are functional connections in the brain related to the specific narratives and whether these connections differ between narratives. The results are discussed in terms of language-comprehension models.
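
As a rough illustration of what an EEG functional-connectivity analysis can involve, here is a minimal sketch of one common metric, the phase-locking value. The talk does not specify which connectivity measure was used, so this is an assumption made purely for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x: np.ndarray, y: np.ndarray) -> float:
    """Phase-locking value between two band-limited signals:
    1 = perfectly consistent phase relation, 0 = none."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Two toy 'channels': a 10 Hz rhythm and a phase-shifted copy plus noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + 0.5) \
    + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(plv(x, y))  # close to 1 despite the constant phase lag
```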

More about the speaker

Sound and Speech Perception Research Group, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary.

Advances in the Neurocognition of Word Learning

Learning the meaning of new words is an intriguing task that continues to attract interest from a broad spectrum of disciplines, including education, linguistics, psychology, and neuroscience. The reason for such widespread interest possibly lies in the multifaceted nature of the perceptual and cognitive functions involved: to rapidly acquire new words, learners need to engage phonetic discrimination skills, speech segmentation abilities, and verbal and associative memory functions. In my talk, I will present a series of EEG experiments that provide new insights into the neural machinery underlying speech segmentation (Experiment 1), associative word learning (Experiment 2), and prediction-based processes during associative word learning (Experiment 3). In particular, in Experiment 1 we examined whether, during speech segmentation, neural synchronization to the pertinent speech units (syllables and words) operates similarly under statistical-learning and prosodic-bootstrapping conditions. In Experiment 2, we used a novel associative word-learning paradigm to disentangle learning-specific and learning-unspecific ERP manifestations along the anterior-posterior topographical axis. Furthermore, we examined the performance-dependent neural underpinnings of associative word learning and addressed the relationships between word-learning performance, verbal memory capacity, auditory attention functions, phonetic discrimination skills, and musicality. Finally, in Experiment 3 we compared neural indices of word pre-activation during associative word learning between a learning condition with maximal prediction likelihood and a non-learning control condition with high prediction error.
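
As background for the statistical-learning condition of Experiment 1, here is a minimal sketch of the cue such segmentation studies exploit: transitional probabilities between adjacent syllables are high within words and drop at word boundaries. This is an illustrative toy example (Saffran-style nonce words), not the experiment’s analysis pipeline.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for adjacent syllable pairs; high within nonce
    words, lower across word boundaries."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Continuous stream of randomly ordered trisyllabic nonce 'words'.
rng = random.Random(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [syl for _ in range(200) for syl in rng.choice(words)]
tp = transitional_probabilities(stream)
print(tp[("tu", "pi")])  # within-word pair: 1.0
print(tp[("ro", "go")])  # boundary pair: ~1/3
```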

Playing music in the scanner: Neural bases of piano performance

Over the past 30 years, research on the neurocognition of music has yielded considerable insight into how the brain perceives music. Yet our knowledge about the neural mechanisms of music production remains sparse, and particularly little is known about how we make music together. The present talk will focus on the audio-motor mechanisms that allow duetting pianists to flexibly anticipate and adapt to their partner’s performance, tested with 3T fMRI. The data suggest (A) that pianists activate motor knowledge of the other’s actions in cortical and cerebellar motor regions, which (B) fine-tunes the detection of temporal discrepancies between duo partners in the cerebellum and (C) correlates with their readiness (or not) to adapt to the partner’s actions. Altogether, these results highlight the relevance of cortico-cerebellar loops for audio-motor integration during joint action, extending the framework of ‘internal models’ from solo to ensemble performance.

More about the speaker

Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.

Precision-weighting of auditory prediction errors: Empirical support from two musical studies

Perceiving and keeping track of auditory changes in the external environment is a core process underlying crucial daily-life abilities, from noticing threats to perceiving and appreciating music. According to the theory of predictive coding (PC), this process entails a continuous optimisation of an internal model of the sound environment through unpredicted events, instantiated in the brain by “neural prediction error” signals. An accurate internal model can generate more precise predictions of the upcoming sensory input, promoting a more rapid reaction to associated environmental changes. A core assumption of PC is that the brain weights neural prediction errors by their reliability: in noisy (or perceptually complex) environments, prediction errors are less reliable and are down-weighted, thus contributing less to the revision of the internal model. This process is thought to be implemented by changes in the synaptic gain (or sensitivity) of superficial pyramidal cells in the auditory cortex. In this talk, I will present two studies, an EEG experiment with rhythmic patterns (Lumaca et al., 2019) and an fMRI experiment with melodic patterns (Lumaca, Dietz, et al., 2020), that empirically support the precision-weighting hypothesis in the domain of music.
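
To make the weighting scheme concrete, here is a minimal sketch of precision-weighted belief updating under Gaussian assumptions. This is a generic predictive-coding toy model, not the analysis used in the cited studies; all parameter names are illustrative.

```python
def precision_weighted_update(mu: float, x: float,
                              sigma_prior: float, sigma_noise: float) -> float:
    """Update the internal estimate mu toward input x, with the prediction
    error (x - mu) weighted by its reliability (precision = 1/variance).
    When the input is noisy (large sigma_noise), the error is down-weighted
    and the internal model is revised less."""
    precision_noise = 1.0 / sigma_noise**2
    precision_prior = 1.0 / sigma_prior**2
    gain = precision_noise / (precision_noise + precision_prior)  # Kalman-style gain
    return mu + gain * (x - mu)

mu = 440.0  # predicted pitch (Hz)
x = 466.0   # heard pitch: a prediction error of +26 Hz
print(precision_weighted_update(mu, x, sigma_prior=5.0, sigma_noise=2.0))   # clean input: large revision
print(precision_weighted_update(mu, x, sigma_prior=5.0, sigma_noise=50.0))  # noisy input: tiny revision
```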

The role of dopaminergic and reward-related circuits in language learning and music memory

In a series of behavioral and neuroimaging experiments, we showed that humans, even in the absence of explicit feedback, can experience pleasure from language learning itself. Specifically, learning new words from context (i.e., while reading) triggered an intrinsic reward signal that modulated long-term memory for the newly learned words via activation of the dopaminergic midbrain. Using a pharmacological intervention, we recently showed that dopamine indeed plays a causal role in this process. We now extend these results to the music domain, showing that long-term memory for newly learned songs depends on the rewarding value of the song itself and on dopaminergic transmission.

More about the speaker

The Ripolles Lab in the Music and Audio Research Laboratory, Department of Psychology, New York University, USA.