CBRU Seminar series

In 2020-2023, the CBRU online seminar series hosted internationally acclaimed speakers from around the world.
Johan Mårtensson

Virtual Language Learning

Traditional word learning focuses on forming associations between new words and earlier semantic representations of the objects they denote. We piggyback on these associations to learn quickly and efficiently, but are we missing something? I will present the framework behind investigations under way at Lund University (together with our collaborators at PSU and PolyU) into vocabulary learning using Virtual Reality, along with ongoing experiments that examine whether we benefit from using our bodies as well as our minds when we try to learn new material.

More about the speaker

Johan's Research Portal profile, Lund University, Sweden

Carles Escera

Neural encoding of speech sounds in neonates and infants as revealed with frequency-following responses

Infants master their native language with remarkable ease, following a common developmental trajectory across different languages and cultures. There is broad consensus on critical behavioral attainments at given time points during development, such as cooing (1-4 months), babbling (6-10 months) and uttering the first words (12 months). Yet the neural underpinnings of these language attainments are poorly understood. The acquisition of spoken language requires a sophisticated neural machinery to disentangle the fine-grained spectro-temporal acoustic features that differentiate speech sounds. This neural machinery is partially functional in utero, from the 27th gestational week, and continues its natural maturation under genetic, biological, nutritional and environmental influences. From the very moment of birth, the baby is exposed to a much richer acoustic environment (the mother’s womb behaves as a low-pass filter), fostering rapid experience-dependent plastic changes in the neural encoding of complex sound features that, I will argue, support early language acquisition.

In my talk I will discuss the results of a series of studies carried out in my laboratory with the Frequency-Following Response (FFR), a non-invasive scalp-recorded auditory evoked potential that reflects compound phase-locked neural activity elicited by the spectrotemporal components of the acoustic signal along the entire auditory hierarchy. These studies have so far allowed us to establish standards for recording the neonatal FFR as part of hospital routine; to show that fundamental frequency (F0) encoding is adult-like at birth, whereas temporal fine-structure encoding matures strikingly by the age of one month and continues to develop up to the age of six months; and to show that fetal conditions challenging normal fetal growth, such as fetal growth restriction or fetal overweight, result in compromised neural encoding of the F0 at birth. Furthermore, our results show that prenatal exposure to environmental music, and to a mono- or bilingual acoustic environment, during pregnancy fosters the neural encoding of speech sounds (F0) at birth. Altogether, these results support the FFR as a powerful tool to investigate the neural underpinnings of early language acquisition.

More about the speaker

BrainLab, Institute of Neurosciences, University of Barcelona, Spain

Jaana Simola

Self-generated thought and attentional decoupling

In the past 20 years, mind-wandering and spontaneous thought have become prominent topics in cognitive psychology and neuroscience. Before that, cognitive neuroscience was dominated by a task-centric view of mental processes. Understanding the neural systems that support different patterns of thought has since become a prominent goal of cognitive neuroscience. In this talk, I will first outline research that has been influential in characterizing mind wandering and its dynamics. Second, I will present data from a study in which participants' electroencephalogram (EEG) recordings were combined with multidimensional experience sampling (MDES) during a task that investigated mind wandering under varying cognitive demand.

More about the speaker

Jaana's Research Portal profile, University of Helsinki, Finland

César Lima

Does Music Training Enhance Vocal Emotional Processing?

Over the past two decades, there has been widespread interest in the idea that music training enhances nonmusical abilities. Debates on transfer of learning remain contentious, however, and most work to date has focused on the effects of music training on linguistic processing and on domain-general cognitive abilities such as IQ. Much less is known about potential transfer from music to socioemotional skills, even though social and emotional processes are central to many musical activities. In this talk, I will present a series of studies examining the role of music training and musical abilities in emotion recognition in voices and faces. The data show that musically trained adults perform better than untrained ones at recognizing emotions in speech prosody (‘tone of voice’) and in purely nonverbal vocalizations, such as laughter and crying. This advantage is observed both when vocal expressions are intact and when sensory/acoustic information is limited (gating paradigm), but it does not extend to the visual modality, i.e., to the recognition of facial expressions. Importantly, converging correlational and longitudinal data raise doubts about the causal role of music training in explaining the musicians’ advantage in vocal emotional processing: (1) adults with ‘naturally’ good musical abilities show enhanced performance at recognizing vocal emotions regardless of their music training, indicating that training itself is not necessary for enhanced vocal emotional processing; and (2) in a longitudinal study with children, we found that music training improved auditory-perceptual and motor abilities, but not vocal and facial emotional processing. Altogether, these findings indicate that music training can be associated with enhanced emotion recognition in the auditory modality, but there is currently no evidence that such enhancements reflect experience-dependent plasticity. Instead, we document an important role for factors other than music training (e.g., predispositions) that should be considered when discussing associations between musical and nonmusical domains.

More about the speaker

Communication, Emotion & Brain (CEB), Iscte - University Institute of Lisbon, Portugal

Gabor Csifcsak

Towards more reliable and transparent research: Pre-registrations and Registered Reports

Low replication rates in psychology and neuroscience can undermine the reliability and credibility of our work as researchers, and of our scientific field as a whole. Several factors contribute to the problem, including issues with study design, data analysis, result interpretation, journal policies, and the pressure to publish in order to secure research funding. For research that is confirmatory in nature, some of these pitfalls can be avoided by specifying hypotheses, study details and analytical methods before data collection begins, by creating “pre-registrations” on public platforms. Going one step further, researchers can submit their detailed study plans to journals publishing “Registered Reports” for an initial round of peer review, which can lead to guaranteed publication, irrespective of the results, once the data are collected, analyzed and discussed as initially planned. In the talk, I will provide a general introduction to these two relatively new approaches to documenting and reporting research, and highlight issues from our own experience.

Elvira Brattico

Brain predictive coding processes are associated with the COMT gene Val158Met polymorphism

When listening to sounds, the ability of the human brain to predict them based on prior experience is crucial for their understanding and appreciation. This ability varies greatly between individuals according to the tangled interplay of neurophysiology, genetics and biology. Even though it is established that such predictive processes and their variation can be indexed by neural error responses recorded with electroencephalography (EEG) and magnetoencephalography (MEG), only a few studies have traced auditory predictive processes to genetic variants.

In a first MEG study, we examined mismatch responses (MMN) to deviant stimuli in healthy participants carrying different variants of the Val158Met single-nucleotide polymorphism (SNP) within the catechol-O-methyltransferase (COMT) gene, which is responsible for the majority of catecholamine degradation (especially dopamine) in the prefrontal cortex. Results showed a significant amplitude enhancement of prediction error responses originating from the inferior frontal gyrus and the superior and middle temporal cortices in heterozygous genotype carriers (Val/Met) compared with homozygous carriers (Val/Val and Met/Met).

In a second MEG study, we further revealed the role of the Val66Met SNP of the brain-derived neurotrophic factor (BDNF) gene, which regulates synaptogenesis and explains variance in serotonin levels, in the auditory-cortex neuroplasticity of musicians, as indexed by the MMN. A third MEG study extended these findings to oscillatory scale-free dynamics and inter-areal synchronization, indicating the effects of gene-determined catecholamine levels in regulating communication between neural networks, and hence cognition.

István Winkler

Comprehending speech: from syllables to narratives

In everyday situations, one typical use of speech (and language in general) is to convey narratives. Here, narrative denotes an unbroken block of text of at least a few (possibly many) sentences linked together by some common topic (e.g., when one recounts what they did yesterday, or gives a scientific talk). The common theme of this talk is how we process and represent that shared topic, the “context”, which holds together the words and sentences of a narrative.

In the first two experiments, speech was manipulated in a two-speaker cocktail-party situation so that either the narrative was intact, or the words or syllables were scrambled within the sentences of narratives ca. 5 minutes long (with Hungarian phonotactic rules and sentence prosody retained), thus creating either gobbledygook (syllable-scrambled speech) or word salads (word-scrambled speech). Listeners performed a detection task (pressing a response key to coughs) on the designated speech stream. The manipulated speech could appear in both speech streams, in the target stream only, or in the distractor stream only. EEG was analyzed for target- and distractor-related ERPs and for functional networks. The results are interpreted within the framework of linguistic predictability.

In the third study, participants listened to four different narratives of ca. 5 minutes’ duration each, performing a numeral-detection task and, in parallel, a memory task. EEG was analyzed to assess whether there are functional connections in the brain related to the specific narratives and whether these connections differ between narratives. The results are discussed in terms of language comprehension models.

More about the speaker

Sound and Speech Perception Research Group, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary.

Stefan Elmer

Advances in the Neurocognition of Word Learning

Learning the meaning of new words is an intriguing task that continues to attract interest from a broad spectrum of disciplines, including education, linguistics, psychology, and neuroscience. The reason for such widespread interest possibly lies in the multifaceted nature of the perceptual and cognitive functions involved. In fact, to rapidly acquire new words, learners need to engage phonetic discrimination skills, speech segmentation abilities, as well as verbal and associative memory functions. In my talk, I will present a series of EEG experiments and provide new insights into the neural machinery underlying speech segmentation (Experiment 1), associative word learning (Experiment 2), and prediction-based processes during associative word learning (Experiment 3). In particular, in Experiment 1 we examined whether, during speech segmentation, neural synchronization to pertinent speech units (syllables and words) operates similarly under statistical learning and prosodic bootstrapping conditions. In Experiment 2, we used a novel associative word learning paradigm to disentangle learning-specific and non-specific ERP manifestations along the anterior-posterior topographical axis. Furthermore, we examined the performance-dependent neural underpinnings of associative word learning, and addressed relationships between word learning performance, verbal memory capacity, auditory attention functions, phonetic discrimination skills and musicality. Finally, in Experiment 3 we compared neural indices of word pre-activation during associative word learning between a learning condition with maximal prediction likelihood and a non-learning control condition with high prediction error.

Daniela Sammler

Playing music in the scanner: Neural bases of piano performance

Over the past 30 years, research on the neurocognition of music has yielded many insights into how the brain perceives music. Yet our knowledge about the neural mechanisms of music production remains sparse. Particularly little is known about how we make music together. The present talk will focus on audio-motor mechanisms that allow duetting pianists to flexibly anticipate and adapt to their partner’s performance, tested with 3T fMRI. The data suggest (A) that pianists activate motor knowledge of the other’s actions in cortical and cerebellar motor regions, which (B) fine-tunes the detection of temporal discrepancies between duo partners in the cerebellum, and (C) correlates with their readiness (or not) to adapt to the partner’s actions. Altogether, these results highlight the relevance of cortico-cerebellar loops for audio-motor integration during joint action, extending the framework of ‘internal models’ from solo to ensemble performance.

More about the speaker

Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics Frankfurt/Main, Germany.

Massimo Lumaca

Precision-weighting of auditory prediction errors: Empirical support from two musical studies

Perceiving and keeping track of auditory changes in the external environment is a core process in crucial daily-life abilities, from noticing threats in the world to the perception and appreciation of music. According to the theory of predictive coding (PC), this process entails a continuous optimisation of an internal model of the sound environment through unpredicted events, instantiated in the brain by “neural prediction error signals”. An accurate internal model can generate more precise predictions of the upcoming sensory input, promoting a more rapid reaction to associated environmental changes. A core assumption of PC is that the brain weights neural prediction errors by their reliability: in noisy (or perceptually complex) environments, prediction errors are less reliable and will be down-weighted, thus contributing less to an internal model’s revision. This process is thought to be implemented by changes in the synaptic gain (or sensitivity) of superficial pyramidal cells of the auditory cortex. In this talk, I will present two works, one EEG experiment with rhythmic patterns (Lumaca et al., 2019) and one fMRI experiment with melodic patterns (Lumaca, Dietz, et al., 2020), that empirically support the precision-weighting hypothesis in the domain of music.
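For readers less familiar with the formalism, the precision-weighting idea described above can be sketched in one line (illustrative notation only, not the specific models used in these studies):

\[
\Delta\mu \;\propto\; \pi\,(x - \hat{x}), \qquad \pi = \frac{1}{\sigma^{2}},
\]

where \(\hat{x}\) is the internal model's prediction, \(x\) the observed sensory input, \(\mu\) the model's estimate, and \(\pi\) the precision (inverse variance) of the sensory signal: in noisy or perceptually complex environments \(\pi\) is low, so the prediction error \((x - \hat{x})\) contributes less to revising the internal model.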

Pablo Ripollés

The role of dopaminergic and reward-related circuits in language learning and music memory

In a series of behavioral and neuroimaging experiments, we showed that humans, even in the absence of explicit feedback, can experience pleasure from language learning itself. Specifically, learning new words from context (i.e., while reading) triggered an intrinsic reward signal that modulated long-term memory for the newly learned words via activation of the dopaminergic midbrain. Using a pharmacological intervention, we recently showed that dopamine indeed plays a causal role in this process. We now extend these results to the music domain, showing that long-term memory for newly learned songs depends on the rewarding value of the song itself and on dopaminergic transmission.

More about the speaker

The Ripolles Lab in the Music and Audio Research Laboratory, Department of Psychology, New York University, USA.