Project Participants


Prof. Dr. Stefan R. Schweinberger
(Principal Investigator)
University of Jena
stefan.schweinberger@uni-jena.de
Stefan Schweinberger is interested in cognition as well as cognitive and social neuroscience. His research areas include person perception and human interaction, with a particular focus on communication via the face and the voice. He received his PhD from the University of Konstanz in 1991 and has held professorships at the Universities of Glasgow (2000-2005) and Jena (2005-present). He and his team use research methods that link brain and cognition/emotion, such as event-related brain potentials (ERPs), eye tracking, and investigations of patients with focal brain lesions. His research also covers individual differences and constraints on person perception and communication, whether of sensory (e.g., hearing loss) or central origin (e.g., autism, prosopagnosia).
Selected publications
- Frühholz, S., & Schweinberger, S.R. (2021). Nonverbal auditory communication – Evidence for Integrated Neural Systems for Voice Signal Production and Perception. Progress in Neurobiology, 199, 101948. doi: 10.1016/j.pneurobio.2020.101948.
- Schweinberger, S.R., & von Eiff, C.I. (2022). Enhancing Socio-emotional Communication and Quality of Life in Young Cochlear Implant Recipients: Perspectives from Parameter-specific Morphing and Caricaturing. Frontiers in Neuroscience, 16:956917. doi: 10.3389/fnins.2022.956917.
- Skuk, V.G., Kirchen, L., Oberhoffner, T., Guntinas-Lichius, O., Dobel, C., & Schweinberger, S.R. (2020). Parameter-specific Morphing Reveals Contributions of Timbre and F0 Cues to the Perception of Voice Gender and Age in Cochlear Implant Users. Journal of Speech, Language, and Hearing Research, 63(9), 3155-3175. doi: 10.1044/2020_JSLHR-20-00026


Prof. Dr. Christian Dobel
(Principal Investigator)
Jena University Hospital
christian.dobel@med.uni-jena.de
(Photo, a short introduction, and selected publications are to be updated.)


Prof. Dr. Volker Gast
(Principal Investigator)
University of Jena
volker.gast@uni-jena.de
(Photo, a short introduction, and selected publications are to be updated.)


Celina Isabelle von Eiff
(PhD Candidate)
University of Jena
celina.isabelle.von.eiff@uni-jena.de
Celina I. von Eiff is interested in cognitive and social neuroscience, particularly in voice and face perception in individuals with hearing prostheses (i.e., cochlear implants). She uses behavioral measures and EEG to investigate how the human brain integrates sensory information after cochlear implantation. She currently works at the Department of Psychology of the Friedrich Schiller University Jena.
Selected publications
- von Eiff, C. I., Frühholz, S., Korth, D., Guntinas-Lichius, O., & Schweinberger, S. R. (2022). Crossmodal Benefits to Vocal Emotion Perception in Cochlear Implant Users. iScience.
- Schweinberger, S. R., & von Eiff, C. I. (2022). Enhancing socio-emotional communication and quality of life in young cochlear implant recipients: Perspectives from parameter-specific morphing and caricaturing. Frontiers in Neuroscience, 16:956917. doi: 10.3389/fnins.2022.956917.
- von Eiff, C. I., Skuk, V. G., Zäske, R., Nussbaum, C., Frühholz, S., Feuer, U., Guntinas-Lichius, O., & Schweinberger, S. R. (2022). Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users. Ear and Hearing, 43(4), 1178-1188.
- Nussbaum, C., von Eiff, C. I., Skuk, V. G., & Schweinberger, S. R. (2022). Vocal emotion adaptation aftereffects within and across speaker genders: Roles of timbre and fundamental frequency. Cognition, 219.
- Schweinberger, S. R., von Eiff, C. I., Kirchen, L., Oberhoffner, T., Guntinas-Lichius, O., Dobel, C., Nussbaum, C., Zäske, R., & Skuk, V. G. (2020). The Role of Stimulus Type and Social Signal for Voice Perception in Cochlear Implant Users: Response to the Letter by Meister et al. Journal of Speech, Language, and Hearing Research.
Project Description
The ability to communicate via auditory spoken language is taken as a benchmark for the success of cochlear implants (CIs), but this disregards the important role that visual cues play in communication. The importance of socio-emotional signals for quality of life with a CI (Luo, Kern, & Pulling, 2018; Schorr, Roth, & Fox, 2009) calls for research on visual benefits to communication. Drawing on models of communication via the face and voice (Young, Frühholz, & Schweinberger, 2020), we consider that deafness can elicit crossmodal cortical plasticity, such that visual stimuli can activate auditory cortex areas. Initial findings suggest that, even after adaptation to a CI, visual information contributes particularly strongly to the perception of speech and speaker gender. A better understanding of these phenomena at the functional and brain level is required to promote efficient interventions that improve communication and, ultimately, quality of life. Here we focus on postlingually deaf adult CI users and propose four studies (S1-S4).
In S1, we conduct a systematic review to determine the current state of knowledge regarding the role of visual information (face or manual gesture) for emotion recognition and speech perception from voices, in hearing adults and CI users. In S2, a behavioral experiment with dynamic, time-synchronized audiovisual stimuli explores whether CI users benefit more from congruent facial expressions when recognizing vocal emotions than hearing adults do, and whether this holds even when controlling for overall auditory-only performance levels. Importantly, we use voice morphing technology, rather than noise, to equate performance levels. In S3, we study brain correlates of audiovisual integration (AVI) in event-related potentials (ERPs) to audiovisual (AV) emotional stimuli. We focus on the ability of congruent AV stimuli to speed up neural processing, and investigate relationships between individual neural markers of AVI and behavioral performance in emotion recognition. In S4, we study the degree to which perceptual training with caricatured vocal emotions can improve auditory and audiovisual emotion recognition in adult CI users. We assess relationships between emotion recognition abilities and reported quality of life in all studies. The project builds on successful previous DFG-funded research on voice perception (Schw 511/10-1, -2) and on audiovisual integration in the identification of speaker and speech (Schw 511/6-1, -2), and on our long-standing collaboration with the Cochlear Implant Rehabilitation Centre in Thuringia.
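The morphing logic behind S2 and S4 can be made concrete with a brief sketch. The actual stimuli are produced with dedicated voice-morphing software; the hypothetical NumPy functions below only illustrate the two core ideas, assuming frame-aligned F0 contours: linear blending between a neutral and an emotional utterance (levels above 1.0 extrapolate the expression, as in the S4 caricatures), and an adaptive staircase over morph levels to equate auditory-only performance across listeners.

```python
import numpy as np

def morph_f0(f0_neutral, f0_emotional, level):
    """Blend two frame-aligned F0 contours (e.g., in semitones).

    level = 0.0 reproduces the neutral contour, 1.0 the fully
    emotional one; intermediate levels weaken the expression, and
    levels > 1.0 extrapolate it, yielding an emotion caricature.
    """
    f0_neutral = np.asarray(f0_neutral, dtype=float)
    f0_emotional = np.asarray(f0_emotional, dtype=float)
    return (1.0 - level) * f0_neutral + level * f0_emotional

def next_morph_level(level, correct, step=0.1, lo=0.0, hi=1.0):
    """One trial of a simple 1-up/1-down staircase over morph levels.

    Errors raise the level (more emotional information), correct
    responses lower it, titrating each listener toward a fixed
    accuracy so that CI users and hearing controls can be compared
    at equated auditory-only performance. (Weighted or 2-down/1-up
    rules would target other accuracy levels.)
    """
    level += step if not correct else -step
    return float(np.clip(level, lo, hi))
```

In full parameter-specific morphing, F0, timbre, and timing are morphed independently rather than jointly, which is how the studies cited above isolate the contribution of each acoustic cue.

For S3, one concrete neural marker of AVI is a latency shift of early auditory ERP components. A minimal MNE-Python sketch, assuming already-preprocessed epochs and using hypothetical file and condition names, illustrates the comparison; channel and time window are likewise illustrative choices.

```python
import mne

# Placeholder file name for one participant's cleaned epochs.
epochs = mne.read_epochs("sub-01_cleaned-epo.fif")

# Condition labels are assumptions about the event coding.
ev_av = epochs["audiovisual/congruent"].average()
ev_a = epochs["auditory_only"].average()

# Peak latency of the auditory N1 at a fronto-central site; an
# earlier N1 for congruent AV than for auditory-only stimuli would
# indicate crossmodal speeding of neural processing.
_, lat_av = ev_av.copy().pick(["Cz"]).get_peak(tmin=0.08, tmax=0.16, mode="neg")
_, lat_a = ev_a.copy().pick(["Cz"]).get_peak(tmin=0.08, tmax=0.16, mode="neg")
print(f"N1 speed-up for congruent AV: {(lat_a - lat_av) * 1e3:.1f} ms")
```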
We hope this work will contribute to models of the cognitive and brain mechanisms underlying multimodal perception in human communication. We propose that a better understanding of the mechanisms by which visual facial signals support CI users will help optimize both linguistic and socio-emotional communication, and ultimately quality of life.