Audiovisual Perception of Emotion and Speech in Hearing Individuals and Cochlear Implant Users

Project Description

The ability to communicate via auditory spoken language is taken as a benchmark for the success of cochlear implants (CIs), but this disregards the important role visual cues play in communication. The relevance of socio-emotional signals, and their importance for quality of life with a CI (Luo, Kern, & Pulling, 2018; Schorr, Roth, & Fox, 2009), calls for research on the visual benefits to communication. Drawing on models of communication via the face and voice (Young, Frühholz, & Schweinberger, 2020), we consider that deafness can elicit crossmodal cortical plasticity, such that visual stimuli can activate areas of auditory cortex. Initial findings suggest that, even after adaptation to a CI, visual information contributes particularly strongly to the perception of speech and speaker gender. A better understanding of these phenomena at the functional and brain level is required to develop efficient interventions that improve communication and, ultimately, quality of life. Here we focus on postlingually deaf adult CI users and propose four studies (S1-S4).

In S1, we conduct a systematic review to establish the current state of knowledge on the role of visual information (faces or manual gestures) in the recognition of emotion and speech from voices, both in hearing adults and in CI users. In S2, we use a behavioral experiment with dynamic, time-synchronized audiovisual stimuli to test whether CI users benefit more than hearing adults from congruent facial expressions when recognizing vocal emotions, and whether this holds even when overall auditory-only performance levels are controlled for. Importantly, we use voice morphing technology, rather than noise, to equate performance levels. In S3, we study brain correlates of audiovisual integration (AVI) in event-related potentials (ERPs) to audiovisual (AV) emotional stimuli. We focus on the ability of congruent AV stimuli to speed up neural processing and investigate relationships between individual neural markers of AVI and behavioral emotion recognition performance. In S4, we study the degree to which perceptual training with caricatured vocal emotions can improve auditory and audiovisual emotion recognition in adult CI users. In all studies, we assess relationships between emotion recognition abilities and reported quality of life. The project builds on successful previous DFG-funded research on Voice Perception (Schw 511/10-1, -2) and on Audiovisual Integration in the Identification of Speaker and Speech (Schw 511/6-1, -2), and on our long-standing collaboration with the Cochlear Implant Rehabilitation Centre in Thuringia.
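
The morph-level logic referenced in S2 (attenuated emotions used to equate auditory-only performance) and S4 (caricatured emotions used in training) can be illustrated with a minimal sketch. This is only a conceptual illustration under simplifying assumptions: the feature values and function names are hypothetical, the feature tracks are assumed to be already time-aligned, and real voice morphing operates on full spectral and temporal parameter sets rather than a single acoustic track.

```python
import numpy as np

def morph_features(neutral, emotional, level):
    """Linearly interpolate (0 < level < 1) or extrapolate (level > 1,
    i.e. a caricature) between a neutral and an emotional reference.

    'neutral' and 'emotional' are hypothetical per-frame acoustic feature
    arrays (e.g. an F0 track in Hz) assumed to be time-aligned already.
    """
    neutral = np.asarray(neutral, dtype=float)
    emotional = np.asarray(emotional, dtype=float)
    return neutral + level * (emotional - neutral)

# Hypothetical F0 tracks (Hz) for a neutral and a happy rendition.
neutral_f0 = np.array([210.0, 212.0, 208.0])
happy_f0 = np.array([260.0, 275.0, 255.0])

print(morph_features(neutral_f0, happy_f0, 0.4))  # 40% morph: attenuated emotion
print(morph_features(neutral_f0, happy_f0, 1.3))  # 130% morph: caricatured emotion
```

In this sketch, morph levels below 1 yield graded, less intense emotional expressions (a way of lowering auditory-only performance without adding noise), whereas levels above 1 exaggerate the difference from the neutral reference, which is the sense in which S4 uses "caricatured" vocal emotions.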

We hope this work will contribute to models of the cognitive and brain mechanisms underlying multimodal perception in human communication. We propose that a better understanding of how visual facial signals support CI users will provide information that can be used to optimize both linguistic and socio-emotional communication, and ultimately quality of life.