Exploration and comparison of mobile high-density electroencephalographic recordings during interactive communication in virtual reality and real-world settings

Researchers involved: Alexander Mehler, Alexander Henlein and Andy Lücking (GeMDiS, Goethe University Frankfurt), Petra Schumacher (The Gesture-to-Sign Trajectory: Phonological Parameters in Production and Real-Time Comprehension, University of Cologne), and Ingmar Brilmayer (external partner, University of Cologne)

The present external collaboration aims to bring together high-density, mobile electroencephalography (EEG) and virtual reality (VR) in an interactive setting. We will design an experiment that enables us to investigate reference tracking and its multimodal instantiations in an interactive communicative situation. In addition, we pursue a technical and a methodological goal. First, as a technical goal, we want to test the applicability of high-density, mobile EEG recordings in combination with VR in an interactive linguistic experiment. This includes basic challenges such as fitting the VR glasses and EEG caps together, but also possible caveats during data preprocessing. Second, as a methodological goal, we will design an experiment that enables us to systematically evaluate and compare EEG responses to linguistic stimuli in multimodal VR and real-world settings. Both goals can be understood as pilot or foundational work for further interactive, multimodal experiments within the ViCom priority program that aim to use VR, real-world settings, or a combination of the two together with high-density EEG recordings.

A comparative study of VR and real-world settings

Our aim is to develop an experimental design that can be conducted both in virtual reality and in a real-world setting. This study has two aims: First, we want to provide a comparative study of these two experimental paradigms (VR vs. real-world) in order to isolate possible differences in the electrophysiological response to (linguistic) stimuli. While there have recently been a few linguistic studies using virtual reality (Tromp et al., 2018; Zappa et al., 2019), to the best of our knowledge no study so far compares real-world and virtual-reality setups. Second, we are interested in the effect of joint attention in referent selection tasks on the EEG response of individual participants during interaction in a virtual or real-world environment. In particular, we want to focus on differences in the effect of co-speech gestures (index gestures) and gaze following in establishing joint attention to a real-world or VR referent object. A promising setup is, for instance, a Director Task (Keysar et al., 1998) or a study in which two (or more) participants refurnish a room together. Although working out the details of this design will be part of the short-term collaboration, one idea is to work with deferred reference (e.g., pointing to a book while uttering “My favorite author”) or ambiguous reference (e.g., “Oh, look at that candle” while two candles are visible) and to investigate how these referential difficulties are resolved using language, gaze, or indexing gestures in VR and the real world. Possible differences might lie, for instance, in the weight that participants assign to gaze, indexing, and language in VR versus the real world. For example, participants might perceive the gaze or index gesture of an avatar as less reliable than that of a real, physically present interlocutor, and might thus rely more heavily on language than in a real-world setting.

VR and EEG

Virtual reality and EEG have been successfully combined before in language comprehension research (Tromp et al., 2018; Zappa et al., 2019; cf. Peeters, 2019). Here, we will test the combination of a 128-channel, fully mobile EEG system (CGX-mobile 128, Cognionics, San Diego, California, USA) and a Meta Quest Pro VR headset. Since both systems are worn on the head, we want to determine how best to wear them together with as little influence on the EEG recordings as possible. Because EEG is highly sensitive to movement and to external electrical activity, wearing VR glasses on top of an EEG cap potentially introduces movement-related artifacts (i.e., electrode movements) as well as artifacts caused by electrical interference from the VR glasses. Drawing on the Cologne group's expertise with EEG recordings in real-world environments, we will test several setups in order to identify the most common sources of artifacts in combined VR/EEG recordings, and work out best practices for avoiding them or dealing with them during data preprocessing and analysis.
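To make the preprocessing challenge concrete, the sketch below illustrates one simple way such movement-related artifacts could be flagged automatically: windows whose peak-to-peak amplitude is an outlier relative to the rest of the recording are marked for rejection. This is not part of the planned pipeline, and the function name, window length, and z-score threshold are illustrative assumptions; a real analysis would more likely rely on a dedicated EEG toolbox such as MNE-Python, with simulated rather than recorded data used here only to keep the example self-contained.

```python
import numpy as np

def mark_movement_artifacts(data, sfreq, win_sec=0.5, z_thresh=3.5):
    """Flag windows whose peak-to-peak amplitude is an outlier across the
    recording -- a crude proxy for electrode-movement artifacts.

    data: (n_channels, n_samples) array of EEG in volts.
    Returns a boolean array with one entry per window; True = artifact.
    """
    win = int(win_sec * sfreq)
    n_win = data.shape[1] // win
    # peak-to-peak amplitude per channel in each window, then the worst channel
    ptp = np.array([
        (data[:, i * win:(i + 1) * win].max(axis=1)
         - data[:, i * win:(i + 1) * win].min(axis=1)).max()
        for i in range(n_win)
    ])
    # z-score the per-window amplitudes and threshold
    z = (ptp - ptp.mean()) / ptp.std()
    return z > z_thresh

# Synthetic example: 8 channels, 20 s at 500 Hz, with one injected burst
# mimicking an electrode shift caused by headset movement.
rng = np.random.default_rng(0)
sfreq = 500
eeg = rng.normal(0, 10e-6, size=(8, 20 * sfreq))
eeg[:, 2000:2100] += 300e-6  # simulated movement artifact at 4.0-4.2 s
bad = mark_movement_artifacts(eeg, sfreq)
print(bad.sum(), "of", bad.size, "windows flagged")  # flags the burst window
```

In practice, thresholds of this kind would have to be calibrated separately for the VR and real-world conditions, since the headset itself may raise the baseline level of movement and interference.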