Modality-Specific Effects on Language Processing in Children with Developmental Language Disorder

Project Description

Language processing often requires a combination of auditory, visual, vocal, and manual information processing. That is, language can be perceived via the auditory or visual input modality and produced via the vocal or manual output modality, and successful communication usually involves different input-output modality combinations. Research on modality-specific effects has shown that the way input and output modalities are combined is essential for language processing, especially when it comes to modality switching. Specifically, switching between incompatible modality combinations (i.e., auditory-manual and visual-vocal) takes more time and is more error-prone than switching between compatible combinations (i.e., auditory-vocal and visual-manual). Auditory-vocal and visual-manual combinations are defined as compatible because the modality of the sensory input corresponds to the modality of the sensory feedback from the motor output. So far, however, knowledge about the role of modality compatibility in language processing is restricted to adults without language processing deficits.

The aim of the planned project is to investigate the role of modality compatibility in children as well as its role in the context of Developmental Language Disorder (DLD). Modality processing in children with DLD differs from that in typically developing children. On the one hand, children with DLD more frequently produce gestured utterances while speaking in order to compensate for their language deficits. On the other hand, they have greater difficulty integrating gestured information into spoken language. The role of modality compatibility in these differences is still unknown. In the planned project, we will systematically investigate modality switching in 108 preschool children with and without DLD. Children will be tested in four sessions comprising standardized language assessment tools as well as different computerized, game-based tasks. In the computerized tasks, children will be instructed to switch between compatible and incompatible modality combinations. Input and output types will vary systematically between sessions (e.g., visual input in the form of pictures versus gestures, or manual output in the form of keypresses versus gestures). We expect generally stronger modality-compatibility effects for typically developing children than for children with DLD. Moreover, we expect stronger effects for more language-specific input and output (e.g., words or gestures) than for less language-specific processing (e.g., pictures or sounds).

Altogether, the proposed work will shed light on one of the most frequent developmental disorders from a completely new, multimodal perspective. It may yield new insights into the aetiology of DLD, contribute to new therapeutic interventions, and provide important knowledge for the development of new multimodal linguistic theories.