Exploring the Limits of Simultaneity: Encoding Caused Change-of-state Events with Classifier Constructions in German Sign Language (DGS)

Project Participants

Project Description

While sign languages exhibit parallels to spoken languages in all key areas of linguistic description (Sandler & Lillo-Martin, 2006), they also have characteristics unique to the visual-gestural modality. The current project builds on one such characteristic, namely the availability of two paired manual articulators, which can move independently (in addition to various non-manual articulators). The coordination of the two hands allows, among other things, the encoding of information about the simultaneous occurrence of two events. In such cases, each hand represents one event participant, and the simultaneous presence of both hands represents the parallel existence of these referents in time and space (Perniss, 2007). The simultaneous encoding of information is a much-researched hallmark of sign languages (Vermeerbergen et al., 2007), yet potential limitations on simultaneity have been explored to a much lesser extent.

This project investigates both linguistic and non-linguistic constraints on the simultaneous encoding of complex events in which two subevents occur at the same time. Using data from German Sign Language (DGS), I examine the expression of events that involve an externally caused change of state. An example of such an event is Mary hammered the spoon flat, since Mary's hammering caused a change in the degree of flatness of the spoon. Hammering and becoming flatter go ‘hand in hand’ here, a fact that a visual-gestural language can potentially express iconically: for example, one hand might represent the hammering while the other depicts the spoon flattening over time. Since depictions of the visual properties of an object are typically found in classifier constructions (CCs) in DGS and other sign languages, CCs form the focus of this project. In recent research, CCs have been analyzed as containing both linguistic and gestural components (e.g., Davidson, 2015). One important contribution of this project is therefore an empirically grounded analysis of the linguistic vs. gestural properties of CCs in DGS. I will conduct a series of corpus and experimental studies (production and acceptability) that demonstrate how iconic motivation, physiological and phonological constraints, and discourse-pragmatic factors interact to influence the simultaneous expression of externally caused changes of state in DGS vs. silent gesture. The aims are twofold: on the one hand, this project seeks a better understanding of how much simultaneity visual-gestural languages such as DGS allow; on the other hand, it aims to develop a model of simultaneous CCs.