Dr. Cornelia Loos
University of Hamburg
Cornelia Loos received her Ph.D. in Linguistics from the University of Texas at Austin in 2017. After completing a two-year postdoc project on aspects of the syntax and semantics of German Sign Language (DGS) in Göttingen, as well as a second postdoc at the University of Hamburg working in the DGS-corpus project, she is currently a visiting professor at the Institute of German Sign Language and Communication of the Deaf in Hamburg. Her research interests cluster around the syntax-semantics interface, focusing on lexical semantics as well as experimental semantics and pragmatics. She works predominantly on signed languages and has investigated topics such as word class and sentencehood in American Sign Language (ASL) and DGS, the syntax and semantics of resultative constructions in these two languages, the influence of iconicity on the semantics of taboo language in DGS, and, most recently, response elements and NPIs in DGS.
- Loos, C., German, A., & Meier, R. P. (2022). Simultaneous structures in sign languages: Acquisition and emergence. Frontiers in Psychology, 7232. https://doi.org/10.3389/fpsyg.2022.992589
- Loos, C. (2022). Sizing up adjectives: Delimiting the adjective class in American Sign Language. Sign Language & Linguistics.
- Loos, C., & Napoli, D. J. (2021). Expanding echo: Coordinated head articulations as nonmanual enhancements in sign language phonology. Cognitive Science, 45(5). https://doi.org/10.1111/cogs.12958
- Loos, C. (2020). Quite a mouthful: Comparing speech act verbs in Nederlandse Gebarentaal and American Sign Language. Linguistische Berichte.
- Loos, C., Steinbach, M., & Repp, S. (2020). Affirming and rejecting assertions in German Sign Language. Proceedings of Sinn und Bedeutung 24.
While sign languages exhibit parallels to spoken languages in all key areas of linguistic description (Sandler & Lillo-Martin, 2006), they also have characteristics unique to the visual-gestural modality. The current project builds on one such characteristic, namely the availability of two paired manual articulators, which can move independently (in addition to various non-manual articulators). The coordination of the two hands allows, among other things, the encoding of information about the simultaneous occurrence of two events. In such cases, each hand represents one event participant, and the simultaneous presence of both hands represents the parallel existence of these referents in time and space (Perniss, 2007). Simultaneous encoding of information is a much-researched hallmark of sign languages (Vermeerbergen et al., 2007), yet potential limitations on simultaneity have been explored to a much lesser extent.
This project investigates both linguistic and non-linguistic constraints on the simultaneous encoding of complex events in which two subevents occur at the same time. Using data from German Sign Language (DGS), I examine the expression of events that involve an externally caused change of state. An example of such an event is Mary hammered the spoon flat, since Mary's hammering caused a change in the degree of flatness of the spoon. Hammering and becoming flatter go 'hand in hand' here, a fact that a visual-gestural language can potentially express iconically: one hand might represent hammering while the other depicts the spoon flattening over time. Since depictions of the visual properties of an object are typically found in classifier constructions (CCs) in DGS and other sign languages, CCs form the focus of this project. In recent research, they have been analyzed as containing both linguistic and gestural components (e.g., Davidson, 2015). One important contribution of this project is therefore an empirically grounded analysis of the linguistic vs. gestural properties of CCs in DGS. I will conduct a series of corpus and experimental studies (production and acceptability) that demonstrate how iconic motivation, physiological and phonological constraints, and discourse-pragmatic factors interact to influence the simultaneous expression of externally caused changes of state in DGS vs. silent gesture. The aims are twofold: on the one hand, this project seeks a better understanding of how much simultaneity visual-gestural languages such as DGS allow; on the other hand, it aims to develop a model of simultaneous CCs.