Researchers involved: Chiara Zulberti, Katja Liebal and Federica Amici (Compositional structures in chimpanzee gestural communication, University of Leipzig), Jana Bressem (external partner, TU Chemnitz), and Silva Ladewig (StabiGest, University of Göttingen)
Theoretical background
Research on gestural communication in non-human animals has traditionally relied on top-down approaches for the identification of different gesture types, mainly focusing on their function (e.g., begging rather than extended-arm gesture) to establish categories. The assignment of gesture units to predefined categories, however, may be problematic for two main reasons. First, top-down classifications are mostly based on researchers’ intuition and on the interpretation of the recipient’s response, and are thus largely subjective (Amici et al., 2022). For instance, gestures that occur frequently and/or in multiple contexts are more likely to be assigned to narrow, specific categories, rather than being considered variations of a few broader gestural types, raising concerns about the accuracy of these classifications (Hobaiter and Byrne, 2017). In chimpanzees, the size of the gestural repertoire varies from more than 100 gesture types (Roberts et al., 2014) to fewer than 30 (Pika et al., 2005), depending on how fine-grained the categories and the corresponding coding schemes are. Second, individuals may often produce non-prototypical gestures, and such variation may go undetected with top-down approaches (Amici et al., 2022). In chimpanzees, for example, there is substantial variation in the way gestures are instantiated through development (Bard et al., 2019), so that some gestures may not fit into specific categories.
In this project, we aim to address these limitations by implementing a novel bottom-up, form-based approach to reliably identify different gesture types in chimpanzee communication. This method will allow us to systematically assign gesture units to different categories based on their formal features, reducing the subjective bias of top-down classifications. Specifically, the collaboration proposed here has two main objectives:
- to develop a coding scheme that describes salient aspects and criteria of gestural forms in chimpanzees
- to apply this form-based coding scheme to the identification of chimpanzee gesture types
This novel approach will then be implemented, within our main ViCom project on “Compositional Structures in Chimpanzee Gestural Communication”, for the categorization of gestural units. Given that units are the building blocks of compositional structures, applying this methodology to the identification of gestural units will be a crucial first step towards investigating whether units can combine into longer sequences with novel meanings. Moreover, this approach will foster a comparative understanding of gestural communication by developing and testing a form-based tool for the categorization of gesture types in species other than humans. Thus, this collaboration will contribute not only to our main project, but also to the broader scope of ViCom, by creating a novel interdisciplinary tool that can be applied in other fields of investigation, and by promoting interdisciplinary discourse and the homogenization of research tools across research areas.
Methods
To develop a coding scheme for gestural forms in chimpanzees (first objective), we will adapt methods that are currently used for the study of human co-speech gestures. Silva Ladewig and Jana Bressem will be hosted for three days in Leipzig to train Chiara Zulberti, Federica Amici and Katja Liebal, as well as the master’s students working on the chimpanzee compositionality project, in the use of form-based coding systems. We will discuss how to adapt these coding systems to chimpanzees, based on our knowledge of inter-specific differences in morphology and behaviour, and we will implement them practically in ELAN.
We will apply this coding scheme to the identification of chimpanzee gesture types (second objective) by using ELAN to code videos of chimpanzee gestures. We will first detect gesture units and then code multiple aspects of their formal features. Recurrent online meetings among all collaborators will allow us to promptly identify problems during coding, along with possible solutions. We will then use cluster analyses of the presence/absence of these features in each gesture unit to identify “gesture unit groups”, based on small distances between cluster members along these multidimensional features. A hierarchical hard clustering approach will then allow us to identify the optimal number of clusters and to define gesture types based on the recurrent co-occurrence of multiple formal features in gesture units.
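The hierarchical hard clustering step described above could be sketched as follows. This is a minimal illustration, assuming SciPy as the analysis library; the feature matrix, feature names, and the choice of Jaccard distance with average linkage are all hypothetical placeholders, and the real analysis would select the number of clusters empirically (e.g., via silhouette or related criteria) rather than fixing it in advance:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical presence/absence matrix: rows = gesture units,
# columns = coded formal features (e.g., hand shape, movement type,
# body contact). Values are illustrative only.
units = np.array([
    [1, 0, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 0, 1],
], dtype=bool)

# Jaccard distance is one common choice for binary feature vectors.
dist = pdist(units, metric="jaccard")

# Agglomerative (hierarchical) clustering; linkage method is an
# assumption for illustration.
Z = linkage(dist, method="average")

# Hard cluster assignment: each gesture unit receives exactly one
# cluster label. The cluster count (t=2 here) stands in for the
# optimal number identified during the actual analysis.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

In this toy matrix, units 1, 2 and 5 share most features and fall into one "gesture unit group", while units 3 and 4 form another, mirroring the idea of grouping units by small multidimensional distances between their formal features.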
