Dr. Andy Lücking
Goethe University Frankfurt
Andy Lücking’s research contributes to a linguistic theory of human communication, that is, of face-to-face interaction beyond single sentences. This involves the adaptation of dynamic dialogue semantics, the development of multimodal grammar extensions, occasionally the revision of traditional linguistic theories (e.g., of quantification or pointing), the use of corpora and computational methods (as in the ViCom project GeMDiS), and an overarching cognitive perspective. Andy Lücking received a PhD in linguistics (Dr. phil.) from Bielefeld University in 2011 for work on iconicity and iconic gestures. He defended his habilitation, “Aspects of multimodal communication”, in 2022 at the Laboratoire de Linguistique Formelle (LLF) at the Université Paris Cité.
- On the broader picture of multimodal communication:
Andy Lücking and Jonathan Ginzburg. “Leading voices: Dialogue semantics, cognitive science, and the polyphonic structure of multimodal interaction”. In: Language and Cognition 15.1 (January 2023), pp. 148–172. DOI: 10.1017/langcog.2022.30
- A dialogue- and gesture-friendly theory of quantification:
Andy Lücking and Jonathan Ginzburg. “Referential transparency as the proper treatment of quantification”. In: Semantics and Pragmatics 15, 4 (2022). DOI: 10.3765/sp.15.4. (Early access: https://semprag.org/index.php/sp/article/view/sp.15.4)
- A condensed, computational piece on how to assign meanings to (some kinds of) co-verbal gestures and integrate them in grammar:
Andy Lücking. “Modeling Co-verbal Gesture Perception in Type Theory with Records”. In: Proceedings of the 2016 Federated Conference on Computer Science and Information Systems. Ed. by Maria Ganzha, Leszek Maciaszek and Marcin Paprzycki. Vol. 8. Annals of Computer Science and Information Systems. IEEE, Sept. 2016, pp. 383–392. DOI: 10.15439/2016F83. (Available at: https://annals-csis.org/proceedings/2016/drp/83.html)
- Pointing gestures as search instructions:
Andy Lücking. “Witness-loaded and Witness-free Demonstratives”. In: Atypical Demonstratives. Syntax, Semantics and Pragmatics. Ed. by Marco Coniglio, Andrew Murphy, Eva Schlachter and Tonjes Veenstra. Linguistische Arbeiten 568. Berlin and Boston: De Gruyter, 2018, pp. 255–284. ISBN: 978-3-11-056029-9. (Preprint: https://www.researchgate.net/publication/303667514_Witness-loaded_and_Witness-free_Demonstratives)
Prof. Dr. Alexander Mehler
Goethe University Frankfurt
Alexander Mehler’s research interests include the automatic analysis and synthesis of language and multimodal data in spoken and written communication. To this end, he studies multimodal and multiplex networks derived from social and communication networks, using models of language evolution, machine learning, and complex network theory. He is particularly interested in measurement models that help overcome end-to-end learning and its issues. This involves models of multimodal semantics that focus on the explicit modelling of sign structures. Alexander Mehler develops and tests quantitative methods and machine learning models that merge with virtual and augmented reality. The goal of this research is to ground multimodal semantics in human behaviour in virtual worlds.
- Mehler, A., Gleim, R., Gaitsch, R., Uslu, T. & Hemati, W. (2020, online). From Topic Networks to Distributed Cognitive Maps: Zipfian Topic Universes in the Area of Volunteered Geographic Information. Complexity, vol. 4, pp. 1-47. DOI: 10.1155/2020/4607025
- Mehler, A., Hemati, W., Welke, P., Konca, M. & Uslu, T. (2020, online). Multiple Texts as a Limiting Factor in Online Learning: Quantifying Dissimilarities of Knowledge Networks across Languages. Frontiers in Education (Digital Education), pp. 1-31. DOI: 10.3389/feduc.2020.562670
- Mehler, A., Gleim, R., Lücking, A., Uslu, T., & Stegbauer, Ch. (2018). “On the Self-similarity of Wikipedia Talks: a Combined Discourse-analytical and Quantitative Approach”. In: Glottometrics 40, pp. 1–45
- Mehler, A., Lücking, A., & Abrami, G. (2014). “WikiNect: Image Schemata as a Basis of Gestural Writing for Kinetic Museum Wikis”. In: Universal Access in the Information Society, pp. 1–17. DOI: 10.1007/s10209-014-0386-8
- Mehler, A., Lücking, A. & Menke, P. (2012). “Assessing Cognitive Alignment in Different Types of Dialog by means of a Network Model”. In: Neural Networks 32, pp. 159–164. DOI: 10.1016/j.neunet.2012.02.013
Dr. Alexander Henlein
Goethe University Frankfurt
Alexander Henlein is a postdoctoral researcher at the Text Technology Lab (TTLab) of the Professorship for Computational Humanities / Text Technology of Prof. Dr. Alexander Mehler at Goethe University Frankfurt. His doctoral research focused on the analysis of spatial semantics in language models, the extraction of object habitats from images, and the development of a VR-based Text2Scene system. Building on this experience, he would like to develop VR-assisted communication systems within the scope of this project and use the data thus generated to create novel multimodal models.
- A. Henlein, A. Gopinath, N. Krishnaswamy, A. Mehler, J. Pustejovsky, “Grounding Human-Object Interaction to Affordance Behavior in Multimodal Datasets”, in Frontiers in Artificial Intelligence-Language and Computation, 2023 (accepted).
- A. Henlein and A. Mehler, “What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured,” in Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022), 2022
- A. Henlein, G. Abrami, A. Kett, C. Spiekermann, and A. Mehler, “Digital Learning, Teaching and Collaboration in an Era of ubiquitous Quarantine,” in Remote Learning in Times of Pandemic – Issues, Implications and Best Practice, L. Daniela and A. Visvizin, Eds., Thames, Oxfordshire, England, UK: Routledge, 2021.
- A. Henlein, G. Abrami, A. Kett, and A. Mehler, “Transfer of ISOSpace into a 3D Environment for Annotations and Applications,” in Proceedings of the 16th Joint ACL – ISO Workshop on Interoperable Semantic Annotation, Marseille, 2020, pp. 32-35.
- A. Henlein and A. Mehler, “On the Influence of Coreference Resolution on Word Embeddings in Lexical-semantic Evaluation Tasks,” in Proceedings of The 12th Language Resources and Evaluation Conference, Marseille, France, 2020, pp. 27-33.
Both corpus-based linguistics and contemporary computational linguistics rely on often large linguistic resources. The expansion of the linguistic subject area to include visual means of communication, such as gesticulation, has not yet been backed up with corresponding corpora. As a consequence, “multimodal linguistics” and dialogue theory cannot draw on the established distributional methods of corpus linguistics and computational semantics. The main reason for this is the difficulty of collecting multimodal data in an appropriate way and at an appropriate scale. Using the latest VR-based recording methods, the GeMDiS project aims to close this data gap. It investigates visual communication by machine-based methods, making innovative use of neural and active learning for small data along the systematic reference dimensions of associativity and contiguity of the features of visual and non-visual communicative signs. GeMDiS is characterised above all by the following features:
- Ecological validity: data collection takes place in dialogue situations and thus captures everyday and, in particular, interactive gestures. In this respect, GeMDiS differs from collections of partly emblematic handshapes or gestural charades.
- True multimodality: the VR-based recording technology records not only hand-and-arm movements and handshapes but also facial expressions — it is this proper multimodality that is the hallmark of natural language interaction. In this respect, GeMDiS already anticipates potential further developments of ViCom.
The corpus created in this way will be made available to the research community in accordance with the FAIR principles. The results of GeMDiS feed into social human-machine interaction, contribute to research on gesture families, and provide a basis for exploratory corpus analysis and further annotation. Furthermore, the project investigates to what extent the results obtained can serve formal semantics with respect to the input problem of meaning representation: in order to compute a multimodal meaning compositionally, the linguistic and the non-vocal parts of an utterance must first be associated with meanings, something that so far happens only intuitively. In the final phase of the project, a VR avatar will be developed as a playback medium for the previously recorded multimodal behaviour. This serves as a visual evaluation of the methodology. The avatar can also be used as an experimental platform, e.g. in cooperation with other projects.