IPrA Conference 2023, July 9–14, in Brussels
Call for Papers
Deadline for submission: November 8, 2022
Multimodal and prosodic markers of information structure and discourse structure
Organized by: Pilar Prieto & Frank Kügler
Human communication is a multimodal system in the sense that sounds, words, and utterances are usually accompanied by prosodic and body signals such as co-speech gestures (e.g., McNeill 1992). Co-speech gestures have been defined as visible communicative actions made by bodily movements (hand movements, head movements, among others) that co-occur with speech (e.g., Kendon 2004; see Wagner et al. 2014 for a review). Prosodic signals comprise word- and utterance-level elements such as tone, stress, pitch accents, or boundary tones that are key in signalling prosodic structure and information structure (e.g., Gussenhoven & Chen 2020 for a review).
Spoken language has generally been investigated as a unimodal phenomenon, and more work is needed to assess how visual and prosodic cues jointly contribute to the construction of meaning in discourse. From a multimodal perspective, it is well established that manual co-speech gestures are strongly connected to speech from three different perspectives, namely semantic, pragmatic, and phonological (Kendon 1980; McNeill 1992). Indeed, according to McNeill's (1992) three "synchrony rules", gestures are co-expressive with the semantic and pragmatic meaning expressed in speech, and prominent phases of co-speech gesture (i.e., the stroke) occur just before or simultaneously with pitch-accented (or prosodically prominent) syllables in speech. Even though work in recent decades has highlighted how prosodic and gestural features of language contribute in a systematic way to the marking of discursive and interactional functions in discourse (e.g., Kendon 1995, 2004, Swerts & Krahmer 2021, Brown & Prieto 2021, Debrelioska & Gullberg 2020, among others), more precise work is needed to determine the ways in which they jointly signal information and discourse structure across languages and thus help construct discourse meaning.
This panel will discuss a variety of work from different labs that are currently assessing the multimodal encoding of information structure (Baumann, Debrelioska, Perniss, our labs) or discourse structure (Ambrazaitis, Gullberg, House, Swerts, Zellers) from different perspectives and in different languages. We welcome abstracts that deal with the assessment of how multimodal markers encode information structure, sentence types, discourse structure, and/or speech acts, in both signed and non-signed languages. We hope that the panel will contribute to advancing our knowledge of how multimodal markers help encode discourse meaning and, more generally, help forge a paradigm in which pragmatic theories are based on language as a multimodal phenomenon.
When submitting your abstract via the IPrA website, please indicate the panel your submission is intended for. Abstracts should be 250–500 words long. The deadline for abstract submission is November 8, 2022.
Further information is provided by the conference guidelines for submission: