Testing the correlation between top-down prosodic annotation systems and bottom-up automatic annotation – Follow-up

This is a follow-up to the short-term collaboration Testing the correlation between top-down prosodic annotation systems and bottom-up automatic annotation.

This next phase extends our investigation into visual prosody and integrates it with our existing automatic acoustic analysis pipeline.

In the initial phase, we automatically identified acoustic correlates of different perceived-prominence annotations. Building on that work, we now use ZAS equipment to extract visual markers from video with OpenPose, a crucial step towards a comprehensive analysis pipeline that combines acoustic and visual data. The work has already attracted attention: we presented it at DGfS 2024 in Bochum, and we plan to submit an abstract to MMSYM 2024 in Frankfurt.
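For readers unfamiliar with the tooling: OpenPose writes one JSON file per video frame, with each detected person's keypoints stored as a flat [x, y, confidence, x, y, confidence, ...] list. A minimal sketch of how such a frame could be parsed into per-person marker triples (the function name and example coordinates are hypothetical, for illustration only):

```python
import json

def parse_pose_keypoints(frame_json):
    """Return a list of (x, y, confidence) triples for each person in one frame."""
    people = []
    for person in frame_json.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        # Regroup the flat list into (x, y, confidence) triples per keypoint.
        triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(triples)
    return people

# Hypothetical frame with one detected person and two keypoints:
frame = {"people": [{"pose_keypoints_2d": [120.0, 80.0, 0.9, 131.5, 95.2, 0.8]}]}
print(parse_pose_keypoints(frame))
# [[(120.0, 80.0, 0.9), (131.5, 95.2, 0.8)]]
```

In practice, the per-frame files would be loaded with `json.load` and the resulting marker trajectories time-aligned with the acoustic measurements.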