ViCom Data Network (new draft)

The ViCom Data Network aims to foster interdisciplinary collaboration within ViCom, to promote the reuse of ViCom research outside the network, and to increase, sustain, and archive the scientific returns of invested ViCom resources.

Data from ViCom projects

On the FLExibility and Stability of gesture-speecH coordination (FLESH):
Evidence from production, comprehension, and imitation


Aleksandra Ćwiek (ZAS), Šárka Kadavá (ZAS), Susanne Fuchs (ZAS), Wim Pouw (Donders & MPI)

Original data type:

(1) Production experiment with Polish native speakers producing counting-out rhymes

N = 11 (8 female, 3 male), mean age = 24.1 years, all right-handed, no self-reported speech, language, or hearing disorders

Motion data: C3D files (200 MB); acoustic data: WAV files (560 MB)
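For readers who want to get started with such data: the WAV audio can be inspected with the Python standard library alone, while the binary C3D motion files need a dedicated reader (e.g. the ezc3d package). A minimal sketch, assuming motion and audio files for the same recording share a filename stem such as P01_rhyme1 (a hypothetical convention, not necessarily the one used in FLESH):

```python
import wave
from pathlib import Path

def wav_duration_seconds(path):
    """Duration of a WAV file, computed from frame count and sample rate."""
    with wave.open(str(path), "rb") as w:
        return w.getnframes() / w.getframerate()

def pair_recordings(motion_dir, audio_dir):
    """Match each .c3d motion file to the .wav file with the same stem."""
    pairs = {}
    for c3d in sorted(Path(motion_dir).glob("*.c3d")):
        wav = Path(audio_dir) / (c3d.stem + ".wav")
        if wav.exists():
            pairs[c3d.stem] = (c3d, wav)
    return pairs
```

Pairing by stem up front makes it easy to keep the motion and acoustic streams aligned in later analyses.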

Useful tools shared by ViCom members

Participants-to-items-videos script
“For those cases where analysing experimental video data per item instead of per participant is desired, this script can automatically rearrange videos for you.”
(Door Spruijt)
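To illustrate what such a rearrangement involves, here is a minimal stand-alone sketch (not Door Spruijt's script). It assumes a hypothetical filename convention <participant>_<item>.mp4, e.g. P01_item03.mp4, and copies each video from its participant folder into a folder named after its item:

```python
import re
import shutil
from pathlib import Path

# Hypothetical convention: <participant>_<item>.mp4, e.g. "P01_item03.mp4".
FILENAME = re.compile(r"(?P<participant>[^_]+)_(?P<item>[^.]+)\.mp4$")

def rearrange_by_item(per_participant_dir, per_item_dir):
    """Copy videos from per-participant folders into one folder per item."""
    src = Path(per_participant_dir)
    dst = Path(per_item_dir)
    for video in src.rglob("*.mp4"):
        m = FILENAME.match(video.name)
        if m is None:
            continue  # skip files that do not follow the naming convention
        item_dir = dst / m.group("item")
        item_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(video, item_dir / video.name)
```

Copying (rather than moving) leaves the original per-participant layout untouched, so both views of the data remain available.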

Envision Toolbox

The Envision Toolbox contains coding modules and lectures tailored to the study of multimodal behavior and communication. The modules are intended as pedagogical examples to get researchers started on a particular mode of inquiry.
(Ćwiek, A., De Melo, G., Edelman, J., Owoyele, B., Pouw, W., Santuber, J., & Trujillo, J. Envision Toolbox: Multimodal (Signal) Processing and Analysis in Communication.)

Masked-Piper

“This Python notebook runs you through the procedure of taking videos with a single person in them as input, and outputting 1) a masked video with facial, hand, and arm kinematics overlaid, and 2) the kinematic time series.”
(Owoyele, B., Trujillo, J., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX. doi: )
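A common first step with kinematic time series like those Masked-Piper exports is deriving frame-to-frame speed for a keypoint of interest, such as a wrist. A minimal sketch in plain Python (not part of Masked-Piper; the function name and arguments are illustrative):

```python
import math

def keypoint_speed(track, fps):
    """Per-frame speed of a keypoint from consecutive (x, y) positions,
    in position units per second, given the video frame rate."""
    return [math.hypot(x2 - x1, y2 - y1) * fps
            for (x1, y1), (x2, y2) in zip(track, track[1:])]
```

Peaks in such a speed profile are one standard basis for locating candidate gesture strokes in a recording.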

Other multimodal data science tools

Red Hen Anonymizer
A tool to de-identify video and audio recordings


A toolset for 3D reconstruction of multiple body poses from multi-view video cameras

Segment Anything Model (SAM)
“A new AI model from Meta AI that can ‘cut out’ any object, in any image, with a single click”