A multimodal corpus approach to the design of virtual recruiters

Abstract

This paper presents an analysis of the multimodal behavior of experienced practitioners of job interview coaching, and describes a methodology for specifying their behavior in Embodied Conversational Agents acting as virtual recruiters that display different interpersonal stances. In a first stage, we collect a corpus of videos of job interview enactments and detail the coding scheme used to annotate multimodal behaviors and contextual information. From the annotations of the practitioners' behaviors, we observe behavioral specificities at several levels, namely monomodal behavior variations, inter-modality influences on behavior, and contextual influences on behavior. Finally, we propose adapting an existing agent architecture to model these specificities in a virtual recruiter's behavior.

Publication
Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013
