Public speaking training with a multimodal interactive virtual audience framework

Abstract

We have developed an interactive virtual audience platform for public speaking training. The user's public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system allows us to compare different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences), and trained behaviors (e.g. general public speaking performance vs. specific behaviors).
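The sense-analyze-respond loop described above can be sketched as a simple rule-based mapping from sensed speaker features to audience reactions. This is a hypothetical illustration only: the feature names, thresholds, and reaction labels below are assumptions for the sketch, not the platform's actual implementation.

```python
# Hypothetical sketch of the sense -> analyze -> feedback loop; all
# names and thresholds are illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class SpeakerState:
    """Features a multimodal sensing pipeline might extract per time window."""
    gaze_at_audience: float  # fraction of time looking at the audience, 0..1
    speech_rate_wpm: float   # speaking rate in words per minute
    pause_ratio: float       # fraction of the window spent silent, 0..1


def audience_feedback(state: SpeakerState) -> list[str]:
    """Map sensed behavior to virtual-audience reactions (illustrative rules)."""
    reactions = []
    if state.gaze_at_audience < 0.4:
        reactions.append("audience members look away")  # prompt more eye contact
    if state.speech_rate_wpm > 180:
        reactions.append("audience leans back")         # speaker is rushing
    if state.pause_ratio > 0.5:
        reactions.append("audience fidgets")            # too many long pauses
    if not reactions:
        reactions.append("audience nods attentively")   # behavior on target
    return reactions
```

In a real system such rules would run continuously over sensor windows and drive both the virtual characters' animations and the generic visual widgets.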

Publication
Proceedings of the 2015 ACM International Conference on Multimodal Interaction, ICMI 2015
