Reconstructing multiparty conversation field by augmenting human head motions via dynamic displays

A novel system is presented for reconstructing real-world multiparty face-to-face conversation scenes through dynamic displays that augment human head motion. The system aims to play back recorded conversations as if the remote participants were talking in front of the viewer. It consists of multiple projectors and transparent screens attached to actuators. The screens, which display life-size faces, are spatially arranged to recreate the geometry of the actual scene. Each screen's pose is dynamically synchronized with the recorded head motion of the corresponding participant, since head motion typically indicates shifts in visual attention. Our hypothesis is that physical screen motion, combined with on-screen image motion, can enhance the viewer's understanding of others' visual attention. Experiments suggest that viewers can more clearly discern the visual attention of meeting participants and more accurately identify the addressees.
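To make the screen-pose synchronization step concrete, the following is a minimal sketch in Python of how a recorded head-motion log could drive a pan-tilt screen actuator during playback. The HeadPose and PanTiltScreen names, the per-frame step limit, and the replay loop are all illustrative assumptions, not the authors' implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class HeadPose:
    t: float      # timestamp in seconds
    yaw: float    # horizontal head rotation, degrees
    pitch: float  # vertical head rotation, degrees

class PanTiltScreen:
    """Hypothetical actuator interface; a real system would drive servo motors."""
    def __init__(self, max_step_deg: float = 5.0):
        self.yaw = 0.0
        self.pitch = 0.0
        self.max_step = max_step_deg  # cap per-frame motion so the screen moves smoothly

    def move_toward(self, yaw: float, pitch: float) -> None:
        # Clamp each axis so the physical screen never jumps faster than the actuator allows.
        self.yaw += max(-self.max_step, min(self.max_step, yaw - self.yaw))
        self.pitch += max(-self.max_step, min(self.max_step, pitch - self.pitch))

def replay(log: list[HeadPose], screen: PanTiltScreen, fps: float = 30.0) -> None:
    """Replay a recorded head-motion log, updating the screen pose frame by frame."""
    for pose in log:
        screen.move_toward(pose.yaw, pose.pitch)
        time.sleep(1.0 / fps)  # pace playback; a real system would sync to the video frames

# Example: a participant turning their head from a neighbor on the left to one on the right.
log = [HeadPose(t=i / 30.0, yaw=-30.0 + 2.0 * i, pitch=0.0) for i in range(31)]
replay(log, PanTiltScreen())
```

The per-frame step limit stands in for whatever smoothing and rate limiting the physical actuators would require; without it, abrupt head turns in the recording would translate into jarring screen motion.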