Robot lecture for enhancing presentation in lecture

In lectures using presentation slides, such as video-based e-learning lectures, it is important for lecturers to control their non-verbal behavior, which involves gaze, gesture, and paralanguage. However, even experienced lecturers find it difficult to use non-verbal behavior appropriately in their lectures to promote learners' understanding. This paper proposes robot lecture, in which a robot substitutes for human lecturers and reconstructs their non-verbal behavior to enhance their lectures. Towards such reconstruction, we have designed a model of non-verbal behavior in lecture. This paper also demonstrates a robot lecture system that appropriately reproduces the non-verbal behavior of human lecturers with the reconstructed behavior. In addition, this paper reports a case study with the system involving 36 participants, whose purpose was to ascertain whether robot lecture with reconstruction could control learners' attention more effectively, and be more beneficial for understanding the lecture contents, than a video lecture by a human or a robot lecture with simple reproduction. The results of the case study suggest that the system promotes learners' understanding of lecture contents, and they support both the necessity of reconstructing non-verbal behavior and the validity of the non-verbal behavior model.
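The paper does not specify an implementation here, but a reconstruction step of this kind can be sketched as a transformation over timed behavior annotations. The Python sketch below is a hypothetical illustration under that assumption: BehaviorEvent, reconstruct, and the channel and label names are all made up for exposition, not the authors' system. It shows one plausible rule, inserting a deictic pointing gesture where the recorded lecturer gazes at the screen without any accompanying gesture, so that the robot's reproduction directs learner attention to the slide region under discussion.

    # Hypothetical sketch: lecturer non-verbal behavior as timed events on
    # three channels (gaze, gesture, paralanguage), plus a toy
    # "reconstruction" pass run before a robot reproduces the lecture.
    # All names here are illustrative assumptions, not the authors' API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BehaviorEvent:
        start: float     # seconds from lecture start
        end: float       # seconds from lecture start
        channel: str     # "gaze", "gesture", or "paralanguage"
        label: str       # e.g. "look_at_screen", "point_at_slide", "pause"
        target: str = "" # optional referent, e.g. a slide region id

    def reconstruct(events: List[BehaviorEvent]) -> List[BehaviorEvent]:
        """Toy rule: when the lecturer gazes at the screen with no
        overlapping gesture, add a pointing gesture at the same span."""
        out: List[BehaviorEvent] = []
        for ev in events:
            out.append(ev)
            overlapping_gesture = any(
                g.channel == "gesture" and g.start < ev.end and g.end > ev.start
                for g in events
            )
            if (ev.channel == "gaze" and ev.label == "look_at_screen"
                    and not overlapping_gesture):
                out.append(BehaviorEvent(ev.start, ev.end, "gesture",
                                         "point_at_slide", ev.target))
        return sorted(out, key=lambda e: e.start)

    if __name__ == "__main__":
        recorded = [
            BehaviorEvent(0.0, 3.0, "gaze", "look_at_audience"),
            BehaviorEvent(3.0, 8.0, "gaze", "look_at_screen", "slide1:figure"),
            BehaviorEvent(3.0, 8.0, "paralanguage", "emphasis"),
        ]
        for ev in reconstruct(recorded):
            print(ev)

Running the sketch prints the recorded events plus one inserted point_at_slide gesture aligned with the screen-directed gaze; a real system would apply many such rules derived from the paper's non-verbal behavior model rather than this single hand-written one.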
