Dozing Off or Thinking Hard? Classifying Multi-dimensional Attentional States in the Classroom from Video

In this paper, we extract head pose, eye gaze, and facial expression features from video to estimate individual learners' attentional states in a classroom setting. We focus on analyzing different definitions of a student's attention and show that readily available, generic video processing components together with a single video camera are sufficient to estimate these attentional states.
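To make the pipeline concrete, the following is a minimal sketch in Python of how per-frame descriptors (e.g., head pose angles, gaze directions, and facial action unit intensities, such as those produced by a toolkit like OpenFace) might be aggregated into window-level statistics and fed to a classifier. The window statistics (mean and standard deviation), window length, the linear SVM, and the dummy data are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def window_features(frames, win=150, hop=75):
    """Aggregate per-frame descriptors into window-level statistics.

    `frames` is an (n_frames, n_features) array of raw descriptors;
    `win` and `hop` are window length and step in frames (assumed values).
    """
    windows = []
    for start in range(0, len(frames) - win + 1, hop):
        chunk = frames[start:start + win]
        # Mean captures the level, std the variability, of each raw
        # feature (pose, gaze, expression) within the window.
        windows.append(np.concatenate([chunk.mean(axis=0),
                                       chunk.std(axis=0)]))
    return np.asarray(windows)

# Stand-in data: in practice, per-frame vectors from a video toolkit
# (head pose angles, gaze angles, AU intensities) would go here.
rng = np.random.default_rng(0)
frames = rng.normal(size=(3000, 20))   # 3000 frames, 20 raw features
X = window_features(frames)
y = rng.integers(0, 2, size=len(X))    # dummy labels, e.g. 0 = off-task

# A simple linear classifier is one plausible choice for such
# low-dimensional window statistics (an assumption, not the paper's).
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
print(cross_val_score(clf, X, y, cv=5).mean())
```

In a real setup, the dummy arrays would be replaced by features extracted from classroom video and by attention labels obtained from ground-truth annotation, with cross-validation performed per student to avoid identity leakage.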
