Engaging with the Scenario: Affect and Facial Patterns from a Scenario-Based Intelligent Tutoring System

Facial expression trackers output measures for facial action units (AUs) and are increasingly being used in learning technologies. In this paper, we compile patterns of AUs reported in related work and use factor analysis to search for categories implicit in our corpus. Although the factors in our data overlapped somewhat with those in previous work, we also identified factors reported in the broader literature but not previously observed in learning environments. In a correlational analysis, we found evidence of relationships between factors and self-reported traits such as academic effort, study habits, and interest in the subject. We also observed differences in average factor levels between a video-watching activity and a decision-making activity. However, we were not able to isolate any facial expression with a significant positive or negative relationship to learning gain or performance once question difficulty and related factors were taken into account. Given the overall low levels of facial affect in the corpus, further research will explore different populations and learning tasks to test the hypothesis that learners may have been in a pattern of "Over-Flow," in which they were engaged with the system but not thinking deeply about the content or their errors.
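To make the analysis pipeline concrete, the following is a minimal sketch (not the authors' code) of the kind of procedure the abstract describes: an exploratory factor analysis over AU intensity measures, followed by correlating factor scores with a self-reported trait. The file name, AU column names, and the "academic_effort" column are hypothetical placeholders.

```python
# Sketch of AU factor analysis and trait correlation, assuming one row per
# learner with mean AU intensities (columns "AU01"..."AU45") and survey scores.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("au_features_and_traits.csv")  # hypothetical input file
au_cols = [c for c in df.columns if c.startswith("AU")]

# Fit a rotated factor model to the AU measures to find co-occurring AU patterns.
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
factor_scores = fa.fit_transform(df[au_cols])

# Inspect loadings to interpret each factor as a facial-expression pattern.
loadings = pd.DataFrame(fa.components_.T, index=au_cols,
                        columns=[f"factor_{i}" for i in range(5)])
print(loadings.round(2))

# Correlate each factor with a self-reported trait (e.g., academic effort).
for i in range(factor_scores.shape[1]):
    r, p = pearsonr(factor_scores[:, i], df["academic_effort"])
    print(f"factor_{i}: r={r:.2f}, p={p:.3f}")
```

The number of factors (5) and the varimax rotation are illustrative choices; the paper's actual factor extraction and retention criteria may differ.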

[1] Javier R. Movellan et al. The Faces of Engagement: Automatic Recognition of Student Engagement from Facial Expressions, 2014, IEEE Transactions on Affective Computing.

[2] Kristy Elizabeth Boyer et al. The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring, 2014, ICMI.

[3] A. Graesser et al. Dynamics of affective states during complex learning, 2012.

[4] Henrik Singmann et al. afex – Analysis of Factorial EXperiments, 2015.

[5] Scotty D. Craig et al. Integrating Affect Sensors in an Intelligent Tutoring System, 2004.

[6] D. Keltner. Signs of appeasement: evidence for the distinct displays of embarrassment, amusement, and shame, 1995.

[7] Takeo Kanade et al. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression, 2010, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops.

[8] Matthew Jensen Hays et al. Can Role-Play with Virtual Humans Teach Interpersonal Skills?, 2012.

[9] A. Graesser et al. Confusion can be beneficial for learning, 2014.

[10] Brandon G. King et al. Facial Features for Affective State Detection in Learning Environments, 2007.

[11] Matthew S. Goodwin et al. Automated Detection of Facial Expressions during Computer-Assisted Instruction in Individuals on the Autism Spectrum, 2017, CHI.

[12] Kallirroi Georgila et al. Learning, Adaptive Support, Student Traits, and Engagement in Scenario-Based Learning, 2016.

[13] Earl Woodruff et al. Person-centered approach to explore learner's emotionality in learning within a 3D narrative game, 2017, LAK.

[14] Ryan Shaun Joazeiro de Baker et al. Using Video to Automatically Detect Learner Affect in Computer-Enabled Classrooms, 2016, TIIS.

[15] P. Ekman, W. V. Friesen. Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues, 2018.

[16] Louis-Philippe Morency et al. MultiSense—Context-Aware Nonverbal Behavior Analysis Framework: A Psychological Distress Use Case, 2017, IEEE Transactions on Affective Computing.

[17] Jesper Juul. Fear of Failing? The Many Meanings of Difficulty in Video Games, 2011.

[18] Arthur C. Graesser et al. Better to be frustrated than bored: The incidence, persistence, and impact of learners' cognitive-affective states during interactions with three different computer-based learning environments, 2010, Int. J. Hum. Comput. Stud.

[19] Arthur C. Graesser et al. Predicting Affective States expressed through an Emote-Aloud Procedure from AutoTutor's Mixed-Initiative Dialogue, 2006, Int. J. Artif. Intell. Educ.

[20] Kristy Elizabeth Boyer et al. Predicting Learning from Student Affective Response to Tutor Questions, 2016, ITS.

[21] Philip J. Guo et al. How video production affects student engagement: an empirical study of MOOC videos, 2014, L@S.

[22] Kristy Elizabeth Boyer et al. Predicting Learning and Affect from Multimodal Data Streams in Task-Oriented Tutorial Dialogue, 2014, EDM.

[23] Arthur C. Graesser et al. Emote aloud during learning with AutoTutor: Applying the Facial Action Coding System to cognitive–affective states during learning, 2008.

[24] Gwen Littlewort et al. The computer expression recognition toolbox (CERT), 2011, Face and Gesture 2011.

[25] P. Ekman et al. Facial action coding system, 2019.

[26] Jonathan P. Rowe et al. Integrating Learning, Problem Solving, and Engagement in Narrative-Centered Learning Environments, 2011, Int. J. Artif. Intell. Educ.

[27] Kristy Elizabeth Boyer et al. Automatically Recognizing Facial Expression: Predicting Engagement and Frustration, 2013, EDM.

[28] Kallirroi Georgila et al. Analyzing Learner Affect in a Scenario-Based Intelligent Tutoring System, 2017, AIED.