The Rapid Attention, Back and Forth, and Communication (Rapid ABC) assessment is a semi-structured play interaction in which an examiner engages a child in five activities intended to elicit social-communication behaviors and turn taking. The examiner scores the frequency and quality of the child's social behavior in each activity, producing a total score that reflects the child's social engagement with the examiner during the assessment. The standard Rapid ABC dataset contains a daunting amount of detail. We have produced a static version, staticMMDB, that captures the action-reaction dynamics of the assessment as individual frames. We conducted a user study to see whether subjects can predict a child's engagement from this material: we showed subjects either frames from our staticMMDB dataset or the full videos from the original MMDB dataset and found little difference in their performance. In this paper we show that computer vision methods can predict children's engagement: we automatically identify a child's ease of engagement and provide evaluation baselines for the task.
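As an illustration of what an evaluation baseline for frame-based engagement prediction might look like, the sketch below trains a simple linear classifier on per-frame features and reports cross-validated accuracy. The feature dimensionality, the binary engagement labels, and the stand-in random features are all assumptions made for the sake of a self-contained, runnable example; this is not the pipeline described in the paper, where features would come from an upstream extractor such as a face or pose model.

```python
# Minimal sketch of a frame-level engagement-classification baseline.
# Real per-frame features (e.g., face embeddings or pose keypoints) are
# assumed to be computed upstream; random vectors stand in for them here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_frames, feat_dim = 500, 128                # assumed sizes, not from the paper
X = rng.normal(size=(n_frames, feat_dim))    # stand-in per-frame feature vectors
y = rng.integers(0, 2, size=n_frames)        # assumed labels: 0 = low, 1 = high engagement

# A plain linear classifier serves as the evaluation baseline for the task.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

With real frame features, the same cross-validation loop would give a chance-level reference point against which stronger, learned models can be compared.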