Interactive Sensory Program for Affective Learning (InSPAL): An Innovative Learning Program Combining Interactive Media and Virtual Reality for Severely Intellectually Disabled Students

While special educational and training programs have been developed specifically for severely intellectually disabled (SID) students, little research has employed the latest advances in virtual reality (VR) technology and 3D motion recognition for this population. In this study we focus on the development of a unique psycho-educational program, the Interactive Sensory Program for Affective Learning (InSPAL), which combines natural-interface and virtual reality technologies with pedagogically designed VR learning scenarios to enhance the pre-learning skills of SID students. The InSPAL program offers SID students an environment in which they can actively interact with virtual learning scenarios, communicate in alternative ways, and develop a sense of mastery that enhances their learning potential. This paper highlights the learning objectives, instructional design, and training flow for two of the learning domains of the InSPAL program. Our preliminary observations show that the SID students were able to interact with the virtual learning scenarios, and many were able to communicate by raising their hands after training.
