An open platform for full-body multisensory serious-games to teach geometry in primary school

Recent results from psychophysics and developmental psychology show that children have preferential sensory channels for learning specific concepts. In this work, we explore the possibility of developing and evaluating novel multisensory technologies for deeper learning of arithmetic and geometry. The main novelty of these technologies stems from a renewed understanding of the role of communication between sensory modalities during development: specific sensory systems play specific roles in learning specific concepts. This understanding suggests that it is possible to open a new teaching/learning channel, personalized for each student according to the child's sensory skills. We present and discuss multisensory interactive technologies that exploit full-body movement interaction, including a hardware and software platform supporting this approach. The platform is part of a more general framework developed in the context of the EU-ICT-H2020 weDRAW project, which aims to develop new multimodal technologies for multisensory serious games to teach mathematical concepts in primary school.