Decision making in assistive environments using multimodal observations

An assistive environment is a smart domestic space that uses pervasive computing to support elderly and disabled people. Unlike sensors, which provide only passive monitoring, a robot can act as an active element that improves the quality of life of the human. In this paper, we propose an active robot service for assistive environments that helps humans in emergency situations. The service is built on a hierarchical partially observable Markov decision process (POMDP), and series of multimodal observations drive its decision and evaluation processes. An active robot is one that can proactively provide a preferred and necessary service to the human; we employ such a robot in our emergency response system (ERS) to handle emergency situations, such as an older adult falling down or a sudden medical condition. The purpose of the multimodal observations is to improve the reliability of emergency reports. Four observation sources are introduced in this paper: vision recognition, voice recognition, physical input devices, and foreign systems; each source provides two observation series. Multiple information sources give the agent more opportunities to learn from the real world, and thus to make more reasonable predictions, evaluations, and decisions.
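To make the idea concrete, the following is a minimal sketch of one step of POMDP belief updating that fuses several conditionally independent observation sources, in the spirit of the four sources described above. All state names, probabilities, and source models here are illustrative assumptions, not values from the paper, and a real system would use a learned hierarchical model rather than this flat two-state example.

```python
import numpy as np

# Hypothetical two-state emergency-detection POMDP (all numbers are assumptions).
STATES = ["normal", "emergency"]

# Transition model P(s' | s) under a passive "monitor" action.
T = np.array([[0.98, 0.02],
              [0.05, 0.95]])

# Per-source observation models: P(source reports "alarm" | s'),
# indexed as [P(alarm | normal), P(alarm | emergency)].
SOURCES = {
    "vision":  np.array([0.10, 0.80]),
    "voice":   np.array([0.05, 0.60]),
    "device":  np.array([0.01, 0.70]),
    "foreign": np.array([0.08, 0.50]),
}

def belief_update(belief, observations):
    """One POMDP belief update: predict through T, then weight by the
    product of per-source likelihoods (sources assumed independent
    given the state), and renormalize."""
    predicted = belief @ T                      # P(s') = sum_s b(s) T(s, s')
    likelihood = np.ones(len(STATES))
    for src, saw_alarm in observations.items():
        p_alarm = SOURCES[src]
        likelihood *= p_alarm if saw_alarm else (1.0 - p_alarm)
    posterior = predicted * likelihood
    return posterior / posterior.sum()

b = np.array([0.99, 0.01])                      # start almost certain of "normal"
b = belief_update(b, {"vision": True, "voice": True,
                      "device": False, "foreign": True})
print(dict(zip(STATES, b.round(3))))
```

With three of the four sources reporting an alarm, the fused posterior shifts strongly toward the emergency state even from a confident "normal" prior, which is the benefit of combining multiple observation series rather than acting on any single source.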
