PARC: A Plan and Activity Recognition Component for Assistive Robots

Mobile robot assistants have many applications, such as helping people with their activities of daily living. To do so, these robots must detect and recognize the actions and goals of the humans they are assisting. While widespread plan and activity recognition solutions exist for controlled environments with many built-in sensors, such as smart homes, there is a lack of such systems for mobile robots operating in open settings, such as an apartment. We propose a module that enables mobile robots to recognize complex activities of daily living, and the goals behind them, in real time. Our approach recognizes human-object interactions with an RGB-D camera to infer low-level actions, which are then fed to a goal recognition algorithm. Results show that our approach runs in real time and requires few computational resources, which facilitates its deployment on a mobile, low-cost robotics platform.
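The abstract describes a two-stage pipeline: perception turns hand-object proximity into symbolic low-level action observations, and a goal recognizer accumulates those observations into goal hypotheses. The following is a minimal sketch of that data flow, not the paper's implementation; the `Detection`, `Observation`, and `GoalRecognizer` names, the distance-threshold interaction test, and the overlap-based goal scoring are all illustrative assumptions.

```python
# Minimal sketch of the perception-to-goal-recognition pipeline, assuming
# hypothetical components: an RGB-D object detector upstream produces
# `Detection`s (class label + 3-D position), proximity between the hand and
# an object yields low-level action `Observation`s, and a toy
# `GoalRecognizer` ranks goals by how many of their expected actions have
# been seen. None of these names come from the paper.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # object class from the detector, e.g. "cup"
    x: float            # 3-D position recovered from the depth map
    y: float
    z: float

@dataclass
class Observation:
    action: str         # inferred low-level action, e.g. "manipulate"
    obj: str            # object involved, e.g. "cup"

INTERACTION_RADIUS = 0.15  # metres; hand-object threshold (assumed value)

def infer_actions(hand: Detection, objects: list[Detection]) -> list[Observation]:
    """Turn hand-object proximity into low-level action observations."""
    observations = []
    for obj in objects:
        dist = ((hand.x - obj.x) ** 2
                + (hand.y - obj.y) ** 2
                + (hand.z - obj.z) ** 2) ** 0.5
        if dist < INTERACTION_RADIUS:
            observations.append(Observation(action="manipulate", obj=obj.label))
    return observations

class GoalRecognizer:
    """Toy stand-in for the goal recognition stage: scores each known goal
    by the fraction of its expected (action, object) pairs observed so far."""

    def __init__(self, goal_library: dict[str, set[tuple[str, str]]]):
        self.goal_library = goal_library
        self.history: set[tuple[str, str]] = set()

    def update(self, obs: Observation) -> dict[str, float]:
        self.history.add((obs.action, obs.obj))
        return {
            goal: len(expected & self.history) / len(expected)
            for goal, expected in self.goal_library.items()
        }
```

In the actual system, the paper's detector and goal recognition algorithm would replace these toy stand-ins; the sketch only illustrates the one-directional flow from perception to symbolic observations to goal hypotheses.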
