LARa: A Robotic Framework for Human-Robot Interaction in Indoor Environments

Human-robot interaction has received increasing attention in recent decades, since robots may act as both helpers and companions for elderly and impaired people, a role that is particularly important for an aging population. Robot platforms for this purpose may rely on data provided by several different sensors in order to act effectively and provide natural, engaging interactions. In this paper, we present the LARa framework, comprising a robot and a software library. Several relevant behaviors have been implemented and encapsulated, such as a robot face, speech interaction, recognition of faces, facial expressions, and objects, and robot navigation. A control architecture was built to integrate these modules into a functional robotic system. The developed framework may serve as a starting point for further research on robots in indoor environments. Results obtained within the scope of each module show good performance of the proposed control architecture.
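The abstract describes a control architecture that integrates independent behavior modules (robot face, speech, face/expression/object recognition, navigation) into one system. As a minimal sketch of how such module integration might look, the snippet below routes sensor percepts to registered behavior handlers; all names (`Percept`, `Controller`, the percept kinds) are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Percept:
    """A sensor event routed to behavior modules (e.g., a detected face)."""
    kind: str                                  # e.g., "face", "speech", "object"
    payload: dict = field(default_factory=dict)


class Controller:
    """Integrates independent behavior modules behind a single dispatch loop."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[Percept], str]]] = {}

    def register(self, kind: str, handler: Callable[[Percept], str]) -> None:
        """Subscribe a behavior module to one kind of percept."""
        self._handlers.setdefault(kind, []).append(handler)

    def dispatch(self, percept: Percept) -> List[str]:
        """Run every module subscribed to this percept kind; collect actions."""
        return [handler(percept) for handler in self._handlers.get(percept.kind, [])]


# Example wiring: a face-recognition module triggers a greeting behavior,
# and a speech module produces a reply.
controller = Controller()
controller.register("face", lambda p: f"greet:{p.payload.get('name', 'unknown')}")
controller.register("speech", lambda p: f"reply:{p.payload.get('text', '')}")

actions = controller.dispatch(Percept("face", {"name": "Alice"}))
# actions == ["greet:Alice"]
```

Decoupling perception from action selection this way lets each module be developed and tested in isolation, which matches the paper's goal of encapsulating behaviors behind a common architecture.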
