Enabling the Sense of Self in a Dual-Arm Robot

While humans are aware of their bodies and capabilities, robots are not. To address this gap, we present a neural network architecture that enables a dual-arm robot to develop a sense of self in its environment. Our approach is inspired by the developmental levels of human self-awareness and serves as a building block for a robot to become aware of itself while carrying out tasks in an environment. We assume that a robot must know itself before interacting with the environment in order to support different robotic tasks. Hence, our architecture enables the robot to differentiate its limbs from the environment using visual and proprioceptive sensory inputs. We demonstrate experimentally that the robot distinguishes itself with an average accuracy of 88.7% in cluttered environments and under confounding input signals.
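
To make the idea of visuo-proprioceptive self-distinction concrete, the following is a minimal sketch, not the architecture reported in the paper: it assumes a convolutional visual encoder fused with an MLP over joint-angle readings, producing a per-pixel "self vs. environment" mask. All module names, layer sizes, the joint count, and the fusion strategy are hypothetical.

```python
# Hypothetical sketch of a visuo-proprioceptive self/other network.
# Layer sizes, fusion strategy, and output format are assumptions, not
# the architecture reported in the paper.
import torch
import torch.nn as nn

class SelfPerceptionNet(nn.Module):
    def __init__(self, num_joints=14):  # e.g. 7 joints per arm (assumed)
        super().__init__()
        # Visual encoder: downsamples the camera image to a feature map.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Proprioception encoder: embeds joint angles into a vector that is
        # later broadcast over the spatial feature map.
        self.proprio_encoder = nn.Sequential(
            nn.Linear(num_joints, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Decoder: fuses both modalities and predicts a per-pixel
        # self/other mask at the input resolution.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),  # logits: "self" vs. "environment"
        )

    def forward(self, image, joint_angles):
        vis = self.visual_encoder(image)            # (B, 64, H/4, W/4)
        prop = self.proprio_encoder(joint_angles)   # (B, 64)
        prop = prop[:, :, None, None].expand(-1, -1, vis.shape[2], vis.shape[3])
        fused = torch.cat([vis, prop], dim=1)       # (B, 128, H/4, W/4)
        return self.decoder(fused)                  # (B, 1, H, W) logits

# Usage sketch: label each pixel as robot body or environment.
net = SelfPerceptionNet()
image = torch.randn(1, 3, 128, 128)   # RGB camera frame
joints = torch.randn(1, 14)           # joint-angle readings
mask_logits = net(image, joints)
self_mask = torch.sigmoid(mask_logits) > 0.5
```

Broadcasting the proprioceptive embedding across the visual feature map is one common fusion choice; it lets the network correlate the limb configuration it expects from joint readings with what the camera actually shows, which is the kind of cross-modal check that self/other distinction relies on.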
