A Probabilistic Appearance Representation and Its Application to Surprise Detection in Cognitive Robots

In this work, we present a novel probabilistic appearance representation and describe its application to surprise detection in the context of cognitive mobile robots. The luminance and chrominance of the environment are modeled by Gaussian distributions whose parameters are determined from the robot's observations using Bayesian inference. The parameters of the prior distributions over the mean and the precision of these Gaussian models are stored at a dense series of viewpoints along the robot's trajectory. The probabilistic representation yields the expected appearance of the environment and enables the robot to reason about the uncertainty of the perceived luminance and chrominance. It thereby provides a framework for detecting surprising events, which in turn facilitates attentional selection. In our experiments, we compare the proposed approach with surprise detection based on image differencing and show that our surprise measure is the more reliable detector of novelty.
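The abstract describes the representation only at a high level. As a concrete illustration, the minimal sketch below shows one possible per-pixel realisation: a conjugate Normal-Gamma prior over the mean and precision of each Gaussian model is updated with a new observation, and surprise is scored as the Kullback-Leibler divergence between the posterior and the prior, in the spirit of Itti and Baldi's Bayesian surprise. All function names, parameter names, and numerical values are illustrative assumptions and not the paper's actual implementation.

```python
# Hypothetical sketch: per-pixel Normal-Gamma model of luminance/chrominance
# with conjugate Bayesian updating and a KL-divergence ("Bayesian surprise")
# score. Names (mu0, kappa0, alpha0, beta0, surprise_map) are illustrative.

import numpy as np
from scipy.special import gammaln, digamma


def update_normal_gamma(x, mu0, kappa0, alpha0, beta0):
    """Conjugate update of a Normal-Gamma prior with one observation per pixel."""
    kappa_n = kappa0 + 1.0
    mu_n = (kappa0 * mu0 + x) / kappa_n
    alpha_n = alpha0 + 0.5
    beta_n = beta0 + kappa0 * (x - mu0) ** 2 / (2.0 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n


def kl_normal_gamma(mu1, k1, a1, b1, mu2, k2, a2, b2):
    """KL( NG(mu1,k1,a1,b1) || NG(mu2,k2,a2,b2) ), computed elementwise."""
    # Gamma part: KL( Gam(a1,b1) || Gam(a2,b2) ), rate parameterisation.
    kl_gamma = ((a1 - a2) * digamma(a1) - gammaln(a1) + gammaln(a2)
                + a2 * (np.log(b1) - np.log(b2)) + a1 * (b2 - b1) / b1)
    # Conditional Normal part, using E[lambda] = a1 / b1 under the first Gamma.
    kl_normal = 0.5 * (np.log(k1 / k2) + k2 / k1
                       + k2 * (a1 / b1) * (mu1 - mu2) ** 2 - 1.0)
    return kl_gamma + kl_normal


def surprise_map(observation, prior):
    """Per-pixel Bayesian surprise: KL between posterior and prior beliefs."""
    mu0, kappa0, alpha0, beta0 = prior
    posterior = update_normal_gamma(observation, mu0, kappa0, alpha0, beta0)
    s = kl_normal_gamma(*posterior, mu0, kappa0, alpha0, beta0)
    return s, posterior


# Usage: large values in the surprise map flag unexpected appearance changes.
h, w = 120, 160
prior = (np.full((h, w), 0.5),   # mu0: expected luminance
         np.full((h, w), 1.0),   # kappa0: pseudo-count for the mean
         np.full((h, w), 2.0),   # alpha0, beta0: prior over the precision
         np.full((h, w), 0.1))
frame = np.random.rand(h, w)     # stand-in for a registered camera image
s, prior = surprise_map(frame, prior)
```

In a full system along the lines of the abstract, one such prior would be maintained per pixel and colour channel at each stored viewpoint along the trajectory, with the posterior from each update serving as the prior for the next observation.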
