Attention-based smart-camera for spatial cognition

Bio-inspired attentional vision reduces post-processing to a few regions of the visual field. However, the computational complexity of most vision pipelines remains an issue for an embedded processing system such as a mobile, autonomous robot. In this paper we propose an attention-based smart-camera coupled with neural networks for place recognition in the context of robotic navigation missions. The smart-camera extracts points of interest in real time, using retina-like receptive fields at multiple scales, thanks to a dedicated hardware architecture prototyped on reconfigurable devices. Place recognition is computed by neural networks, inspired by hippocampal place cells, that code for both the descriptors ('what' information) and the locations ('where' information) of the points of interest provided by the smart-camera. We also experimented with adding a coarse-to-fine stage to the recognition process and obtained improved results in robot localisation experiments.
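The multi-scale extraction of points of interest described above can be sketched in software. This is a minimal illustration, assuming the retina receptive fields behave like difference-of-Gaussians (centre-surround) filters; the scale values, the 1.6 surround ratio, and all function names are illustrative choices, not the paper's hardware implementation.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3 sigma and normalised."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian filtering: rows first, then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def dog_keypoints(img, sigmas=(1.0, 2.0, 4.0), n_points=4):
    """Return (row, col, scale) of the strongest centre-surround
    (difference-of-Gaussians) responses at each scale."""
    points = []
    for s in sigmas:
        # centre minus surround response, one scale per receptive-field size
        dog = blur(img, s) - blur(img, 1.6 * s)
        strongest = np.argsort(np.abs(dog), axis=None)[-n_points:]
        for idx in strongest:
            r, c = np.unravel_index(idx, dog.shape)
            points.append((r, c, s))
    return points
```

Each returned triple carries both a location ('where') and the scale at which it responded; in the paper's model the descriptor around each point supplies the 'what' component fed to the place-cell network.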
