An Efficient and Scalable Collection of Fly-Inspired Voting Units for Visual Place Recognition in Changing Environments

State-of-the-art visual place recognition performance is currently achieved with deep-learning-based approaches. Despite recent efforts to design lightweight convolutional neural network models, these can still be too expensive for the most hardware-restricted robot applications. Low-overhead visual place recognition techniques would not only enable platforms equipped with low-end, cheap hardware, but also reduce computation on more powerful systems, freeing those resources for other navigation tasks. In this work, our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations. Our first contribution is DrosoNet, an exceptionally compact model inspired by the odor processing abilities of the fruit fly, Drosophila melanogaster. Our second and main contribution is a voting mechanism that leverages multiple small and efficient classifiers to achieve more robust and consistent visual place recognition than a single one. Using DrosoNet as the baseline classifier for the voting mechanism, we evaluate our models on five benchmark datasets covering moderate to extreme appearance changes and small to moderate viewpoint variations. We then compare the proposed algorithms to state-of-the-art methods, both in terms of area under the precision-recall curve and computational efficiency.
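To make the voting idea concrete, below is a minimal, illustrative Python sketch, not the paper's actual implementation: each small unit (a stand-in for a DrosoNet instance, here reduced to a random projection) scores all reference places for a query image, casts one vote for its top-scoring place, and the place collecting the most votes wins. The VotingUnit class, the recognize_place function, the feature dimension, and the top-1 vote rule are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical illustration of the multi-unit voting idea from the abstract:
# several small, independently trained classifiers ("units") each score the
# reference places for a query, and the place that collects the most top-1
# votes is returned. The actual DrosoNet voting rule may differ.

class VotingUnit:
    """A stand-in for one compact classifier (e.g., a DrosoNet instance)."""

    def __init__(self, num_places: int, rng: np.random.Generator):
        # A random projection stands in for a trained model.
        self.weights = rng.standard_normal((num_places, 64))

    def scores(self, features: np.ndarray) -> np.ndarray:
        """Return one score per reference place for the query features."""
        return self.weights @ features


def recognize_place(units: list[VotingUnit], features: np.ndarray) -> int:
    """Aggregate top-1 votes across units; ties go to the lowest index."""
    num_places = units[0].weights.shape[0]
    votes = np.zeros(num_places, dtype=int)
    for unit in units:
        votes[np.argmax(unit.scores(features))] += 1
    return int(np.argmax(votes))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    units = [VotingUnit(num_places=100, rng=rng) for _ in range(32)]
    query = rng.standard_normal(64)  # stand-in for extracted image features
    print("predicted place:", recognize_place(units, query))
```

The appeal of this design, as the abstract argues, is that aggregating many weak but cheap classifiers trades a single large model's capacity for redundancy: each unit stays individually inexpensive, so the ensemble can still fit tight compute budgets while producing more consistent predictions than any one unit alone.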
