Multimodal integration of visual place cells and grid cells for robot navigation

In the present study, we propose a model of multimodal place cells merging visual and proprioceptive primitives. We briefly introduce a new model of proprioceptive localization, giving rise to the so-called grid cells [Hafting2005], which is consistent with neurobiological studies on rodents. We then show how a simple conditioning rule between the two modalities can outperform purely vision-driven models. Experiments show that this model enhances robot localization and solves some benchmark problems for real-life robotics applications.
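As an illustration of how such a conditioning rule between the two modalities might look, below is a minimal NumPy sketch assuming a least-mean-squares (Widrow-Hoff) rule in which grid-cell activity learns to predict visual place-cell activity; the layer sizes, learning rate, and additive merge are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical layer sizes; the paper does not specify dimensions.
N_GRID = 30       # grid cells (proprioceptive pathway)
N_PLACE = 50      # visual place cells / multimodal output
LEARNING_RATE = 0.05  # assumed value

rng = np.random.default_rng(0)
# Plastic weights from the grid-cell layer onto the place-cell layer.
w = rng.normal(scale=0.01, size=(N_PLACE, N_GRID))


def conditioning_step(grid_act, visual_place_act):
    """One least-mean-squares conditioning step: grid-cell activity
    (conditional stimulus) learns to predict visual place-cell
    activity (unconditional stimulus)."""
    global w
    prediction = w @ grid_act                       # proprioceptive estimate
    error = visual_place_act - prediction           # prediction error
    w += LEARNING_RATE * np.outer(error, grid_act)  # Widrow-Hoff update
    # Merged multimodal response (additive merge is an assumption);
    # the proprioceptive prediction sustains localization when the
    # visual input is degraded or absent.
    return visual_place_act + prediction
```

Under this sketch, after the weights converge, calling `conditioning_step` with a zeroed `visual_place_act` still returns a usable place-cell response driven by the grid-cell prediction alone, which is one plausible reading of how the multimodal model could outperform a vision-only one.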