Hybrid architecture for the sensorimotor representation of spatial configurations

We investigate the hypothesis that the main representation underlying human navigation is not static and map-like, but rather of an inherently sensorimotor nature, i.e., that it results from a combination of sensory features and motor actions. This is suggested by recent psychological and neurobiological results, and receives further support from our own study of human navigation in manipulated virtual reality environments. To investigate the hypothesized sensorimotor representation, we design a hybrid architecture that integrates bottom-up processing of sensorimotor features with top-down reasoning based on the principle of maximum information gain. This architecture is implemented in an agent that operates in a virtual reality environment and needs only a minimal number of exploratory actions to orient itself within that environment.
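To make the maximum-information-gain principle concrete, the following sketch (a hypothetical illustration, not the project's actual implementation) shows how an agent could greedily select the exploratory action that most reduces the entropy of its belief over candidate locations; the action names, belief sizes, and likelihood matrices are assumed for the example.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete belief distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(belief, likelihoods):
    """Expected entropy reduction over candidate locations after one action.

    belief      : (n_locations,) prior probability of each location
    likelihoods : (n_outcomes, n_locations) P(outcome | location) for this action
    """
    prior_h = entropy(belief)
    outcome_probs = likelihoods @ belief      # P(outcome) under the current belief
    expected_posterior_h = 0.0
    for o, p_o in enumerate(outcome_probs):
        if p_o == 0:
            continue
        posterior = likelihoods[o] * belief / p_o   # Bayesian belief update
        expected_posterior_h += p_o * entropy(posterior)
    return prior_h - expected_posterior_h

def select_action(belief, action_models):
    """Pick the exploratory action with maximal expected information gain."""
    gains = {a: expected_information_gain(belief, lk)
             for a, lk in action_models.items()}
    return max(gains, key=gains.get)

# Toy example (hypothetical numbers): three candidate locations, two actions
belief = np.array([0.5, 0.3, 0.2])
action_models = {
    "turn_left": np.array([[0.9, 0.1, 0.5],     # P(landmark visible | location)
                           [0.1, 0.9, 0.5]]),   # P(landmark hidden  | location)
    "move_forward": np.array([[0.6, 0.6, 0.6],
                              [0.4, 0.4, 0.4]]),
}
print(select_action(belief, action_models))  # -> "turn_left" (more informative)
```

In this sketch, the uninformative action ("move_forward", whose outcome probabilities are identical across locations) yields zero expected gain, so the agent chooses the action whose sensory outcome best discriminates between candidate locations, mirroring the idea of orienting with a minimal number of exploratory actions.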