Neuroscience-Enabled Complex Visual Scene Understanding

Abstract: We have developed a new Bayesian framework for visual perception that combines bottom-up computational heuristics (including saliency maps) with top-down knowledge, in which high-level hypotheses guide low-level visual processing. Because this approach yields complex computations and a large search space of hypotheses for interpreting the visual data, we developed a number of new techniques to keep the system computationally tractable. In particular, we use probabilistic techniques reminiscent of recent approaches in probabilistic robotics, including Markov chain Monte Carlo (MCMC), data-driven MCMC (DDMCMC), and particle filters. In addition, we have completed experiments to elucidate the relationship between cognition and visual processing; this work provides important guidelines for further development of our computational vision frameworks. The key question addressed here is how humans may re-use brain regions evolutionarily associated with one form of processing (e.g., vision) to serve other forms of processing (e.g., algebra, or mentally memorizing and sorting strings of digits) that are too recent on an evolutionary time scale to have dedicated brain areas. This report describes both the project and its many applications to robotics, machine vision, and other fields.
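To give a concrete sense of the kind of probabilistic machinery mentioned above, the sketch below shows a minimal bootstrap particle filter of the sort used in probabilistic robotics. It is an illustrative toy, not the framework described in this report: the 1-D random-walk state model, the Gaussian noise parameters, and the function name are all assumptions made here for clarity.

```python
import math
import random

def particle_filter(observations, n_particles=500,
                    process_noise=1.0, obs_noise=1.0, seed=0):
    """Bootstrap particle filter for a toy 1-D random-walk state.

    Assumed models (illustrative only):
        state:       x_t = x_{t-1} + N(0, process_noise^2)
        observation: z_t = x_t     + N(0, obs_noise^2)
    Returns the posterior-mean estimate of x_t after each observation.
    """
    rng = random.Random(seed)
    # Initialize particles from a broad prior over the state.
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + rng.gauss(0.0, process_noise) for p in particles]
        # Weight: Gaussian likelihood of the observation under each particle.
        weights = [math.exp(-0.5 * ((z - p) / obs_noise) ** 2)
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate: weighted posterior mean of the state.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: draw a new particle set in proportion to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In a vision setting, the "state" would instead be a scene-interpretation hypothesis and the likelihood would score how well that hypothesis explains the image data; the predict-weight-resample loop is the same.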