Visually-guided adaptive robot (ViGuAR)

A neural modeling platform known as Cog ex Machina1 (Cog), developed in the context of the DARPA SyNAPSE2 program, offers a computational environment that promises, in the foreseeable future, the creation of adaptive whole-brain systems subserving complex behavioral functions in virtual and robotic agents. Cog is designed to operate on low-power, extremely storage-dense memristive hardware3 that would support massively parallel, scalable computations. We report an adaptive robotic agent, ViGuAR4, developed as a neural model implemented on the Cog platform. The neuromorphic architecture of the ViGuAR brain is designed to support visually-guided navigation and learning, which, in combination with MoNETA5, a path-planning, memory-driven navigation agent also developed at the Neuromorphics Lab at Boston University, should effectively account for a wide range of key features of rodents' navigational behavior.