Studying links between neurophysiology and behavior with the ARBIB autonomous robot

Submitted to the 5th International Conference on Cognitive and Neural Systems, Boston, MA, May 30–June 2, 2001.

Corresponding author: R. I. Damper
Presenting author: R. L. B. French
Emails: rid|rlbf98r@ecs.soton.ac.uk
Telephone: +44 23 80 594577
Fax: +44 23 80 594498
First choice: robotics (B)
Second choice: robotics (T)

R. I. Damper and R. L. B. French
Image, Speech and Intelligent Systems Research Group
Department of Electronics and Computer Science
University of Southampton, Southampton SO17 1BJ, UK

A goal of neuroscience is to establish the neurophysiological underpinnings of animal behavior. Because of the sheer complexity of the vertebrate nervous systems that underpin intelligent behavior, simple invertebrates have been studied in the hope of uncovering low-level neural mechanisms which might act as building blocks for complex behaviors (Hawkins and Kandel 1984). Computational neuroscience (Sejnowski, Koch, and Churchland 1988) and situated robotics (Harnad 1995) together give a powerful way of studying links between low-level neurophysiology and behavior. This is the path we pursue in our work with the ARBIB autonomous robot (Damper, French, and Scutt 2000). Earlier instantiations of ARBIB had several obvious shortcomings which limited biological realism and stood in the way of scaling from simple instinctive behaviors towards more complex, intelligent capabilities. In this paper, we describe recent developments aimed at overcoming these limitations.

ARBIB is unusual in its use of a network of spiking neurons, executed during ‘life’ on the Hi-NOON neural simulator (Damper, French, and Scutt, forthcoming; French and Damper, submitted (a)). Rather than popular learning mechanisms such as reinforcement, which operate at a relatively high level of abstraction, Hi-NOON synapses use low-level models of non-associative habituation and sensitization, and of associative classical conditioning. ARBIB’s experience of its environment leads to changes in synaptic strengths (‘learning’) which originally were far too plastic. Learning has been stabilized by adding a simple model of synaptogenesis (French and Damper 2000), thereby giving ARBIB a long-term memory and resolving the stability-plasticity dilemma (Grey Walter 1951; Carpenter and Grossberg 1988). Additionally, ARBIB was given a medium-term memory based on Grey Walter’s recurrent neural circuit.

As previously implemented, ARBIB’s nervous system was designed by hand, so its architecture was fixed by the imagination and prejudices of its programmer. Because neuroscience has not yet reached the point where we understand the relationships between nervous system structure and intelligent behavior, manual design seems likely to limit the potential of the nervous system to scale up, and arguably hampers progress towards complex, intelligent behaviors. A possible solution to this problem, which we have explored, is to construct ARBIB’s nervous system using the paradigm of evolutionary computation (French and Damper, submitted (b)).

The presented paper will demonstrate these developments with examples. The long-term memory was tested by comparing firing activity in bump sensory neurons with and without synaptogenesis. With synaptogenesis, knowledge gained through experience of the environment was stabilized, as shown by decreased bump sensor activity.
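To make these low-level plasticity mechanisms concrete, the sketch below (in Python, purely for illustration) shows a synapse whose efficacy habituates with repeated presynaptic activity, can be sensitized by a strong stimulus, and is consolidated by a crude synaptogenesis-like rule into a stable long-term component. The class name, parameters, and thresholds are our own assumptions for this example; they are not Hi-NOON's actual data structures or values.

    class PlasticSynapse:
        """Illustrative synapse: the plastic weight habituates with use, while a
        simple synaptogenesis-like rule consolidates sustained activity into a
        stable (long-term) component. All parameters are placeholders."""

        def __init__(self, weight=1.0, habituation_rate=0.05,
                     consolidation_threshold=20, growth_step=0.02):
            self.plastic_weight = weight        # short-term efficacy, decays with use
            self.stable_weight = 0.0            # long-term component built by "synaptogenesis"
            self.habituation_rate = habituation_rate
            self.consolidation_threshold = consolidation_threshold
            self.growth_step = growth_step
            self.activity_count = 0

        def on_presynaptic_spike(self):
            """Deliver a spike: return the effective weight, then habituate."""
            effective = self.plastic_weight + self.stable_weight
            # Non-associative habituation: repeated use weakens the plastic part.
            self.plastic_weight *= (1.0 - self.habituation_rate)
            self.activity_count += 1
            # Crude synaptogenesis: sustained activity grows a stable component,
            # so later habituation cannot wash out what has been learned.
            if self.activity_count >= self.consolidation_threshold:
                self.stable_weight += self.growth_step
                self.activity_count = 0
            return effective

        def sensitize(self, amount=0.1):
            """Non-associative sensitization from a strong (e.g. noxious) stimulus."""
            self.plastic_weight += amount

The stable component makes the point argued above: once consolidated, knowledge gained through experience is no longer at the mercy of ongoing habituation and sensitization, which is how the tension between stability and plasticity is resolved in spirit.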
The medium-term memory was tested using a sonar range sensor: ARBIB habituated to a target placed at a constant distance from it, but dishabituated (triggering activity in the recurrent circuit) to a transitory target that passed within close range of the sensor. Lastly, we successfully evolved a nervous system for an obstacle-avoidance competence in a simulated world. The evolved solution also showed robust wall-following behavior, which had not been specified in the fitness function. When transferred to a real robot, obstacle avoidance and wall-following were performed as in simulation.
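For readers unfamiliar with the approach, the following is a minimal sketch of the kind of generational genetic algorithm used in evolutionary computation, applied here to a flat vector of synaptic weights. It is not the procedure actually used to evolve ARBIB's nervous system: the simulate() function, the genome encoding, the population sizes and the fitness terms (forward progress penalized by collisions) are all assumptions introduced for this example.

    import random

    def evaluate(genome, simulate):
        """Fitness: forward progress minus a collision penalty (illustrative only;
        the fitness function actually used for ARBIB is not reproduced here)."""
        distance, collisions = simulate(genome)    # run the robot in simulation
        return distance - 10.0 * collisions

    def evolve(simulate, genome_len=64, pop_size=50, generations=100,
               mutation_rate=0.05):
        """Minimal generational GA over a flat vector of synaptic weights."""
        population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=lambda g: evaluate(g, simulate),
                            reverse=True)
            parents = scored[:pop_size // 5]            # truncation selection
            population = parents[:]
            while len(population) < pop_size:
                mum, dad = random.sample(parents, 2)
                cut = random.randrange(genome_len)      # one-point crossover
                child = mum[:cut] + dad[cut:]
                child = [w + random.gauss(0, 0.2)       # Gaussian mutation
                         if random.random() < mutation_rate else w
                         for w in child]
                population.append(child)
        return max(population, key=lambda g: evaluate(g, simulate))

As with the fitness function actually used, nothing in this illustrative fitness term mentions walls; in our experiments, robust wall-following nevertheless emerged as a by-product of selection for collision-free progress.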