On the Optimization of a Synaptic Learning Rule

This paper presents a new approach to neural modeling based on the idea of using an automated method to optimize the parameters of a synaptic learning rule. The synaptic modification rule is considered as a parametric function. This function has local inputs and is the same in many neurons. We can use standard optimization methods to select appropriate parameters for a given type of task. We also present a theoretical analysis that permits the study of the generalization properties of such parametric learning rules. By generalization we mean the ability of the learning rule to learn to solve new tasks. Experiments were performed on three types of problems: a biologically inspired circuit for conditioning in Aplysia; Boolean functions, both linearly separable and non-linearly separable; and classification tasks. The neural network architecture, as well as the form and initial parameter values of the synaptic learning function, can be designed using a priori knowledge.
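To make the idea concrete, the following is a minimal sketch, not the paper's actual parameterization or optimization procedure: the local rule is expressed as a low-order polynomial in presynaptic activity, postsynaptic (teacher) signal, and the current weight, and its coefficients are selected by simple hill climbing on accuracy for a linearly separable Boolean task. The function names, the choice of local inputs, and the use of a teacher term as the postsynaptic signal are all assumptions made for illustration.

```python
import numpy as np

def delta_w(theta, pre, post, w):
    """Hypothetical parametric local rule: a low-order polynomial in
    presynaptic activity, a postsynaptic signal, and the current weight."""
    return (theta[0]
            + theta[1] * pre
            + theta[2] * post
            + theta[3] * pre * post
            + theta[4] * w)

def train_and_evaluate(theta, X, y, epochs=20, seed=0):
    """Train one linear threshold unit on task (X, y) with the parametric
    rule, then return its classification accuracy on that task."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, y):
            out = float(np.dot(w, x_i) > 0)        # binary unit output
            # Assumption: the error (target minus output) is available
            # locally as the postsynaptic signal, so the optimizer can
            # discover error-correcting behavior.
            w += delta_w(theta, x_i, t_i - out, w)
    preds = (X @ w > 0).astype(float)
    return float(np.mean(preds == y))

# Task: the linearly separable Boolean OR function (last input is a bias).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

# "Standard optimization" stands in here for simple stochastic hill
# climbing over the five rule parameters.
rng = np.random.default_rng(42)
theta = rng.normal(scale=0.1, size=5)
best = train_and_evaluate(theta, X, y)
for _ in range(200):
    candidate = theta + rng.normal(scale=0.05, size=5)
    score = train_and_evaluate(candidate, X, y)
    if score >= best:
        theta, best = candidate, score

print("accuracy of optimized rule:", best)
```

Note that the search space above contains the classical perceptron rule (the `theta[3] * pre * post` term with the error as postsynaptic signal), so the optimizer can recover a known-good rule; generalization in the paper's sense would then be assessed by applying the optimized parameters to tasks not seen during optimization.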