Utilizing latency for object recognition in real and artificial neural networks

A consistent analysis of a visual scene requires the recognition of different objects. In vertebrate brains, this could be achieved by synchronizing the activity of disjunct nerve cell assemblies.1–5 During such a process, cross-talk between spatially adjacent image parts occurs, preventing efficient synchronization. Temporal differences, naturally introduced by stimulus latencies in every sensory system, were utilized in this study to counteract this effect and to strongly improve network performance. To this end, in our model the image is 'spread out' in time as a function of contrast-dependent visual latencies, so that synchronization of cell assemblies occurs without mutual disturbance. The network model requires a direct link between visual latencies and the onset of synchronous oscillations in cortical cells. This link was confirmed experimentally.
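The temporal 'spreading out' described above can be illustrated with a minimal sketch. All names, the specific latency range, and the linear contrast-to-latency mapping below are assumptions for illustration only, not the authors' implementation; the only property taken from the text is that latency depends monotonically on stimulus contrast, so image parts of different contrast become active at different times.

```python
import numpy as np

def contrast_to_latency(contrast, l_min=20.0, l_max=100.0):
    """Assumed monotone mapping: higher contrast -> shorter latency (ms).

    The linear form and the 20-100 ms range are illustrative choices,
    not values taken from the study.
    """
    c = np.clip(contrast, 0.0, 1.0)
    return l_max - c * (l_max - l_min)

# Two hypothetical, spatially adjacent objects with different contrasts.
object_a = np.full(50, 0.9)   # high-contrast object
object_b = np.full(50, 0.3)   # low-contrast object
latencies = contrast_to_latency(np.concatenate([object_a, object_b]))

# Units sharing a latency can synchronize among themselves; the gap
# between the two onset times separates the assemblies in time,
# reducing cross-talk between adjacent image parts.
onset_a = latencies[:50].mean()
onset_b = latencies[50:].mean()
print(onset_a, onset_b)  # the high-contrast object responds first
```

In this toy setting the high-contrast object becomes active roughly 48 ms before the low-contrast one, so each assembly can synchronize in its own time window.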