pyDVS: An extensible, real-time Dynamic Vision Sensor emulator using off-the-shelf hardware

Vision is one of our most important senses: a vast amount of information is perceived through our eyes, and neuroscientists have performed many studies using vision as the input to their experiments. Computational neuroscientists have typically used a brightness-to-rate encoding to turn images into spike-based visual sources, owing to its natural mapping. Recently developed neuromorphic Dynamic Vision Sensors (DVSs) have excellent capabilities, but they remain scarce and relatively expensive. We propose a visual input system inspired by the behaviour of a DVS, but using a conventional digital camera as the sensor and a PC to encode the images. By using readily available components, we believe most scientists would gain access to a realistic spiking visual input source. While our primary goal is to provide systems with a live, real-time input, we have also been successful in transcoding well-established image and video databases into spike-train representations. Our main contribution is a DVS emulator framework which can be extended, as we demonstrate by adding local inhibitory behaviour, adaptive thresholds and spike-timing encoding.
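The core of a frame-based DVS emulation like the one described above is per-pixel change detection: each incoming camera frame is compared against a per-pixel reference, and an ON or OFF event is emitted wherever the brightness change exceeds a threshold. The following is a minimal illustrative sketch of that idea, not the pyDVS implementation itself; the function name, array shapes and threshold value are all assumptions chosen for the example.

```python
import numpy as np

def dvs_emulate_frame(curr, ref, threshold):
    """One step of a DVS-style emulation (illustrative sketch, not pyDVS).

    Compares the current grayscale frame to a per-pixel reference,
    emits ON/OFF events where the brightness change exceeds the
    threshold, and updates the reference at pixels that spiked.
    """
    # Signed difference; cast up so uint8 inputs do not wrap around.
    diff = curr.astype(np.int16) - ref.astype(np.int16)
    on_events = diff >= threshold     # brightness increased enough
    off_events = diff <= -threshold   # brightness decreased enough
    spiking = on_events | off_events
    new_ref = ref.copy()
    new_ref[spiking] = curr[spiking]  # reference tracks last spiked value
    return on_events, off_events, new_ref

# Usage: two synthetic 4x4 grayscale frames
ref = np.full((4, 4), 128, dtype=np.uint8)
curr = ref.copy()
curr[0, 0] = 200   # brightening pixel -> ON event
curr[3, 3] = 40    # darkening pixel  -> OFF event
on, off, new_ref = dvs_emulate_frame(curr, ref, threshold=32)
```

Extensions such as the adaptive thresholds mentioned in the abstract would replace the scalar `threshold` with a per-pixel array that is raised after each spike and decays back over time.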
