Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS)

Because standard cameras sample the scene at constant time intervals, they provide no information in the blind time between subsequent frames. However, many high-speed robotic and vision applications require high-frequency measurement updates even during this blind time. This can be achieved with a novel vision sensor, called DAVIS, which combines a standard camera and an asynchronous event-based sensor in the same pixel array. The DAVIS encodes the visual content between two subsequent frames as an asynchronous stream of events that convey pixel-level brightness changes at microsecond resolution. We present the first algorithm to detect and track visual features using both the frames and the event data provided by the DAVIS. Features are first detected in the grayscale frames and then tracked asynchronously in the blind time between frames using the stream of events. To best exploit the hybrid characteristics of the DAVIS, features are built from large spatial contrast variations (i.e., visual edges), which are the source of most of the events generated by the sensor. We further present an event-based algorithm that tracks the features using an iterative, geometric registration approach. The performance of the proposed method is evaluated on real data acquired by the DAVIS.
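The iterative, geometric registration mentioned above can be illustrated with a minimal ICP-style sketch: incoming event coordinates are aligned to a feature's edge template by repeatedly matching each event to its nearest template point and updating the estimated displacement. This is an illustrative toy, not the paper's implementation; the translation-only model, brute-force nearest-neighbor search, and fixed iteration count are all simplifying assumptions.

```python
import numpy as np

def icp_translation(template, events, iters=10):
    """Estimate the 2-D translation aligning event points to an edge
    template via iterative closest point. Illustrative sketch only:
    the paper's tracker is richer (e.g., it handles general feature
    motion, not just translation)."""
    t = np.zeros(2)
    for _ in range(iters):
        shifted = events + t
        # Match each event to its nearest template point (brute force).
        d = np.linalg.norm(shifted[:, None, :] - template[None, :, :], axis=2)
        matches = template[np.argmin(d, axis=1)]
        # Translation-only least-squares update: the mean residual.
        t += (matches - shifted).mean(axis=0)
    return t

# Toy usage: two horizontal edge segments as the template, and events
# generated by displacing the template by (0.3, -0.2).
template = np.array([[x, 0.0] for x in range(5)] + [[x, 4.0] for x in range(5)])
events = template - np.array([0.3, -0.2])
print(icp_translation(template, events))  # recovers approximately [0.3, -0.2]
```

In practice a k-d tree would replace the brute-force matching, and events would be registered in small batches as they arrive, so the feature position is updated asynchronously between frames.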
