Efficient Full-Field Operational Modal Analysis Using Neuromorphic Event-Based Imaging

As an alternative to traditional sensing methods, video camera measurements offer a non-contact, cost-efficient, full-field platform for operational modal analysis. However, frame-based video cameras record large amounts of redundant background data, making video processing computationally inefficient. This work explores the use of a silicon retina imager to perform operational modal analysis; the silicon retina provides an efficient alternative to standard frame-based video cameras. Modeled after the biological retina, each silicon retina pixel independently and asynchronously records changes in intensity. By recording only intensity-change events, all motion information is captured without redundant background data. This asynchronous, event-based data representation allows motion to be captured on the microsecond scale, equivalent to a traditional camera operating at thousands of frames per second. With minimal data-storage and processing requirements, the silicon retina shows promise for real-time vibration measurement and structural control applications. This study takes a first step toward these applications by adapting existing frame-based video modal analysis techniques to operate on event-based silicon retina measurements. Specifically, blind source separation and video motion processing techniques are used to automatically extract vibration parameters from silicon retina data. The developed method is demonstrated on a cantilever beam.
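The pipeline described above, accumulating an asynchronous event stream into fixed-interval frames and then separating vibration modes with a blind source separation step, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic event generator, the 0.05 intensity-change threshold, the single 5 Hz "mode," and the SVD-based separation stand in for a real silicon retina recording and the paper's complexity-pursuit/phase-based processing.

```python
import numpy as np

def events_to_frames(events, n_pixels, t_bin, t_end):
    """Accumulate (timestamp, pixel, polarity) events into signed-count frames."""
    n_frames = int(np.ceil(t_end / t_bin))
    frames = np.zeros((n_frames, n_pixels))
    for t, px, pol in events:
        k = min(int(t / t_bin), n_frames - 1)
        frames[k, px] += pol  # +1/-1 events act as quantized intensity change
    return frames

# Synthetic event stream: one 5 Hz vibration mode observed by 8 pixels with a
# spatially varying amplitude (a crude stand-in for a mode shape).
fs = 2000.0                               # assumed event-generation sample rate
t = np.arange(0.0, 2.0, 1.0 / fs)
shape = np.linspace(0.2, 1.0, 8)          # assumed mode-shape amplitudes
events = []
for px, a in enumerate(shape):
    signal = a * np.sin(2 * np.pi * 5.0 * t)
    acc = 0.0
    for ti, dv in zip(t[1:], np.diff(signal)):
        acc += dv
        # emit an event whenever accumulated change crosses the threshold,
        # mimicking a temporal-contrast (DVS-style) pixel
        while abs(acc) > 0.05:
            events.append((ti, px, np.sign(acc)))
            acc -= np.sign(acc) * 0.05

t_bin = 0.005
frames = events_to_frames(events, n_pixels=8, t_bin=t_bin, t_end=2.0)

# Blind separation via SVD: left singular vectors give modal coordinates,
# right singular vectors give (unscaled) mode shapes.
U, s, Vt = np.linalg.svd(frames - frames.mean(axis=0), full_matrices=False)
coord = U[:, 0]
spectrum = np.abs(np.fft.rfft(coord))
freqs = np.fft.rfftfreq(len(coord), d=t_bin)
f_est = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
print(f"dominant frequency: {f_est:.1f} Hz")
```

Because the events encode intensity *change*, the binned event counts approximate the derivative of the pixel intensity, which oscillates at the same frequency as the underlying motion, so the dominant spectral peak of the first modal coordinate still recovers the vibration frequency.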

[1]  Tobi Delbruck,et al.  Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor , 2013, Front. Neurosci..

[2]  Yiannis Aloimonos,et al.  Bio-inspired Motion Estimation with Event-Driven Sensors , 2015, IWANN.

[3]  Peter Avitabile,et al.  Comparison of FRF measurements and mode shapes determined using optically image based, laser, and accelerometer measurements , 2011 .

[4]  Demeter G. Fertis,et al.  Mechanical And Structural Vibrations , 1995 .

[5]  David J. Fleet,et al.  Computation of component image velocity from local phase information , 1990, International Journal of Computer Vision.

[6]  Tobi Delbrück,et al.  A 128$\times$ 128 120 dB 15 $\mu$s Latency Asynchronous Temporal Contrast Vision Sensor , 2008, IEEE Journal of Solid-State Circuits.

[7]  Misha Mahowald,et al.  A silicon model of early visual processing , 1993, Neural Networks.

[8]  Frédo Durand,et al.  Phase-based video motion processing , 2013, ACM Trans. Graph..

[9]  Yongchao Yang,et al.  Blind modal identification of output‐only structures in time‐domain based on complexity pursuit , 2013 .

[10]  Charles R. Farrar,et al.  Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification , 2017 .

[11]  David J. Ewins,et al.  Modal Testing: Theory, Practice, And Application , 2000 .

[12]  Charles R. Farrar,et al.  Reference-free detection of minute, non-visible, damage using full-field, high-resolution mode shapes output-only identified from digital videos of structures , 2018 .

[13]  Tobi Delbrück,et al.  An embedded AER dynamic vision sensor for low-latency pole balancing , 2009, 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops.

[14]  E. Allier,et al.  Spectral analysis of level crossing sampling scheme , 2005 .

[15]  J. Mottershead,et al.  Frequency response functions of shape features from full-field vibration measurements using digital image correlation , 2012 .

[16]  Frédo Durand,et al.  Modal identification of simple structures with high-speed video using motion magnification , 2015 .