Compact single-shot metalens depth sensors inspired by eyes of jumping spiders

Significance: Nature provides diverse solutions to passive visual depth sensing. Evolution has produced vision systems that are highly specialized and efficient, delivering depth-perception capabilities that often surpass those of existing artificial depth sensors. Here, we learn from the eyes of jumping spiders and demonstrate a metalens depth sensor that shares the compactness and high computational efficiency of its biological counterpart. Our device combines multifunctional metalenses, ultrathin nanophotonic components that control light at a subwavelength scale, with efficient computations to measure depth from image defocus. Compared with previous passive artificial depth sensors, our bioinspired design is lightweight, single-shot, and requires little computation. The integration of nanophotonics and efficient computation establishes a paradigm for design in computational sensing.

Abstract: Jumping spiders (Salticidae) rely on accurate depth perception for predation and navigation. Despite their tiny brains, they accomplish depth perception by using specialized optics: each principal eye includes a multitiered retina that simultaneously receives multiple images with different amounts of defocus, and from these images distance is decoded with relatively little computation. We introduce a compact depth sensor inspired by the jumping spider. It combines metalens optics, which modify the phase of incident light at a subwavelength scale, with efficient computations to measure depth from image defocus. Instead of using a multitiered retina to transduce multiple simultaneous images, the sensor uses a metalens to split the light that passes through an aperture and concurrently form two differently defocused images at distinct regions of a single planar photosensor. We demonstrate a system that deploys a 3-mm-diameter metalens to measure depth over a 10-cm distance range, using fewer than 700 floating-point operations per output pixel. Compared with previous passive depth sensors, our metalens depth sensor is compact, single-shot, and requires little computation. This integration of nanophotonics and efficient computation brings artificial depth sensing closer to being feasible on millimeter-scale, microwatt platforms such as microrobots and microsensor networks.
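To make the defocus computation concrete: under a standard thin-lens, small-blur approximation, the difference between two differently defocused images of the same scene is proportional to the Laplacian of their mean, with a proportionality factor that is roughly affine in inverse depth. The sketch below (Python with NumPy/SciPy) shows how depth could be recovered per pixel from such an image pair; the function name `depth_from_defocus` and the constants `alpha`, `beta`, `sigma`, and `eps` are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch of depth from differential defocus, assuming two
# registered, differently defocused images of the same scene (as formed
# side by side on the sensor described above). alpha and beta stand in
# for calibration constants; eps is a texture/contrast threshold.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def depth_from_defocus(I_plus, I_minus, alpha=1.0, beta=0.0,
                       sigma=2.0, eps=1e-3):
    """Estimate per-pixel depth from two differently defocused images.

    Under the small-blur approximation,
        I_plus - I_minus  ~  k(Z) * laplacian(I_bar),
    where I_bar is the mean image and k(Z) is approximately affine in
    inverse depth Z**-1. Depth follows from the per-pixel ratio of the
    difference image to the Laplacian of the mean image.
    """
    I_bar = 0.5 * (I_plus + I_minus)

    # Light smoothing stabilizes the Laplacian against sensor noise.
    diff = gaussian_filter(I_plus - I_minus, sigma)
    lap = gaussian_filter(laplace(I_bar), sigma)

    # Defocus is unobservable in textureless regions, so only pixels
    # with sufficient Laplacian magnitude yield a valid estimate.
    valid = np.abs(lap) > eps
    ratio = np.where(valid, diff / np.where(valid, lap, 1.0), np.nan)

    # Hypothetical calibrated affine map from the ratio to inverse depth;
    # invalid pixels are reported as NaN.
    inv_depth = alpha * ratio + beta
    depth = np.where(valid, 1.0 / inv_depth, np.nan)
    return depth, valid
```

Only a few small convolutions and a per-pixel ratio are involved, which is in keeping with the sub-700-FLOP-per-pixel budget quoted above; in practice the mapping from ratio to depth would be fit by calibration, and textureless regions (where the Laplacian vanishes) are flagged as invalid rather than assigned a depth.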
