Real-time sensing of 3D geometric information is essential for autonomous vehicles and robots to detect obstacles and the surrounding environment. In most autonomously maneuvering robots, relative motion between the 3D sensor and the target object is involved, so accurate 3D geometry, as well as the relative velocity, must be acquired for safe operation. In this paper, a small form-factor, scanner-based 3D sensing system and its operating architecture, the so-called homodyne mixing method, are presented together with experimental verification. Special attention is paid to improving accuracy in a small-size realization under relative motion between the sensor and objects, targeting autonomous working robots operating in various environments. In the homodyne mixing method, the phase delay induced by the time of flight of an amplitude-modulated light wave traveling between the camera and the object is measured indirectly [1]. The homodyne mixing method has lower computational and hardware complexity than other 3D sensing methods, is robust to external light, and lends itself to miniaturization. However, it is sensitive to relative movement between the sensor and the target object because it uses a continuously modulated light wave. In this paper, an improved light processing methodology is established to overcome this weakness under motion. The presented methodology is robust to relative movement and allows the measurement precision of 3D depth information to be controlled through a variable scanning field of view (FOV). As an application of the proposed 3D sensing device and system to recognition in robot systems, we propose a geometry recognition method that extracts typical geometric features of objects from the point-cloud data obtained by the 3D sensor.
The results show that the geometry of an object is recognized more quickly and accurately than with previous recognition techniques that use only an RGB color image [2]. By combining the sensor system with the object geometry recognition method, we provide a 3D object recognition solution for autonomous robots operating in undetermined environments. Experimental verification is presented to evaluate the 3D sensing system.
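The phase-delay principle described above can be illustrated with the standard four-bucket demodulation used in amplitude-modulated continuous-wave (AMCW) indirect time-of-flight sensing: four correlation samples taken at 0°, 90°, 180°, and 270° phase offsets yield the phase delay, which maps linearly to depth. This is a minimal generic sketch of that textbook scheme, not the authors' specific implementation; the function name and sample values are illustrative.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def depth_from_phase_samples(a0, a1, a2, a3, f_mod):
    """Estimate one-way depth from four correlation samples taken at
    0, 90, 180, and 270 degree phase offsets (four-bucket homodyne
    demodulation for AMCW time-of-flight sensing)."""
    phase = math.atan2(a3 - a1, a0 - a2)      # phase delay in [-pi, pi]
    phase = phase % (2 * math.pi)             # wrap to [0, 2*pi)
    # Round-trip path doubles the delay, hence 4*pi in the denominator.
    return C * phase / (4 * math.pi * f_mod)

# Example: 20 MHz modulation gives an unambiguous range of c/(2f) ~ 7.5 m.
d = depth_from_phase_samples(0.5, 0.0, 0.5, 1.0, 20e6)
```

Note the trade-off implicit in the formula: a higher modulation frequency improves depth resolution for a given phase-measurement precision, but shortens the unambiguous range c/(2f).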
[1] Joaquim Salvi et al., "A state of the art in structured light patterns for surface profilometry," Pattern Recognition, 2010.
[2] Fabio Remondino et al., "TOF Range-Imaging Cameras," 2013.
[3] Haidi Ibrahim et al., "Literature Survey on Stereo Vision Disparity Map Algorithms," Journal of Sensors, 2016.
[4] R. Lange et al., "Solid-state time-of-flight range camera," 2001.
[5] Horst Wildenauer et al., "Combining Geometry and Local Appearance for Object Detection," 20th International Conference on Pattern Recognition, 2010.
[6] Mohammed Bennamoun et al., "3D Object Recognition in Cluttered Scenes with Local Surface Features: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
[7] Yasuyoshi Yokokohji et al., IEEE International Conference on Robotics and Automation, 1992.
[8] Cordelia Schmid et al., "A Performance Evaluation of Local Descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005.
[9] Seung-Wan Lee et al., "Three-dimensional imaging using fast micromachined electro-absorptive shutter," 2013.
[10] Jim Austin et al., "A Machine-Learning Approach to Keypoint Detection and Landmarking on 3D Meshes," International Journal of Computer Vision, 2012.
[11] Quan Zhou, "Dynamics, characterization and control at the micro/nano scale" (full-day tutorial), IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, May 9-13, 2011.