In this paper, a real-time observation technique for tele-micro-operation is proposed, based on a dynamic focusing lens and a smart vision sensor using the "depth from focus" criterion. When manipulating micro objects, as in microsurgery or DNA manipulation, the small depth of focus of the microscope severely limits observability. For example, if the focus is on the object, the actuator cannot be seen through the microscope; conversely, if the focus is on the actuator, the object cannot be observed. An "all-in-focus image" is therefore useful for observing the micro environment under the microscope. However, one drawback of the all-in-focus image is that it carries no information about the depth of the objects. It is also important to reconstruct the micro 3D environment in real time in order to manipulate micro objects in a micro virtual environment. This paper first discusses the "depth from focus" criterion, which yields the all-in-focus image and the reconstruction of the micro 3D environment simultaneously. A real-time VR micro camera system is then proposed to realize the micro VR environment with the "depth from focus" criterion, i.e., the combination of the all-in-focus image and the micro 3D reconstruction. The system consists of a dynamic focusing lens, which can change the focal distance at high frequency, and a smart vision system, which is capable of capturing and processing image data at high speed with a SIMD architecture.
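To illustrate the "depth from focus" criterion described above, the following Python sketch shows one way to compute an all-in-focus image and a depth map simultaneously from a stack of differently focused frames. It is not the paper's implementation: the function names, the squared-Laplacian focus measure, and the synthetic input are assumptions made only for illustration, and the real system performs this selection on a SIMD smart vision sensor rather than in NumPy.

```python
import numpy as np

def focus_measure(image):
    """Per-pixel focus measure: squared Laplacian response.
    (An assumed choice; the paper's sensor may use a different criterion.)"""
    lap = np.zeros_like(image, dtype=np.float64)
    # 4-neighbour Laplacian computed with array shifts; borders stay zero.
    lap[1:-1, 1:-1] = (
        image[:-2, 1:-1] + image[2:, 1:-1] +
        image[1:-1, :-2] + image[1:-1, 2:] -
        4.0 * image[1:-1, 1:-1]
    )
    return lap ** 2

def depth_from_focus(stack, focal_distances):
    """Fuse a focus sweep into an all-in-focus image and a depth map by
    picking, per pixel, the frame with the sharpest (maximum) focus measure."""
    stack = np.asarray(stack, dtype=np.float64)
    measures = np.stack([focus_measure(frame) for frame in stack])
    best = np.argmax(measures, axis=0)             # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    all_in_focus = stack[best, rows, cols]         # pixel value taken from its sharpest frame
    depth_map = np.asarray(focal_distances)[best]  # focal distance of that frame
    return all_in_focus, depth_map

if __name__ == "__main__":
    # Synthetic example: an 8-frame focus sweep over a 64x64 scene.
    rng = np.random.default_rng(0)
    stack = rng.random((8, 64, 64))
    focal_distances = np.linspace(10.0, 80.0, 8)   # hypothetical focal distances
    aif, depth = depth_from_focus(stack, focal_distances)
    print(aif.shape, depth.shape)
```

In this per-pixel maximum-selection form, the all-in-focus image and the depth map fall out of the same sweep through the focal range, which is why a fast dynamic focusing lens combined with high-frame-rate on-sensor processing can deliver both in real time.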