Feature space trajectory representation for active vision

A new feature space trajectory (FST) description of 3D distorted views of an object is advanced for active vision applications. In an FST, different distorted object views are vertices in feature space. A new eigen-feature space and Fourier transform features are used. Vertices for adjacent distorted views are connected by straight lines, so that an FST is created as the viewpoint changes. Each different object is represented by a distinct FST. An object to be recognized is represented as a point in feature space; the closest FST denotes the class of the object, and the closest line segment on that FST indicates its pose. A new neural network is used to calculate these distances efficiently. We discuss its uses in active vision. Beyond an initial estimate of object class and pose, the FST processor can specify where to move the sensor to confirm class and pose, to grasp the object, or to focus on a specific object part for assembly or inspection. We advance initial remarks on how many aspect views are needed, and which ones, to represent an object.
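The nearest-FST decision rule described above reduces to point-to-line-segment distance computations in feature space. The sketch below is not from the paper: the neural network distance processor is replaced by a direct NumPy computation, and all names and data are hypothetical. It illustrates classifying a feature vector by the nearest segment across each class trajectory, with pose read off from the segment index and the interpolation fraction along that segment.

```python
# Minimal sketch (assumed, not the paper's implementation) of nearest-FST
# classification: each class is a trajectory of feature-space vertices ordered
# by viewpoint; a test feature vector is assigned to the class whose closest
# trajectory segment is nearest, and pose is interpolated along that segment.

import numpy as np

def point_to_segment(x, a, b):
    """Return (distance, t): distance from point x to segment ab and the
    fraction t in [0, 1] locating the closest point on the segment."""
    d = b - a
    denom = np.dot(d, d)
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(x - a, d) / denom, 0.0, 1.0))
    closest = a + t * d
    return np.linalg.norm(x - closest), t

def classify(x, fsts):
    """fsts: dict mapping class label -> array of vertices (n_views, n_features).
    Returns (class label, segment index, fraction along segment)."""
    best_dist, best = np.inf, (None, None, None)
    for label, vertices in fsts.items():
        for i in range(len(vertices) - 1):
            dist, t = point_to_segment(x, vertices[i], vertices[i + 1])
            if dist < best_dist:
                best_dist, best = dist, (label, i, t)
    return best

# Hypothetical usage: two 3-vertex FSTs in a 4-dimensional feature space.
rng = np.random.default_rng(0)
fsts = {"object_A": rng.normal(size=(3, 4)), "object_B": rng.normal(size=(3, 4))}
label, segment, frac = classify(rng.normal(size=4), fsts)
print(label, segment, round(frac, 3))
```

In this sketch the segment index identifies the pair of stored aspect views bracketing the estimated pose, and the fraction gives the interpolated viewpoint between them, mirroring how the closest line segment on an FST indicates pose.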
