Distinctive texture features from perspective-invariant keypoints

In this paper, we present an algorithm, similar in spirit to SIFT and SURF, for detecting and describing surface texture features. In contrast to approaches based solely on the intensity image, it uses depth information to achieve invariance under arbitrary changes of the camera pose. The algorithm constructs a scale-space representation of the image that preserves the real-world size and shape of texture features. In this representation, keypoints are detected using a Difference-of-Gaussian response. Normal-aligned texture descriptors are then computed from the intensity gradient, with the rotation around the normal normalized by a gradient-orientation histogram. We evaluate our approach on a dataset of planar textured scenes and show that it outperforms SIFT and SURF under large viewpoint changes.
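
To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the ideas described above: a depth-aware scale space in which each level corresponds to a fixed metric scale on the observed surface, a Difference-of-Gaussian response with a simple local-maximum test, and a gradient-orientation histogram for normalizing the in-plane rotation of a descriptor patch. It assumes a pinhole camera with known focal length `f_px` (in pixels) and a depth map in metres registered to the intensity image, and it approximates the space-variant blur by blending a few fixed-sigma blurs; all function and parameter names are illustrative.

```python
# Sketch of a depth-aware ("real-world size") scale space with a
# Difference-of-Gaussian response and a gradient-orientation histogram.
# Assumptions: pinhole camera with focal length f_px (pixels), dense
# depth map in metres aligned with the intensity image, intensity
# normalized to [0, 1]. Names and thresholds are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def depth_aware_scale_space(intensity, depth, f_px, world_scales, n_bins=8):
    """Blur the image so each level corresponds to a fixed metric scale
    on the observed surface: sigma_px = f_px * s_world / depth."""
    intensity = np.asarray(intensity, dtype=float)
    levels = []
    for s_world in world_scales:
        sigma_map = f_px * s_world / np.maximum(depth, 1e-3)
        # Approximate the space-variant blur by blending a few
        # fixed-sigma blurs (a cheap stand-in for a per-pixel filter).
        sigmas = np.linspace(sigma_map.min(), sigma_map.max(), n_bins)
        stack = np.stack([gaussian_filter(intensity, s) for s in sigmas])
        idx = np.clip(np.searchsorted(sigmas, sigma_map) - 1, 0, n_bins - 2)
        lo, hi = sigmas[idx], sigmas[idx + 1]
        w = (sigma_map - lo) / np.maximum(hi - lo, 1e-6)
        rows, cols = np.indices(intensity.shape)
        level = (1 - w) * stack[idx, rows, cols] + w * stack[idx + 1, rows, cols]
        levels.append(level)
    return np.stack(levels)

def dog_keypoints(levels, threshold=0.02):
    """Difference-of-Gaussian response between adjacent metric scales,
    followed by a naive 3x3x3 local-maximum test (slow, for clarity)."""
    dog = np.diff(levels, axis=0)
    keypoints = []
    for k in range(1, dog.shape[0] - 1):
        for y in range(1, dog.shape[1] - 1):
            for x in range(1, dog.shape[2] - 1):
                patch = dog[k - 1:k + 2, y - 1:y + 2, x - 1:x + 2]
                if dog[k, y, x] == patch.max() and dog[k, y, x] > threshold:
                    keypoints.append((x, y, k))
    return keypoints

def dominant_orientation(gx, gy, n_bins=36):
    """Gradient-orientation histogram (SIFT-style) used to normalize the
    rotation of a descriptor patch around the surface normal."""
    angles = np.arctan2(gy, gx)
    mags = np.hypot(gx, gy)
    hist, edges = np.histogram(angles, bins=n_bins,
                               range=(-np.pi, np.pi), weights=mags)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])
```

In this sketch the descriptor patch itself is omitted; the intent is only to show how depth turns a fixed metric scale into a per-pixel image-space sigma before the standard DoG and orientation-histogram machinery is applied.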

[1] R. Horaud et al., "Surface feature detection and description with applications to mesh matching," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

[2] J. McCormack et al., "Feline: fast elliptical lines for anisotropic texture mapping," SIGGRAPH, 1999.

[3] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, 2004.

[4] L. Van Gool et al., "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, 2008.

[5] J.-M. Frahm et al., "3D model matching with Viewpoint-Invariant Patches (VIP)," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.

[6] C. Schmid et al., "An Affine Invariant Interest Point Detector," European Conference on Computer Vision (ECCV), 2002.

[7] R. Koch et al., "Perspectively Invariant Normal Features," IEEE International Conference on Computer Vision (ICCV), 2007.
