SHREC'13 Track: Retrieval of Objects Captured with Low-Cost Depth-Sensing Cameras

The SHREC'13 Track: Retrieval of Objects Captured with Low-Cost Depth-Sensing Cameras is a first attempt at evaluating the effectiveness of 3D shape retrieval algorithms on databases of low-fidelity models, such as those captured with commodity depth cameras. Both the target and query sets are composed of objects captured with a Kinect camera, and the objective is to retrieve the models in the target set that are considered relevant according to a human-generated ground truth. Given how widespread such devices are, and how easy it is becoming for everyday users to capture models in their own household, the need for retrieval algorithms suited to these new types of 3D models is also increasing. Three groups participated in the contest, providing ranked lists for the query set, which is composed of 12 models drawn from the target set.
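
To illustrate how ranked lists are typically scored against a relevance ground truth in retrieval tracks of this kind, the sketch below computes a few standard shape-retrieval statistics (nearest-neighbor hit, first/second-tier recall, and average precision). This is a minimal, hypothetical example: the function names, metric choices, and data are assumptions for illustration, not the track's official evaluation code or results.

```python
# Illustrative sketch (not the track's official evaluation code):
# scoring a participant's ranked retrieval list for one query against a
# human-generated relevance ground truth, using common shape-retrieval
# statistics. All identifiers and data below are hypothetical.

def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant model appears."""
    hits, precisions = 0, []
    for i, model_id in enumerate(ranked, start=1):
        if model_id in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def evaluate_query(ranked, relevant):
    """Return (nearest-neighbor hit, first-tier recall, second-tier recall, AP)."""
    k = len(relevant)
    nn = 1.0 if ranked and ranked[0] in relevant else 0.0
    first_tier = len(set(ranked[:k]) & relevant) / k
    second_tier = len(set(ranked[:2 * k]) & relevant) / k
    return nn, first_tier, second_tier, average_precision(ranked, relevant)

if __name__ == "__main__":
    # Hypothetical query: its ranked retrieval list and its relevant set.
    ranked_list = ["m07", "m12", "m03", "m21", "m05", "m18"]
    ground_truth = {"m07", "m03", "m18"}
    print(evaluate_query(ranked_list, ground_truth))
```

In practice, such per-query statistics are averaged over all queries (12 in this track) to produce the summary figures reported for each participating method.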
