A New Study on Distance Metrics as Similarity Measurement

Distance metrics are widely used for similarity estimation. In this paper we show that the popular Euclidean and Manhattan distances may not be suitable for all data distributions. We propose a general guideline for establishing the relation between a distribution model and its corresponding similarity estimate. Based on maximum likelihood theory, we derive new distance metrics, such as the harmonic distance and the geometric distance. Because feature elements may come from heterogeneous sources and usually have different influence on similarity estimation, it is inappropriate to model the distribution as isotropic. We therefore propose a novel boosted distance metric that not only finds the distance metric best fitting the distribution of the underlying feature elements but also selects the feature elements most important to similarity. The boosted distance metric is tested on fifteen benchmark data sets from the UCI repository and on two image retrieval applications. In all experiments, the proposed methods yield robust results.
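To illustrate the maximum-likelihood view described above: a metric is ML-optimal when it equals (up to a monotone transform) the negative log-likelihood of the element-wise differences under the assumed noise model. The Gaussian and Laplacian cases below are standard results; the harmonic and geometric forms shown are only illustrative guesses (means of absolute element-wise differences), since the paper derives its exact expressions from the assumed distributions.

```python
import math

def euclidean(x, y):
    # ML-optimal when element-wise noise is i.i.d. Gaussian
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    # ML-optimal when element-wise noise is i.i.d. Laplacian (double exponential)
    return sum(abs(a - b) for a, b in zip(x, y))

def harmonic_distance(x, y, eps=1e-12):
    # Illustrative form only: harmonic mean of absolute element-wise
    # differences (eps guards against division by zero).
    diffs = [abs(a - b) + eps for a, b in zip(x, y)]
    return len(diffs) / sum(1.0 / d for d in diffs)

def geometric_distance(x, y, eps=1e-12):
    # Illustrative form only: geometric mean of absolute element-wise
    # differences, computed in log space for numerical stability.
    diffs = [abs(a - b) + eps for a, b in zip(x, y)]
    return math.exp(sum(math.log(d) for d in diffs) / len(diffs))
```

Note how the harmonic and geometric forms are dominated by the smallest element-wise differences, whereas the Euclidean distance is dominated by the largest, which is why the best choice depends on the error distribution.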
