Non-parametric detection of meaningless distances in high dimensional data

Distance concentration is the phenomenon whereby, under certain conditions, the contrast between the nearest and the farthest neighbouring points vanishes as the data dimensionality increases. It affects high-dimensional data processing, analysis, retrieval, and indexing, all of which rely on some notion of distance or dissimilarity. Previous work has characterised this phenomenon in the limit of infinite dimensions. However, real data is finite-dimensional, and hence the infinite-dimensional characterisation is insufficient. Here we quantify the phenomenon more precisely for the possibly high but finite-dimensional case, in a distribution-free manner, by bounding the tails of the probability that distances become meaningless. As an application, we show how this can be used to assess the concentration of a given distance function in an unknown data distribution solely on the basis of an available data sample from it. This can be used to test for and detect problematic cases more rigorously than is currently possible, and we demonstrate the approach on both synthetic data and ten real-world data sets from different domains.
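The sample-based assessment described above lends itself to a simple empirical check. The sketch below is a minimal illustration, not the paper's actual bounds or test: it estimates the relative variance of pairwise distances in a sample, a scale-free statistic that shrinks as distances concentrate. The function name `concentration_check`, the choice of Euclidean distance, and the 0.05 flagging threshold are assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed names and threshold, not the paper's procedure):
# estimate how strongly pairwise distances concentrate in a given data sample.
import numpy as np
from scipy.spatial.distance import pdist


def concentration_check(X, metric="euclidean", threshold=0.05):
    """Estimate the relative variance Var(D) / E[D]^2 of pairwise distances.

    Small values indicate that distances cluster tightly around their mean,
    i.e. the nearest/farthest neighbour contrast is weak in this sample.
    """
    d = pdist(X, metric=metric)          # all pairwise distances in the sample
    rel_var = d.var() / d.mean() ** 2    # scale-free concentration statistic
    return rel_var, rel_var < threshold  # statistic and a crude "concentrated?" flag


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for dim in (2, 10, 100, 1000):
        X = rng.standard_normal((200, dim))  # i.i.d. Gaussian sample of growing dimension
        rv, concentrated = concentration_check(X)
        print(f"dim={dim:5d}  relative variance={rv:.4f}  concentrated={concentrated}")
```

On i.i.d. Gaussian data the relative variance decays roughly like 1/d, so the printed statistic shrinks with dimension; a principled test, as in the paper, would replace the fixed threshold with a distribution-free tail bound on the probability that distances become meaningless.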
