Modern society confronts a huge volume of information that must be transformed into knowledge. One of the most relevant aspects of knowledge extraction is the detection of outliers. Numerous algorithms have been proposed for this purpose; however, not all of them are suitable for very large data sets. In this work, a new approach is presented that detects outliers in very large data sets within a limited execution time. The algorithm views the tuples as N-dimensional particles, each able to create a potential well around itself. The potential created by all the particles is then used to discriminate the outliers from the objects composing clusters. In addition, the capacity to be parallelized was a key point in the design of the algorithm. In this proof of concept, the algorithm is tested using sequential and parallel implementations. The results demonstrate that the algorithm can process large data sets with an affordable execution time, thereby overcoming the curse of dimensionality.
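The core idea above can be sketched in a few lines: each tuple contributes a potential well, the wells are summed at every point, and points feeling only a shallow total potential are flagged as outliers. The sketch below is an illustration only, not the paper's implementation; the Gaussian shape of the well, the `sigma` width, and the `contamination` threshold are all assumptions introduced here.

```python
import math

def potential_outliers(points, sigma=1.0, contamination=0.05):
    """Score each point by the total 'potential' it feels from all others.

    Each tuple is treated as an N-dimensional particle creating a potential
    well around itself; here the well is modeled with a Gaussian kernel
    (an assumption -- the abstract does not specify the well's shape).
    Points inside clusters accumulate a deep summed potential, while
    outliers, far from everything else, feel almost none.
    """
    n = len(points)
    potentials = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if i == j:
                continue  # skip the point's own contribution
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            total += math.exp(-d2 / (2.0 * sigma ** 2))
        potentials.append(total)
    # Flag the fraction of points with the shallowest summed potential
    k = max(1, int(contamination * n))
    cutoff = sorted(potentials)[k - 1]
    flags = [p <= cutoff for p in potentials]
    return potentials, flags
```

Note that this naive version is O(n^2) in the number of tuples; the outer loop over `i` is embarrassingly parallel, which is consistent with the abstract's emphasis on parallelizability, but any claims about the paper's actual complexity or partitioning scheme would have to come from the full text.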