Distributed Decision Tree

A Decision Tree is a tree-structured plan of attribute tests used to predict an output. MapReduce and Spark are programming models for processing data on a distributed file system. In this paper, the MapReduce and Spark implementations of the Decision Tree are named Distributed Decision Tree (DDT) and Spark Tree (ST), respectively. Decision Tree (DT), Ensemble of Trees (BT), DDT, and ST are compared on accuracy, tree size, and number of leaves of the generated tree(s). DDT and ST are empirically evaluated on 10 selected datasets. With DDT, tree size is reduced by 71% and 82% compared to DT and BT, respectively; with ST, it is reduced by 48% and 67%. The number of leaves is reduced by 70% and 81% with respect to DT and BT when using DDT, and by 45% and 65% when using ST. We also evaluated DDT and ST on the Yahoo! Webscope dataset; this evaluation shows an improvement in accuracy as well as reductions in tree size and number of leaves. Hence, DDT and ST outperform DT and BT with respect to tree size and number of leaves while maintaining adequate classification accuracy.
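For illustration, the sketch below shows how a decision tree can be trained on a distributed dataset with Spark MLlib and how the metrics compared above (accuracy, tree size, number of leaves) can be read off the fitted model. This is a minimal example, not the paper's actual ST implementation; the dataset path, column names, and tree parameters are hypothetical.

```python
# Minimal sketch of a distributed decision tree in Spark MLlib.
# Dataset path and feature/label column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("SparkTreeSketch").getOrCreate()

# Hypothetical CSV with numeric feature columns f1..f4 and an integer "label".
df = spark.read.csv("data/train.csv", header=True, inferSchema=True)
assembler = VectorAssembler(inputCols=["f1", "f2", "f3", "f4"],
                            outputCol="features")
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

tree = DecisionTreeClassifier(labelCol="label", featuresCol="features",
                              maxDepth=10)
model = tree.fit(train)

# Tree size: MLlib exposes the total node count and depth directly.
# Spark trees are binary, so the leaf count is (numNodes + 1) / 2.
print("numNodes:", model.numNodes, "depth:", model.depth,
      "leaves:", (model.numNodes + 1) // 2)

# Classification accuracy on the held-out split.
preds = model.transform(test)
acc = MulticlassClassificationEvaluator(labelCol="label",
                                        metricName="accuracy").evaluate(preds)
print("accuracy:", acc)
```

Because the DataFrame is partitioned across the cluster, split-candidate statistics are aggregated in parallel rather than on a single machine, which is what makes the Spark (and, analogously, MapReduce) formulation scale to large datasets.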
