Voting with Random Classifiers (VORACE)

In many machine learning scenarios, searching for the best classifier for a particular dataset can be very costly in terms of time and resources, and it may require deep knowledge of the specific domain. We propose a new technique that does not require profound domain expertise and avoids the commonly used strategies of hyper-parameter tuning and model selection. Our method is an ensemble technique that applies voting rules to a set of randomly generated classifiers. Given a new input sample, we interpret the output of each classifier as a ranking over the set of possible classes. We then aggregate these rankings using a voting rule, treating them as preferences over the classes. We show that our approach obtains good results compared to the state-of-the-art, providing both a theoretical analysis and an empirical evaluation on several datasets.
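The aggregation step described above can be sketched with the Borda count, one common voting rule for combining rankings. This is a minimal illustrative sketch, not the authors' implementation: the function name, the choice of Borda over other rules, and the score matrix are all assumptions made here for demonstration.

```python
import numpy as np

def borda_aggregate(prob_matrix):
    """Aggregate classifier outputs with the Borda count.

    prob_matrix: (n_classifiers, n_classes) array where row i holds
    classifier i's scores/probabilities over the classes. Each row is
    read as a ranking: the top-ranked class earns n_classes - 1 points,
    the next earns n_classes - 2, and so on. The class with the highest
    total is the ensemble's prediction.
    """
    n_clf, n_classes = prob_matrix.shape
    points = np.zeros(n_classes)
    for row in prob_matrix:
        # argsort ascending: position j holds the class with the
        # j-th smallest score, so that class earns j Borda points
        order = np.argsort(row)
        for pts, cls in enumerate(order):
            points[cls] += pts
    return int(np.argmax(points))

# Hypothetical scores from three randomly generated classifiers
# over three classes for a single input sample
scores = np.array([
    [0.6, 0.3, 0.1],   # ranks class 0 first
    [0.2, 0.5, 0.3],   # ranks class 1 first
    [0.5, 0.1, 0.4],   # ranks class 0 first
])
print(borda_aggregate(scores))  # class 0 wins the Borda count
```

Note that the Borda winner can differ from the plurality winner: a class that is never ranked first may still win if it is consistently ranked highly, which is exactly why the choice of voting rule matters for such ensembles.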
