A Learning Algorithm based on High School Teaching Wisdom

A learning algorithm based on high school teaching and learning practice is presented. The methodology is to continuously evaluate a student and to train them on the examples they repeatedly fail, until they can correctly answer all types of questions. This incremental learning procedure produces better learning curves by making the student concentrate their learning time on the failed examples. When used in machine learning, the algorithm trains a machine on the data with maximum variance in the feature space, which improves the generalization ability of the network. The algorithm has interesting applications in data mining, model evaluation, and rare-object discovery.
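
A minimal sketch of the evaluate-then-retrain loop in Python, assuming a logistic-regression "student" on synthetic two-blob data; the dataset, learning rate, and the names used below (predict, wrong) are illustrative choices, not the paper's implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class "question set": two Gaussian blobs in 2-D feature space.
    X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),
                   rng.normal(2.0, 1.0, size=(50, 2))])
    y = np.concatenate([np.zeros(50), np.ones(50)])

    # The "student": a logistic-regression model with weights w and bias b.
    w = np.zeros(2)
    b = 0.0
    lr = 0.1

    def predict(X):
        # Hard class decision from the current linear model.
        return (X @ w + b > 0).astype(float)

    for epoch in range(1000):
        # "Examination": evaluate the student on the full question set.
        wrong = predict(X) != y
        if not wrong.any():
            break  # every type of question is now answered correctly
        # "Remedial training": update the model only on the failed examples.
        for xi, yi in zip(X[wrong], y[wrong]):
            p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))  # sigmoid output
            w += lr * (yi - p) * xi
            b += lr * (yi - p)

    print(f"epochs used: {epoch + 1}, "
          f"training accuracy: {(predict(X) == y).mean():.2f}")

Because each pass updates only on currently misclassified points, the training effort naturally shifts toward the hardest (highest-variance) regions of the feature space, which is the effect the abstract describes.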
