Editorial: Exploratory research in machine learning

Exploratory research contributes to the continued vitality of every discipline. Its aim is to identify new tasks: tasks that cannot be solved by existing methods. Once a new task has been found, exploratory research seeks to develop a precise definition of the task and to understand the factors that distinguish it from previously solved tasks. Until recently, most research in machine learning was primarily exploratory. During the past decade, however, some areas of the field, particularly inductive learning, have matured to the point that careful, quantitative experiments are now possible and formal theoretical results have been proved. Although these trends are extremely healthy and long overdue, there is a danger that the increased attention given to these products of mature research may discourage researchers from undertaking and publishing work of a more exploratory nature. The goal of this editorial is to emphasize the importance of exploratory research and to encourage the publication of high-quality exploratory results in Machine Learning.
