MaxSet: An Algorithm for Finding a Good Approximation for the Largest Linearly Separable Set

Finding the largest linearly separable set of examples for a given Boolean function is an NP-hard problem that is relevant to neural network learning algorithms and to several problems that can be formulated as the minimization of a set of inequalities. In this work we propose a new algorithm based on finding a unate subset of the input examples, on which a perceptron is then trained to obtain an approximation to the largest linearly separable subset. On a large set of benchmark functions, the results of the new algorithm are compared with those obtained by applying the Pocket learning algorithm directly to the whole set of inputs, and show a clear improvement in the size of the linearly separable subset found.
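To illustrate the two-stage idea described in the abstract, the following minimal Python sketch first keeps a subset of examples consistent with a unate (per-variable monotone) labelling and then runs a pocket-style perceptron on that subset. It is not the authors' MaxSet implementation: the greedy unate-subset construction, the fixed variable polarities, and the simplified pocket update schedule are assumptions made only for this example.

    # Illustrative sketch (not the paper's code): unate filtering + pocket perceptron.
    import itertools
    import random

    def unate_consistent_subset(examples, polarities):
        """Greedily keep examples that do not violate unateness under the
        assumed per-variable polarities (+1 positive unate, -1 negative)."""
        kept = []
        for x, y in examples:
            ok = True
            for xk, yk in kept:
                dominates = all(p * (a - b) >= 0 for p, a, b in zip(polarities, x, xk))
                dominated = all(p * (b - a) >= 0 for p, a, b in zip(polarities, x, xk))
                # Under unateness, a dominating input cannot have a smaller label.
                if (dominates and y < yk) or (dominated and y > yk):
                    ok = False
                    break
            if ok:
                kept.append((x, y))
        return kept

    def pocket_perceptron(examples, epochs=500, seed=0):
        """Simplified pocket algorithm: plain perceptron updates, keeping
        ('pocketing') the weights that classify the most examples correctly."""
        rng = random.Random(seed)
        n = len(examples[0][0])
        w, b = [0.0] * n, 0.0
        best_w, best_b, best_correct = w[:], b, -1
        for _ in range(epochs):
            x, y = rng.choice(examples)
            t = 1 if y else -1
            if t * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x)]
                b += t
            correct = sum(
                1 for x2, y2 in examples
                if ((sum(wi * xi for wi, xi in zip(w, x2)) + b) > 0) == bool(y2)
            )
            if correct > best_correct:
                best_w, best_b, best_correct = w[:], b, correct
        return best_w, best_b, best_correct

    if __name__ == "__main__":
        # Toy 3-input Boolean function given by its truth table (majority, which is unate).
        examples = [(x, int(sum(x) >= 2)) for x in itertools.product([0, 1], repeat=3)]
        subset = unate_consistent_subset(examples, polarities=[1, 1, 1])
        w, b, n_correct = pocket_perceptron(subset)
        print(len(subset), n_correct)

In this toy run the whole truth table survives the unate filter because the majority function is itself unate; for a non-unate target the filter would discard conflicting examples before the perceptron stage.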
