Learning data structures with inherent complex logic: a neurocognitive perspective

Computational systems still lag far behind biological systems in object recognition, reasoning, and the analysis of language structures. What kinds of data structures can be learned from data with existing machine learning algorithms? Neurocognitive inspirations show why existing learning systems cannot compete with biological ones. They point the way to more efficient algorithms that generate the simplest reliable models of data and are capable of recognizing objects with an undetermined number of features. The goal of learning in neural networks and other systems is to transform data into linearly separable clusters. This is sufficient for relatively simple problems, but it makes learning almost impossible when the logic inherent in the data is complex. New non-separable targets for learning are introduced that simplify learning and characterize non-separable problems in classes of growing complexity. Neurobiological and formal justifications for the new learning targets are given, and the case of Boolean functions is analyzed.
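The classic illustration of such non-separable targets is n-bit parity: no single hyperplane separates the two classes, yet one linear projection w·x with w = (1, …, 1) sorts all 2^n points into n + 1 class-pure clusters whose labels alternate along the projection line. The minimal Python sketch below (the helper name `parity_projection` is ours, not from the paper) verifies this for n = 3:

```python
import itertools

def parity_projection(n):
    """Project each n-bit Boolean vector onto w.x with w = (1, ..., 1).

    Returns a dict mapping each projection value s (the bit count) to the
    set of parity labels found at that value; each set should contain a
    single label, i.e. the clusters along the line are class-pure.
    """
    clusters = {}
    for bits in itertools.product([0, 1], repeat=n):
        s = sum(bits)        # linear projection with unit weights
        label = s % 2        # parity class of the vector
        clusters.setdefault(s, set()).add(label)
    return clusters

clusters = parity_projection(3)
# Every projected cluster is pure: one class label per bit count.
assert all(len(labels) == 1 for labels in clusters.values())
# The 2^3 points fall into n + 1 = 4 intervals with alternating labels.
print({s: min(labels) for s, labels in sorted(clusters.items())})
```

A linearly separable target would demand only two intervals on such a line; allowing k alternating intervals (k-separability) turns parity from an almost unlearnable problem into a trivial one, which is the simplification the new learning targets aim at.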
