Bidirectional Neural Networks Reduce Generalization Error

Bidirectional Neural Networks (BDNNs) are based on Multi-Layer Perceptrons trained with the error back-propagation algorithm. They can be used both as associative memories and to find the centres of clusters.
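
The abstract gives no implementation details, so the following is only a minimal NumPy sketch of one plausible reading of the bidirectional scheme: a single-hidden-layer MLP whose weight matrices are trained by back-propagation in the forward direction (input to output) and, transposed, in the reverse direction (output back to input). The layer sizes, learning rate, and omission of bias terms are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a bidirectional MLP trained by back-propagation.
# Assumptions (not from the paper): one hidden layer, sigmoid units,
# no bias terms, and a reverse pass that reuses the transposed weights.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(a):
    # derivative of the sigmoid expressed via its output a = sigmoid(z)
    return a * (1.0 - a)

n_in, n_hid, n_out = 4, 8, 2                     # illustrative sizes
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))   # input  -> hidden
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output
lr = 0.1

def train_step(x, y):
    """One bidirectional update: a forward pass x -> y_hat and a reverse
    pass y -> x_hat through the same (transposed) weights; gradients from
    both directions are summed before the weights are updated."""
    global W1, W2

    # forward direction: x -> hidden -> y_hat
    h_f = sigmoid(W1 @ x)
    y_hat = sigmoid(W2 @ h_f)
    e_out = (y_hat - y) * d_sigmoid(y_hat)       # output-layer delta
    e_hid = (W2.T @ e_out) * d_sigmoid(h_f)      # hidden-layer delta
    dW2_f = np.outer(e_out, h_f)
    dW1_f = np.outer(e_hid, x)

    # reverse direction: y -> hidden -> x_hat, using W2.T and W1.T
    h_b = sigmoid(W2.T @ y)
    x_hat = sigmoid(W1.T @ h_b)
    e_in = (x_hat - x) * d_sigmoid(x_hat)        # "input" delta
    e_hb = (W1 @ e_in) * d_sigmoid(h_b)          # hidden delta, reverse pass
    dW1_b = np.outer(h_b, e_in)                  # gradient of W1 via W1.T
    dW2_b = np.outer(y, e_hb)                    # gradient of W2 via W2.T

    W1 -= lr * (dW1_f + dW1_b)
    W2 -= lr * (dW2_f + dW2_b)
    return float(np.sum((y_hat - y) ** 2) + np.sum((x_hat - x) ** 2))

# toy usage on random input/target pairs
for _ in range(200):
    loss = train_step(rng.random(n_in), rng.random(n_out))
```

In this reading, sharing one set of weights across both mappings constrains the network more than training two separate MLPs would, which is one hypothetical mechanism by which the bidirectional scheme could reduce generalization error.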
