Connectionist Statistical Inference

The yin and yang of science are synthesis and analysis, which ebb and flow in harmony with the intellectual currents of the ages. In the twentieth century, we have witnessed the power and glory of analysis: matter has been subdivided into molecules, molecules into atoms, atoms into electrons and nuclei, nuclei into hadrons, hadrons into quarks and gluons, …. The onward march to the ultimately small, requiring probes with energies approaching those unleashed in the creation of the universe, has reached a stage where further progress cannot be made without the pooled resources of many nations. In this pragmatic post-postmodern era, the remoteness of the submicroscopic world and its apparent irrelevance to the problems of the “real” world make it doubtful that the vigorous thrust of analysis that has dominated twentieth-century physics can be maintained. It is to be expected, then, that the tide will turn as the millennium draws to a close and that the twenty-first century will see a new and sophisticated phase of synthesis. We, as physicists, have broken the world apart into its elemental components, and now we must learn how to put these parts back together into a functioning whole: we must learn how to rebuild the world. This is the theme of the new science of complex systems. The story of this science will be the story of many-body physics par excellence: of how miraculous phenomena — life, thought, society — can arise out of the relationships and interactions of simple (or simplified) units or agents. We already see hints of the grandeur and excitement of this quest in the remarkable recent advances of dynamical systems theory (fractals, chaos, …) and of statistical physics (spin glasses, theories of learning, …). Quite obviously, it will be necessary to adopt a probabilistic view of nature in much of this work, and much can be gained by applying powerful techniques that already exist in mathematical statistics (but are little appreciated by physicists).
