Computational intelligence: from AI to BI to NI

This paper highlights the history of the neural network field, stressing the fundamental ideas that have been in play. Early neural network research was motivated mainly by the goals of artificial intelligence (AI) and of functional neuroscience (biological intelligence, BI), but the field almost died due to frustrations articulated in the famous book Perceptrons by Minsky and Papert. When I found a way to overcome the difficulties by 1974, the community mindset was very resistant to change; it was not until 1987/1988 that the field was reborn in a spectacular way, leading to the organized communities now in place. Even then, it took many more years to establish cross-disciplinary research in the types of mathematical neural networks needed to really understand the kind of intelligence we see in the brain, and to address the most demanding engineering applications. Only through a new (albeit short-lived) initiative funding cross-disciplinary teams of systems engineers and neuroscientists were we able to support the critical empirical demonstrations that put our old basic principle of "deep learning" firmly on the map in computer science. Progress has rightly been inhibited at times by legitimate concerns about the "Terminator threat" and other possible abuses of technology. This year, at SPIE, in the quantum computing track, we outline the next stage in breaking out of the box, again and again, and rising to the fundamental challenges and opportunities still ahead of us.
