1972 Proposal to Harvard for Backpropagation and Intelligent Reinforcement System

The great revolution since 2010 in deep learning and machine learning based on neural networks is massively changing the world, and is a subject of intense deliberation by high-level decision makers. (For example, see http://www.intelligence.senate.gov/hearings/open-hearing-worldwide-threats-hearing-1.) But the key design principles, such as backpropagation and reinforcement learning based on approximate dynamic programming (as in AlphaGo), were known, and were rejected as heresy, long ago. (For an overview, see the recent book by the President of the International Neural Network Society: https://www.amazon.com/Artificial-Intelligence-Neural-Networks-Computing-ebook/dp/B07K55YZRK.)

This paper, written in 1972 (and scanned into PDF in 2015), was the first explicit proposal for how to build a general reinforcement learning system, based on backpropagation and dynamic programming implemented through model neural networks, capable of converging to an optimal strategy of action in “any” environment, informed by an understanding/model which the system learns of how the environment works. Modern work uses more sophisticated language, but this effort to explain the underlying ideas in simpler terms may still be of value to many. The 1974 thesis itself has been reprinted by Wiley and has more than 4,000 citations listed in scholar.google.com.