Establishing the Three Theorems: The DP Optimally Self-Programs Logic Directly from Physics

In artificial intelligence (AI) there are two major schools: symbolic and connectionist. The Developmental Program (DP) self-programs logic into a Developmental Network (DN) directly from physics or data. Weng 2011 [6] proposed three theorems about the DN that bridge the two schools: (1) From any complex finite automaton (FA) that demonstrates human knowledge through its sequence of symbolic inputs and outputs, the DP incrementally develops a corresponding DN from the image codes of those symbolic inputs and outputs. The DN's learning from the FA is incremental, immediate, and error-free. (2) If, after learning the FA, the DN freezes its learning but continues to run, it generalizes optimally over infinitely many image inputs and actions, based on the embedded inner-product distance, state equivalence, and the principle of maximum likelihood. (3) If, after learning the FA, the DN continues to learn and run, it "thinks" optimally in the sense of maximum likelihood conditioned on its past experience. This paper presents the proofs of these three theorems.
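To give a concrete flavor of theorem (1), the following sketch is a deliberately minimal stand-in, not Weng's actual DN architecture or its update rules: an associative learner that stores each FA transition as a pair of one-hot vector codes (a toy substitute for the paper's image codes) and retrieves the next state by maximizing the inner product against stored patterns. Learning is one-pass and incremental, and after a single presentation of every transition the learner reproduces the FA without error.

```python
import numpy as np

# A small deterministic FA: states {0, 1}; input symbols {0, 1}.
# delta[(state, symbol)] = next_state
delta = {
    (0, 0): 1,
    (0, 1): 0,
    (1, 0): 1,
    (1, 1): 0,
}
N_STATES, N_INPUTS = 2, 2

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode(state, symbol):
    # Concatenated one-hot codes of state and input: a toy stand-in
    # for the image codes of the FA's symbolic inputs and outputs.
    return np.concatenate([one_hot(state, N_STATES), one_hot(symbol, N_INPUTS)])

class AssociativeLearner:
    """Hypothetical sketch, NOT the DN of Weng 2011: inner-product
    retrieval over incrementally stored transition patterns."""

    def __init__(self):
        self.keys = []    # stored (state, input) pattern vectors
        self.values = []  # corresponding next states

    def learn(self, state, symbol, next_state):
        # Incremental and immediate: each transition is stored
        # the moment it is observed, with no iterative training.
        self.keys.append(encode(state, symbol))
        self.values.append(next_state)

    def predict(self, state, symbol):
        # Retrieval by maximum inner product with stored patterns.
        x = encode(state, symbol)
        scores = [k @ x for k in self.keys]
        return self.values[int(np.argmax(scores))]

learner = AssociativeLearner()
for (s, a), t in delta.items():
    learner.learn(s, a, t)

# After one incremental pass, emulation of the FA is error-free.
errors = sum(learner.predict(s, a) != t for (s, a), t in delta.items())
print("mismatches:", errors)  # → 0
```

Because an exact (state, input) match scores strictly higher than any partial overlap under the one-hot encoding, the inner-product argmax always recovers the stored transition, which is the sense in which this toy learner is error-free on the trained FA.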