Subjectivity Learning Theory towards Artificial General Intelligence

The construction of artificial general intelligence (AGI) is a long-term goal of AI research: to handle the complex data of the real world and make reasonable judgments across varied situations, as a human does. Current AI systems, however, often referred to as "Narrow AI", are each limited to a specific problem. This limitation stems from two basic assumptions about data: that samples are independently and identically distributed (i.i.d.), and that inputs map to outputs through a single-valued function. We break both constraints and develop a subjectivity learning theory for general intelligence. We give the philosophical concept of subjectivity a mathematical meaning and build a data representation for general intelligence. Under the subjectivity representation, a global risk is then constructed as the new learning objective. We prove that subjectivity learning admits a lower risk bound than traditional machine learning. Moreover, we propose the principle of empirical global risk minimization (EGRM) as the subjectivity learning process in practice, establish the condition for its consistency, and present three variables for controlling the total risk bound. Subjectivity learning is a novel learning theory for unconstrained real data and offers a path toward developing AGI.
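For context, a minimal sketch of the classical empirical risk minimization (ERM) setup that EGRM generalizes; the notation below is standard statistical learning theory, not this paper's own definitions, and the exact form of the global risk is left to the paper. Given i.i.d. samples (x_i, y_i) drawn from a distribution P and a loss function \ell, ERM minimizes the empirical risk as a proxy for the expected risk:

\[
  \hat{R}_n(f) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr)
  \qquad\text{as a proxy for}\qquad
  R(f) \;=\; \mathbb{E}_{(x,y)\sim P}\bigl[\ell(f(x),\, y)\bigr].
\]

EGRM, as described above, replaces this single-task empirical risk with an empirical global risk defined over data that satisfy neither the i.i.d. assumption nor the single-valued input-output mapping.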
