Augmented Artificial Intelligence: a Conceptual Framework

All artificial intelligence (AI) systems make errors. These errors are unexpected and often differ from typical human mistakes ("non-human" errors). AI errors should be corrected without damaging existing skills and, ideally, without direct involvement of human expertise. This paper presents an initial summary report of a project that takes a new and systematic approach to improving the intellectual effectiveness of individual AIs through communities of AIs. We combine ideas of learning in heterogeneous multiagent systems with new and original mathematical approaches for non-iterative correction of errors in legacy AI systems. The mathematical foundations of non-destructive AI correction are presented, and a series of new stochastic separation theorems is proven. These theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. They demonstrate that in high dimensions, and even for exponentially large samples, linear classifiers in their classical Fisher form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem. In particular, we prove some hypotheses formulated in our paper `Stochastic Separation Theorems' (Neural Networks, 94, 255--259, 2017), and answer a general problem published by Donoho and Tanner in 2009.
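To make the Fisher-type separation claim concrete, the following is a minimal numerical sketch (an illustration, not code from the paper): it draws a large sample uniformly from the unit ball in a high-dimensional space, treats one point as an AI "error", and checks whether a simple linear functional built from that point alone separates it from all other samples. The values of d, n, and eps are illustrative choices, not values prescribed by the theorems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only: dimension, sample size, separation margin.
d, n, eps = 200, 10_000, 0.1

# Draw n points uniformly from the unit ball in R^d
# (random direction times a radius with density proportional to r^(d-1)).
directions = rng.standard_normal((n, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = rng.random(n) ** (1.0 / d)
points = directions * radii[:, None]

# Treat one point as the "error" to be isolated from the rest of the sample.
x, rest = points[0], points[1:]

# Fisher-type separating functional: x is separated from y whenever
# <x, y> <= (1 - eps) * <x, x>.  Stochastic separation theorems state that,
# in high dimension, this holds for all other points with high probability.
separated = np.all(rest @ x <= (1.0 - eps) * (x @ x))
print(f"Point separated from the remaining {n - 1} samples: {separated}")
```

In this regime the check almost always succeeds, which is what makes single-functional, non-iterative correctors of individual errors feasible without retraining the legacy system.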

[1] S. Bobkov, et al. From Brunn-Minkowski to Brascamp-Lieb and to logarithmic Sobolev inequalities, 2000.

[2] G. V. Matyushin, et al. "MultiNeuron" neural simulator and its medical applications, 1994.

[3] Konstantin I. Sofeikov, et al. Knowledge Transfer Between Artificial Intelligence Systems, 2017, Front. Neurorobot.

[4] O. Guédon, et al. Interpolating Thin-Shell and Sharp Large-Deviation Estimates for Isotropic Log-Concave Measures, 2010, 1011.0943.

[5] Věra Kůrková, et al. Probabilistic lower bounds for approximation by shallow perceptron networks, 2017, Neural Networks.

[6] L. Berwald, et al. Verallgemeinerung eines Mittelwertsatzes von J. Favard für positive konkave Funktionen, 1947.

[7] Douglas C. Engelbart, et al. Augmenting human intellect: a conceptual framework, 1962.

[8] David L. Donoho, et al. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing, 2009, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.

[9] G. V. Matyushin, et al. Medical, psychological and physiological applications of MultiNeuron neural simulator, 1995, The Second International Symposium on Neuroinformatics and Neurocomputers.

[10] Paul C. Kainen, et al. Quasiorthogonal dimension of Euclidean spaces, 1993.

[11] Zoltán Füredi, et al. Approximation of the sphere by polytopes having few vertices, 1988.

[12] Geoffrey E. Hinton, et al. Adaptive Mixtures of Local Experts, 1991, Neural Computation.

[13] Dianhui Wang, et al. Stochastic Configuration Networks: Fundamentals and Algorithms, 2017, IEEE Transactions on Cybernetics.

[14] S. Vempala, et al. The geometry of logconcave functions and sampling algorithms, 2007.

[15] Mikhail Belkin, et al. The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures, 2013, COLT.

[16] Alexander N. Gorban, et al. Internal conflicts in neural networks, 1992, [Proceedings] 1992 RNNS/IEEE Symposium on Neuroinformatics and Neurocomputers.

[17] Zoltán Füredi, et al. On the shape of the convex hull of random points, 1988.

[18] M. Talagrand. Concentration of measure and isoperimetric inequalities in product spaces, 1994, math/9406212.

[19] Bo'az Klartag, et al. Inner Regularization of Log-Concave Measures and Small-Ball Estimates, 2012.

[20] Ivan Tyukin, et al. One-trial correction of legacy AI systems and stochastic separation theorems, 2019, Inf. Sci.

[21] S. Bobkov. Isoperimetric and Analytic Inequalities for Log-Concave Probability Measures, 1999.

[22] Ivan Tyukin, et al. Approximation with random bases: Pro et Contra, 2015, Inf. Sci.

[23] M. Ledoux. The concentration of measure phenomenon, 2001.

[24] Alexander N. Gorban, et al. Generation of explicit knowledge from empirical data through pruning of trainable neural networks, 1999, IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339).

[25] Ivan Tyukin, et al. The Blessing of Dimensionality: Separation Theorems in the Thermodynamic Limit, 2016, ArXiv.

[26] Ivan Tyukin, et al. Blessing of dimensionality: mathematical foundations of the statistical physics of data, 2018, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.

[27] Petros Valettas, et al. On the Geometry of Log-Concave Probability Measures with Bounded Log-Sobolev Constant, 2013.

[28] Dianhui Wang, et al. Randomness in neural networks: an overview, 2017, WIREs Data Mining Knowl. Discov.

[29] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.

[30] Julia Makarova, et al. High-Dimensional Brain: A Tool for Encoding and Rapid Learning of Memories by Single Neurons, 2017, Bulletin of Mathematical Biology.

[31] Silouanos Brazitikos. Geometry of Isotropic Convex Bodies, 2014.

[32] Ivan Tyukin, et al. Stochastic Separation Theorems, 2017, Neural Networks.

[33] Gregory Piatetsky-Shapiro, et al. High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality, 2000.

[34] G. Paouris. Small ball probability estimates for log-concave measures, 2012.