Evolving Culture Versus Local Minima

We propose a theory that relates the difficulty of learning in deep architectures to culture and language. It is articulated around the following hypotheses: (1) learning in an individual human brain is hampered by the presence of effective local minima; (2) this optimization difficulty is particularly important when it comes to learning higher-level abstractions, i.e., concepts that cover a vast and highly nonlinear span of sensory configurations; (3) such high-level abstractions are best represented in brains by the composition of many levels of representation, i.e., by deep architectures; (4) a human brain can learn such high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and (5) language, together with the recombination and optimization of mental concepts, provides an efficient evolutionary recombination operator, giving rise to rapid search in the space of communicable ideas that helps humans build up better high-level internal representations of their world. Taken together, these hypotheses imply that human culture and the evolution of ideas have been crucial in countering an optimization difficulty that would otherwise make it very hard for individual human brains to capture high-level knowledge of the world. The theory is grounded in experimental observations of the difficulties of training deep artificial neural networks. Plausible consequences of the theory for the efficiency of cultural evolution are also sketched.
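Hypothesis (4) echoes the curriculum-learning idea studied in the deep-learning literature: a learner that receives examples or intermediate targets ordered from easy to hard may escape poor effective local minima more readily than one trained on raw data alone. As a purely illustrative sketch (not taken from the paper), the toy NumPy script below contrasts training a one-hidden-layer network on a synthetic task with a random example ordering versus an easy-to-hard ordering; the task, the difficulty score, and all function names are assumptions made for this example.

```python
# Illustrative sketch only: contrasts random vs. easy-to-hard example ordering
# when training a tiny one-hidden-layer network with plain SGD on a toy task.
# The task, difficulty measure, and hyperparameters are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=2000, d=10):
    """Synthetic binary task: label = 1 if the first two coordinates share a sign.
    Examples with a small margin |x0 * x1| are treated as 'harder'."""
    X = rng.normal(size=(n, d))
    y = ((X[:, 0] * X[:, 1]) > 0).astype(float)
    difficulty = -np.abs(X[:, 0] * X[:, 1])      # small margin -> harder
    return X, y, difficulty

def train(X, y, order, epochs=20, h=32, lr=0.1):
    """Plain SGD on a one-hidden-layer net, visiting examples in the given order."""
    d = X.shape[1]
    W1 = rng.normal(scale=0.3, size=(d, h)); b1 = np.zeros(h)
    W2 = rng.normal(scale=0.3, size=h);      b2 = 0.0
    for _ in range(epochs):
        for i in order:
            x, t = X[i], y[i]
            a = np.tanh(x @ W1 + b1)                   # hidden layer
            p = 1.0 / (1.0 + np.exp(-(a @ W2 + b2)))   # output probability
            g = p - t                                  # d(cross-entropy)/d(logit)
            W2 -= lr * g * a; b2 -= lr * g
            da = g * W2 * (1 - a ** 2)                 # backprop through tanh
            W1 -= lr * np.outer(x, da); b1 -= lr * da
    preds = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5
    return (preds == y).mean()

X, y, difficulty = make_task()
random_order = rng.permutation(len(y))
curriculum_order = np.argsort(difficulty)              # easy examples first
print("random order    :", train(X, y, random_order))
print("curriculum order:", train(X, y, curriculum_order))
```

Only the order in which examples are visited differs between the two runs; in the theory's terms, the easy-to-hard ordering plays the role of the guidance that signals from other humans would provide to an individual learner.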
