Advancements in the Field

This chapter is divided into three sections: machine learning, neuroscience, and technology. This division corresponds to the main driving factors of the new AI revolution, namely algorithms and data, knowledge of brain structure, and greater computational power. The goal of the chapter is to give an overview of the state of the art in these three areas in order to understand where AI is heading.
