Locality of Global Stochastic Interaction in Directed Acyclic Networks

The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider an extrinsic constraint in the form of a fixed input distribution on the periphery of the network; our main intrinsic constraint is given by a directed acyclic network structure. First mathematical results on the strong relation between local information flow and global interaction are stated in order to investigate the possibility of controlling IMI optimization in a completely local way. Furthermore, we discuss how this approach relates to optimization according to Linsker's Infomax principle.
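For orientation, stochastic interaction is commonly formalized as the multi-information of the joint distribution of the units, i.e. the Kullback-Leibler divergence of the joint distribution from the product of its node marginals, while Linsker's Infomax principle maximizes the mutual information between input and output. The following display is a sketch under these standard definitions; the symbols V, X_v, p_v, and the input/output split (X, Y) are illustrative notation, not taken from the paper itself:

\[
  I(p) \;=\; D\!\left(p \,\middle\|\, \bigotimes_{v \in V} p_v\right)
       \;=\; \sum_{v \in V} H(X_v) \;-\; H\big((X_v)_{v \in V}\big),
\]
\[
  \text{Infomax:}\quad \max\; I(X;Y) \;=\; H(Y) \;-\; H(Y \mid X),
\]

where D denotes the Kullback-Leibler divergence and H the Shannon entropy. Under these definitions, IMI amounts to maximizing I(p) subject to the fixed peripheral input distribution and the directed acyclic network structure described above.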
