Cellular Automata Can Reduce Memory Requirements of Collective-State Computing

Various nonclassical approaches to distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant to a computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables a space–time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off against computation by running CA90. We investigate the randomization behavior of CA90, in particular the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model in which CA90 expands representations on the fly from short seed patterns, rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs comparably to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
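To make the expansion idea concrete, here is a minimal Python sketch of CA90-based pattern expansion. It is an illustration of the general technique, not the paper's exact procedure: the function names ca90_step and ca90_expand, the cyclic boundary condition, and the seed/step sizes are assumptions of this sketch.

```python
import numpy as np

def ca90_step(state):
    # Rule 90: each cell becomes the XOR of its two neighbors
    # (cyclic boundary conditions assumed for this sketch).
    return np.roll(state, 1) ^ np.roll(state, -1)

def ca90_expand(seed, num_steps):
    # Iterate CA90 from a short binary seed; concatenating the
    # successive grid states yields a longer pseudorandom pattern
    # that is fully determined by the seed, so only the short seed
    # needs to be stored in memory.
    rows = [seed]
    for _ in range(num_steps):
        rows.append(ca90_step(rows[-1]))
    return np.concatenate(rows)

# Example: expand a 32-bit random seed into a 32 * 8 = 256-bit pattern.
rng = np.random.default_rng(0)
seed = rng.integers(0, 2, size=32, dtype=np.uint8)
pattern = ca90_expand(seed, num_steps=7)
assert pattern.size == 256
```

Because CA90 is deterministic, any node holding the same seed can regenerate the same expanded pattern on demand, which is what allows memory to be traded off against computation.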
