Using Algebra of Hyper-Dimensional Vectors for Heuristic Representation of Data While Training Wide Neural Networks

This article proposes a method for heuristic data representation in the training of wide artificial neural networks based on the algebra of long binary vectors (hyperdimensional vectors). The hyperdimensional representation rests on the idea of distributing data across the whole binary vector. A traditional data structure assumes that a binary vector can be partitioned into disjoint segments, each of which stores the value of one specific field of the structure. No such partition exists in a distributed representation: every bit of a hyperdimensional vector carries a share of all structure fields simultaneously, and all bits of the vector are needed to recover the value of any single field. Such a representation of data resembles the one used naturally by the brain's biological neural networks. In the authors' view, applying it will improve recognition quality and may ultimately lead to a fundamentally new methodology for machine learning and artificial intelligence.
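As a concrete illustration of this distributed encoding, the sketch below composes a two-field record into a single binary hypervector using Kanerva-style operations: XOR binding of field keys with values, and bitwise-majority bundling of the bound pairs. The dimensionality, field and value names, and helper functions are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the distributed (hyperdimensional) record encoding
# described above, using Kanerva-style binary hypervector algebra.
# Dimensionality, field/value names, and helpers are illustrative
# assumptions, not the authors' implementation.
import numpy as np

D = 10_000                              # hypervector dimensionality
rng = np.random.default_rng(0)

def rand_hv():
    """Random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Bind (bitwise XOR): associate a field key with its value."""
    return a ^ b

def bundle(vectors):
    """Bundle (bitwise majority): superpose bound pairs into one record."""
    vs = list(vectors)
    if len(vs) % 2 == 0:                # break majority ties with a random vector
        vs.append(rand_hv())
    return (np.sum(vs, axis=0) > len(vs) // 2).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Item memory: atomic hypervectors for field names and candidate values.
fields = {name: rand_hv() for name in ("color", "shape")}
values = {name: rand_hv() for name in ("red", "green", "circle", "square")}

# Encode the record {color: red, shape: circle} as ONE vector. Every bit
# of `record` carries a share of both fields at once.
record = bundle([bind(fields["color"], values["red"]),
                 bind(fields["shape"], values["circle"])])

# Decode the "color" field: unbind the key (XOR is its own inverse), then
# find the nearest stored value. All D bits take part in the match.
probe = bind(record, fields["color"])
best = min(values, key=lambda v: hamming(probe, values[v]))
print(best)                             # -> "red" (with high probability)
```

Note that no dedicated bit range of `record` holds "color": the vector cannot be split into field segments, and the value is recovered only statistically, from the agreement of all D bits, which is exactly the property the abstract contrasts with traditional data structures.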
