DistriHD: A Memory Efficient Distributed Binary Hyperdimensional Computing Architecture for Image Classification

Hyperdimensional (HD) computing is a brain-inspired learning approach for efficient and fast learning on today's embedded devices. HD computing first encodes all data points into high-dimensional vectors called hypervectors and then performs classification efficiently using a well-defined set of operations. Although HD computing achieves reasonable accuracy on several practical tasks, it incurs large memory requirements, since each data point must be stored as a very long vector of thousands of bits. To alleviate this problem, we propose a novel HD computing architecture, called DistriHD, which enables HD computing to be trained and tested using binary hypervectors and achieves high accuracy in single-pass training mode with significantly lower hardware resources. DistriHD encodes data points into distributed binary hypervectors and eliminates the expensive item memory in the encoder, which significantly reduces the hardware cost of inference. Our evaluation also shows that our model achieves a $27.6\times$ reduction in memory cost without hurting classification accuracy. The hardware implementation further demonstrates that DistriHD achieves over $9.9\times$ and $28.8\times$ reductions in area and power, respectively.
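To make the encode-then-classify pipeline concrete, the sketch below shows a conventional binary HD classifier of the kind the abstract describes: feature values are mapped through an item memory of random binary hypervectors, bound to positions by rotation, bundled by majority vote, and classified by minimal Hamming distance. All names, dimensions, and the toy item memory here are illustrative assumptions, not DistriHD's design; DistriHD's contribution is precisely to *eliminate* the item memory shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality ("thousands of bits", per the abstract)

# Illustrative item memory: one random binary hypervector per feature value.
# This is the component DistriHD removes from the encoder.
item_memory = {v: rng.integers(0, 2, D, dtype=np.uint8) for v in range(4)}

def encode(features):
    """Bind each feature to its position by rotation, then bundle by majority."""
    stack = np.stack([np.roll(item_memory[f], i) for i, f in enumerate(features)])
    return (stack.sum(axis=0) * 2 >= len(features)).astype(np.uint8)

def train(samples_by_class):
    """Single-pass training: majority-bundle all encoded samples of each class."""
    models = {}
    for label, samples in samples_by_class.items():
        enc = np.stack([encode(s) for s in samples])
        models[label] = (enc.sum(axis=0) * 2 >= len(samples)).astype(np.uint8)
    return models

def classify(models, features):
    """Predict the class whose model hypervector has minimal Hamming distance."""
    q = encode(features)
    return min(models, key=lambda c: int(np.count_nonzero(models[c] ^ q)))

# Toy usage: two classes with clearly distinct feature patterns.
samples = {"A": [[0, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]],
           "B": [[3, 3, 3, 3], [3, 2, 3, 3], [3, 3, 2, 3]]}
models = train(samples)
```

Because every operation is bitwise (XOR, popcount, majority), inference maps naturally onto cheap hardware; the memory bottleneck in this conventional scheme is the item memory and the long class hypervectors, which is what motivates DistriHD's reductions.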
