An efficient parallel and pipelined reconfigurable architecture for M-PLN Weightless Neural Networks

Weightless Neural Networks (WNNs) are a powerful mechanism for pattern recognition. To enhance their learning capabilities, Multivalued Probabilistic Logic Neurons (M-PLNs) are used instead of crisp neurons with a 0/1 output. An M-PLN stores a firing probability for each input pattern to be recognized. The M-PLN model thus sharpens the distinction between patterns presented during training and those not yet processed. In this paper, an efficient yet customizable hardware architecture for M-PLN-based WNNs is proposed. It implements the structure and learning process of a pyramidal weightless network, augmented by a probabilistic rewarding/punishing search algorithm. The training algorithm adapts itself to the overall hit ratio achieved so far by the network. Using class-dedicated layers, the hardware handles image classification in parallel and thus very efficiently. Furthermore, the classification process is pipelined, so its stages keep working until all input images have been classified. In contrast, only one of these layers is active during network training. Last but not least, the architecture is customizable, as its structure can be tailored to the characteristics of the application. The architecture was modeled and functionally tested, and estimated timing requirements based on extensive simulations are reported. It exhibits performance and reconfiguration capabilities that are very promising and encouraging towards the synthesis of a prototype.
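To make the neuron model concrete, the sketch below illustrates one M-PLN node in software: a RAM addressed by the node's binary input tuple, where each location stores a firing probability that is reinforced (reward) when the network classifies correctly or pulled back toward the undefined value (punish) otherwise. All identifiers, the fan-in, the 8-bit fixed-point probability encoding, and the update step size are assumptions made for illustration only; they do not reflect the hardware implementation or training parameters described in the paper.

```c
#include <stdio.h>
#include <stdlib.h>

#define N_INPUTS 4                  /* assumed fan-in of the node (2^4 RAM locations) */
#define N_LOCS   (1 << N_INPUTS)
#define P_MAX    255                /* probabilities stored as 8-bit fixed point */
#define P_STEP   16                 /* assumed reward/punish adjustment step */

/* One M-PLN node: a RAM of firing probabilities, one per input pattern. */
typedef struct {
    unsigned char p[N_LOCS];        /* P(fire | address), scaled to 0..P_MAX */
} mpln_node;

/* Initialize every location to the "undefined" value (fire with p = 0.5). */
static void mpln_init(mpln_node *n) {
    for (int a = 0; a < N_LOCS; a++) n->p[a] = P_MAX / 2;
}

/* Probabilistic output: fire (1) with the probability stored at the address. */
static int mpln_fire(const mpln_node *n, unsigned addr) {
    return (rand() % (P_MAX + 1)) < n->p[addr];
}

/* Reward: push the addressed probability toward the output just produced. */
static void mpln_reward(mpln_node *n, unsigned addr, int out) {
    int p = n->p[addr] + (out ? P_STEP : -P_STEP);
    n->p[addr] = (unsigned char)(p < 0 ? 0 : (p > P_MAX ? P_MAX : p));
}

/* Punish: pull the addressed probability back toward the undefined value. */
static void mpln_punish(mpln_node *n, unsigned addr) {
    if (n->p[addr] > P_MAX / 2)      n->p[addr] -= P_STEP;
    else if (n->p[addr] < P_MAX / 2) n->p[addr] += P_STEP;
}

int main(void) {
    srand(1);
    mpln_node node;
    mpln_init(&node);

    unsigned addr = 0xA;            /* example 4-bit input pattern 1010 */
    int out = mpln_fire(&node, addr);

    int correct = 1;                /* pretend the network classified this image correctly */
    if (correct) mpln_reward(&node, addr, out);  /* reinforce the output just produced   */
    else         mpln_punish(&node, addr);       /* otherwise drift back toward undefined */

    printf("p[0x%X] = %u/255 after one update\n", addr, node.p[addr]);
    return 0;
}
```

In a pyramidal WNN, many such nodes are arranged in layers of decreasing width, and the proposed hardware replicates this structure per class so that all class-dedicated layers classify in parallel while the stages are kept busy in a pipeline.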