Progressive Operational Perceptrons with Memory

Abstract: The Generalized Operational Perceptron (GOP) was proposed to generalize the linear neuron model used in the traditional Multilayer Perceptron (MLP) by mimicking the synaptic connections of biological neurons, which exhibit nonlinear neurochemical behaviours. Previously, the Progressive Operational Perceptron (POP) was proposed to train a multilayer network of GOPs, formed layer-wise in a progressive manner. While achieving superior learning performance over other types of networks, POP has a high computational complexity. In this work, we propose POPfast, an improved variant of POP that significantly reduces its computational complexity, thus accelerating the training of GOP networks. In addition, we propose major architectural modifications of POPfast that augment the progressive learning process by incorporating an information-preserving, linear projection path from the input to the output layer at each progressive step. The proposed extensions can be interpreted as a mechanism that provides the network with direct information extracted from the previously learned layers, hence the term "memory". This allows the network to learn deeper architectures and better data representations. An extensive set of experiments on human action, object, facial identity and scene recognition problems demonstrates that the proposed algorithms train GOP networks much faster than POP while achieving better performance than the original POP and other related algorithms.
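To make the two ideas in the abstract concrete, the sketch below illustrates (a) a GOP layer, in which each neuron composes a nodal operator, a pooling operator and an activation instead of the fixed multiply-sum-activate of an MLP, and (b) the memory mechanism, realized as a linear projection of the input concatenated to the current hidden representation at each progressive step. The operator sets, function names and shapes here are illustrative assumptions for exposition, not the paper's exact implementation:

```python
import numpy as np

# Illustrative operator libraries for a GOP neuron. A GOP neuron applies a
# nodal operator psi(x, w) element-wise over its inputs, reduces them with a
# pooling operator P, and passes the result through an activation f. The
# specific operator sets below are a small assumed subset for demonstration.
NODAL = {
    "multiplication": lambda x, w: x * w,
    "exponential":    lambda x, w: np.exp(x * w) - 1.0,
    "sinusoid":       lambda x, w: np.sin(x * w),
}
POOL = {
    "summation": lambda z: z.sum(axis=-1),
    "maximum":   lambda z: z.max(axis=-1),
}
ACT = {
    "sigmoid": lambda a: 1.0 / (1.0 + np.exp(-a)),
    "tanh":    np.tanh,
}

def gop_layer(x, W, b, nodal="multiplication", pool="summation", act="sigmoid"):
    """One GOP layer. x: (batch, d_in), W: (d_out, d_in), b: (d_out,).
    Every neuron in the layer shares the same operator triplet here;
    searching over these triplets is what progressive training does."""
    z = NODAL[nodal](x[:, None, :], W[None, :, :])   # (batch, d_out, d_in)
    a = POOL[pool](z) + b                            # (batch, d_out)
    return ACT[act](a)

def memory_step(x, hidden, P):
    """Memory path (sketch): augment the current hidden representation with
    a linear projection P of the raw input, so the next progressively added
    layer still sees information preserved from earlier in the network."""
    return np.concatenate([hidden, x @ P], axis=-1)  # (batch, d_hid + d_proj)
```

Note that with the `multiplication` nodal operator and `summation` pooling, `gop_layer` reduces exactly to a standard MLP layer, `act(x @ W.T + b)`, which is the sense in which the GOP generalizes the linear neuron model.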
