Brain-inspired computing and machine learning

Recent research in machine learning (ML) and neurophysiology has focused on the development of highly intelligent algorithms that utilize information-processing principles of the human brain. Deep learning, inspired by the architecture of the cerebral cortex, has attracted the attention of many artificial intelligence (AI) scientists. It is the dominant AI approach in specific domains (e.g., image and voice classification, object detection), despite its requirements for high computational power and large volumes of data.

This is the editorial of the ‘‘Brain Inspired Computing and Machine Learning’’ Special Issue (SI) of the Springer journal Neural Computing and Applications. The response of the scientific community has been significant, as many original research papers were submitted for consideration. In total, 11 of the 20 submitted papers were accepted after going through a peer-review process. All of them have significant elements of novelty, introducing interesting modeling approaches or algorithms inspired by the biological processes of the human brain.

The first paper, entitled ‘‘Operational Neural Networks,’’ is authored by Serkan Kiranyaz, Department of Electrical Engineering, Qatar University; Turker Ince, Department of Electrical & Electronics Engineering, Izmir University of Economics, Turkey; Alexandros Iosifidis, Department of Engineering, Aarhus University, Denmark; and Moncef Gabbouj, Department of Computing Sciences, Tampere University, Finland. The authors introduce a new heterogeneous machine learning model, called the operational neural network (NN). It can encapsulate neurons with any set of operators, in order to boost diversity and to learn highly complex and multimodal functions or spaces. This can result in minimal network complexity, requiring a small volume of training data. A comparative analysis with convolutional NN models is performed to demonstrate the efficiency of the proposed architecture.
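To give a rough intuition for the idea (a minimal sketch, not the authors' implementation): where a classical neuron multiplies inputs by weights and sums the results, an operational neuron can replace both the per-input (nodal) operator and the aggregation (pool) operator with other functions. The operator choices below are hypothetical examples chosen only for illustration.

```python
import numpy as np

def operational_neuron(x, w, nodal, pool, activation):
    """Generalized neuron: apply nodal(x_i, w_i) element-wise,
    aggregate with pool, then pass through the activation.
    A classical neuron is the special case (multiply, sum, tanh)."""
    return activation(pool(nodal(x, w)))

x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 0.5, -0.25])

# Classical perceptron-style neuron: multiplication + summation + tanh
y_classic = operational_neuron(x, w, np.multiply, np.sum, np.tanh)

# A different (hypothetical) operator set: sinusoidal nodal operator,
# median pooling, tanh activation
y_alt = operational_neuron(
    x, w,
    nodal=lambda a, b: np.sin(a * b),
    pool=np.median,
    activation=np.tanh,
)
```

Searching over such operator sets per neuron or per layer is what lets a heterogeneous network express functions that a fixed multiply-and-sum architecture would need many more parameters to approximate.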
The second paper is authored by Ioannis Livieris, Department of Mathematics, University of Patras, Greece. The title is ‘‘An Advanced Active set L-BFGS Algorithm for Training Weight Constrained Neural Networks.’’ This research proposes a novel Advanced Active set Limited-Memory Broyden–Fletcher–Goldfarb–Shanno (AA-LBFGS) algorithm for efficiently training weight-constrained neural networks. It approximates the curvature of the error function with high-order accuracy. This is achieved by utilizing the theoretically advanced secant condition proposed by Livieris and Pintelas [1]. Moreover, the global convergence of the proposed algorithm is established, provided that the line search satisfies the modified Armijo condition [2].

Antonios Karatzoglou, Robert Bosch GmbH, Chassis Systems Control, Advance Engineering, Germany, and Nikolai Schnell and Michael Beigl, Karlsruhe Institute of Technology, Germany, are the authors of the third paper. It is entitled ‘‘Applying Depthwise Separable and MultiChannel Convolutional Neural Networks of Varied Kernel Size on Semantic Trajectories.’’ The authors explore the performance of convolutional neural networks (CNNs) with respect to their capability of modeling semantic trajectories and predicting future locations in a location prediction scenario. The proposed approach comprises three major parts. Initially, a standard single-channel approach is evaluated and compared with a feedforward NN, a recurrent NN and a long short-term memory one. Then, the