Advances in Cognitive Engineering Using Neural Networks

Recently, deep learning methods have become very popular across many fields and are providing state-of-the-art performance, especially in perception-related tasks. Most deep neural models are inspired by low-level information-processing mechanisms in the brain, making them well suited to the signal processing involved in perception and motor control. However, human-like higher cognitive functions, including knowledge representation, thinking, and reasoning, have not yet been fully explored. A deeper understanding of high-level cognitive processes in the brain should allow more intelligent functionality to be incorporated into artificial cognitive systems, linking sensory perception and thinking. Cognitive engineering is an interdisciplinary approach to developing the principles, methods, tools, and techniques that guide the design of computerized systems intended to support, and learn from, human cognitive performance. It draws on the disciplines of cognitive science, computer science, systems engineering, human–computer interaction, and related fields. The goal of this field is to develop systems that are easy to learn, easy to use, and lead to improved performance of human–computer interaction systems. In the past decade, the field has gained prominence in response to the proliferation of computers in everyday life. Safety-critical systems have become more complex and more integrated with advanced computer technology; novel design principles are therefore needed to ensure that teams of human experts can operate computer systems safely and efficiently. Cognitive engineering helps to develop human-friendly and reliable computer systems by explicitly considering human cognitive processing characteristics in the context of computer-assisted tasks. In recent years, significant progress has been made in cognitive engineering by focusing on how users actually interact with complex technical systems, including advanced human–computer interfaces.
As a result, cognitive engineering has become a recognized interdisciplinary field at the interface of cognitive science, computer science, and engineering. This special issue of Neural Networks on ‘‘Advances in Cognitive Engineering Using Neural Networks’’ contains nine novel articles reporting the latest research in this field. It discusses the present state of the art and outlines directions for future developments. The articles fall within the areas of human cognitive behavior, brain–computer interfaces, and personal space protection. They address salience detection, intrusion detection, speech emotion recognition, human pose estimation, EEG-based brain–computer interface classification, wearable sensors, predictive models in robotics, EEG-based advertising preference prediction, and human intention understanding. Although diverse, the common thread running through these papers is that they deal with topics that currently present a challenge to human cognition. Ahmadi and Tani develop a multi-level neural network to address robustness issues during imitation learning. Their recurrent neural network employs multiple time scales to implement a predictive coding scheme. The model is tested both on simulated data and on robotic testbeds, and the results demonstrate the strength of the proposed learning paradigm under naturally changing conditions, using a NAO humanoid robot performing various imitation tasks. Lee and colleagues describe a dual memory architecture motivated by the structure and operation of the human brain. Specifically, the proposed approach models a postulated gradual adaptation in the neocortex and rapid learning in the hippocampus. The authors aim to address some shortcomings of deep learning, manifested in the learning bottleneck and catastrophic forgetting.
The performance of the model is tested on various benchmark data sequences, such as an image data stream of CIFAR-10 frames, as well as Google Glass lifelog data. The results show marked improvements over alternative approaches. Kim and colleagues, inspired by psychological and neurological phenomena in humans, introduce an intention understanding system that connects perception and action learning in artificial agents. To recognize human intention without verbal interaction, artificial agents (i.e., robots) should be able to understand human actions and the affordances of the corresponding objects simultaneously. To address these issues, they introduce an object-augmented supervised multiple timescale recurrent neural network that is trained using perception–action connected learning, and they perform experiments demonstrating the usefulness of the model and its training procedure. Witoonchart and Chongstitvatana formulate a structured SVM model for deep learning as a conventional convolutional neural network, with back-propagation of error, combined with a loss-augmented inference layer. When applied to the estimation of human joint positions in 2D still images for human pose estimation, the model operates as a deformable part model. The approach thus treats the deformable part model as an instance of structured SVM learning and is able to jointly learn both structural and appearance model parameters. By using a neural network’s innate ability to back-propagate errors to lower layers, the structured SVM model exactly calculates the structured SVM loss. Zhang and colleagues introduce a cognitive attentional model for bottom-up saliency detection based on two distinct stages.
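For readers unfamiliar with the multiple-timescale recurrence on which Ahmadi and Tani build, the core mechanism can be sketched as leaky-integrator (CTRNN-style) units whose time constants differ between layers. The following fragment is a minimal illustrative sketch under our own naming and parameter choices, not the authors’ implementation:

```python
import numpy as np

def mtrnn_step(h_fast, h_slow, x, params, tau_fast=2.0, tau_slow=30.0):
    """One step of a two-level multiple-timescale RNN (illustrative).

    Each layer is a leaky integrator: the large time constant tau_slow
    makes the slow context change gradually, while the fast layer tracks
    rapid input dynamics -- the basis for multi-level temporal
    abstraction in predictive-coding recurrent networks.
    """
    W_ff, W_fs, W_fx, W_ss, W_sf = params  # hypothetical weight names
    # Fast layer: driven by the input, its own state, and the slow context.
    u_fast = W_ff @ h_fast + W_fs @ h_slow + W_fx @ x
    h_fast = (1 - 1.0 / tau_fast) * h_fast + (1.0 / tau_fast) * np.tanh(u_fast)
    # Slow layer: driven by its own state and the fast layer.
    u_slow = W_ss @ h_slow + W_sf @ h_fast
    h_slow = (1 - 1.0 / tau_slow) * h_slow + (1.0 / tau_slow) * np.tanh(u_slow)
    return h_fast, h_slow
```

Because the slow layer updates only a fraction 1/tau_slow of its state per step, it provides a slowly varying context that modulates the fast layer from one time step to the next.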
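The structured SVM loss that Witoonchart and Chongstitvatana back-propagate can be illustrated in a deliberately simplified setting where candidate outputs are enumerable; the paper instead maximizes efficiently over the part configurations of a deformable part model, and all names below are ours:

```python
import numpy as np

def structured_hinge_loss(scores, gold, delta):
    """Structured SVM hinge loss via loss-augmented inference (sketch).

    scores: model score s(y) for each candidate structured output
    gold:   index of the ground-truth output y*
    delta:  task loss Delta(y, y*) for each candidate (0 at the gold)
    Returns max(0, max_y [Delta(y) + s(y)] - s(y*)) and the
    loss-augmented argmax y_hat.
    """
    augmented = scores + delta           # loss-augmented scores
    y_hat = int(np.argmax(augmented))    # loss-augmented inference
    loss = augmented[y_hat] - scores[gold]
    return max(0.0, loss), y_hat
```

The gradient of this loss with respect to the scores is +1 at `y_hat` and -1 at `gold` (zero when the margin is satisfied), which is what allows the loss layer to be trained with ordinary back-propagation through the convolutional network below it.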