A two-level computer vision-based information processing method for improving the performance of human–machine interaction-aided applications

The computer vision (CV) paradigm improves computational and processing efficiency by exploiting visual inputs. These inputs are processed with sophisticated techniques to improve the reliability of human–machine interactions (HMIs). Processing visual inputs requires multi-level data computation to achieve application-specific reliability. This paper therefore introduces a two-level visual information processing (2LVIP) method to meet the reliability requirements of HMI applications. The 2LVIP method handles both structured and unstructured data through classification learning to extract the maximum information gain from the inputs. In the first level, the method identifies gain-related features and optimizes them to improve information gain. In the second level, a regression process reduces error and stabilizes precision to meet HMI application demands. The two levels are interoperable and fully connected, so better gain and precision are achieved by reducing information-processing errors. Analysis results show that, compared with conventional methods, the proposed method achieves 9.42% higher information gain and 6.51% lower error across different classification instances.
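
The abstract does not specify the implementation of the two levels; the following is a minimal Python sketch, assuming scikit-learn, of a pipeline in the same spirit: level one ranks features by estimated information gain (mutual information) and trains a classifier on the selected subset, and level two fits a regression model to the residual error to refine precision. All data, function names, thresholds, and model choices here are illustrative assumptions, not the authors' implementation.

# Hypothetical two-level sketch inspired by 2LVIP (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import train_test_split

# Toy feature matrix standing in for processed visual inputs.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Level 1: keep the features with the highest estimated information gain
# (mutual information) and learn a classifier on that subset.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_train_l1 = selector.fit_transform(X_train, y_train)
X_test_l1 = selector.transform(X_test)
clf = LogisticRegression(max_iter=1000).fit(X_train_l1, y_train)
p_train = clf.predict_proba(X_train_l1)[:, 1]
p_test = clf.predict_proba(X_test_l1)[:, 1]

# Level 2: regress the level-1 residual error and correct the scores,
# mirroring the paper's regression-based error reduction step.
residual = y_train - p_train
corrector = Ridge(alpha=1.0).fit(X_train_l1, residual)
p_test_corrected = np.clip(p_test + corrector.predict(X_test_l1), 0.0, 1.0)

# Compare mean absolute error before and after the level-2 correction.
mae_before = np.mean(np.abs(y_test - p_test))
mae_after = np.mean(np.abs(y_test - p_test_corrected))
print(f"MAE before level 2: {mae_before:.4f}, after: {mae_after:.4f}")

In this sketch the two levels are "fully connected" only in the weak sense that level two reuses the level-one feature subset and scores; how the actual 2LVIP method couples the levels is not described in the text above.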
