YOLO-based deep learning on needle-type dashboard recognition for autopilot maneuvering system

Developing a fully automatic auxiliary flying system driven by robotic maneuvering is feasible. This study develops a vision-based control system that can read all kinds of needle-type meters. The vision device implements a modified YOLO-based object detection model to recognize airspeed readings from a needle-type dashboard, so that meter information in the cockpit can be acquired with a single camera and a powerful edge computer for future autopilot maneuvering purposes. A modified YOLOv4-tiny model is implemented by adding Spatial Pyramid Pooling (SPP) and a Bidirectional Attention Feature Pyramid Network (BAFPN) to the neck region of the convolutional neural network (CNN). The Taguchi method is applied to obtain a set of optimal hyperparameters for the CNN. The improved network is deployed successfully, achieving a higher mean Average Precision (mAP) than the conventional YOLOv4-tiny and a higher Frames Per Second (FPS) rate than YOLOv4. A closed-loop system is then established in which a camera reads airspeed indications from a designed virtual needle-type dashboard, and the dashboard's pointer is driven by the proposed control method, which combines PID control with recognition of the pointer's rotation angle. The modified YOLOv4-tiny model, together with the fabricated system for dynamic visual recognition control, is implemented successfully, verifying the feasibility of improving mean Average Precision and frames per second for autopilot maneuvering.
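As a rough illustration of the recognition-in-the-loop control described above, the minimal Python sketch below converts a recognized needle angle into an airspeed estimate and feeds the resulting error into a textbook PID controller that drives the virtual dashboard's pointer. The angle-to-airspeed mapping, the gain values, and the `read_needle_angle`/`command_pointer` helpers are hypothetical placeholders standing in for the paper's camera and modified YOLOv4-tiny pipeline, not the authors' implementation.

```python
# Minimal sketch of recognition-in-the-loop PID control of the virtual
# needle-type dashboard. All constants and the I/O helpers are hypothetical;
# they stand in for the camera + modified YOLOv4-tiny recognition pipeline.
import time

ANGLE_MIN, ANGLE_MAX = 0.0, 270.0        # assumed needle sweep (degrees)
SPEED_MIN, SPEED_MAX = 0.0, 200.0        # assumed airspeed scale (knots)

def angle_to_airspeed(angle_deg: float) -> float:
    """Linearly map the recognized pointer angle to an airspeed reading."""
    frac = (angle_deg - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
    return SPEED_MIN + frac * (SPEED_MAX - SPEED_MIN)

class PID:
    """Textbook PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def control_loop(read_needle_angle, command_pointer, target_airspeed: float,
                 rate_hz: float = 30.0) -> None:
    """Close the loop: camera/CNN angle estimate -> PID -> pointer command."""
    pid = PID(kp=0.8, ki=0.1, kd=0.05)   # illustrative gains only
    dt = 1.0 / rate_hz
    while True:
        angle = read_needle_angle()                # e.g. angle from the detections
        error = target_airspeed - angle_to_airspeed(angle)
        command_pointer(pid.update(error, dt))     # drive the virtual pointer
        time.sleep(dt)
```

In the paper's setup the pointer angle would be recognized from the camera feed by the modified YOLOv4-tiny model; here that stage is abstracted behind the `read_needle_angle` callback so the closed-loop structure is visible on its own.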
