Influence of image quality on the identification of psyllids using convolutional neural networks

Convolutional Neural Networks (CNNs) usually require large datasets to be properly trained. Although techniques such as transfer learning can relax those requirements, gathering enough labelled data to cover all the variability associated with the problem at hand is often costly and time-consuming. One way to mitigate this challenge is to gather the training data under laboratory conditions, using high-quality sensors capable of generating images with superior resolution, sharpness and contrast. The downside of this approach is that the resulting dataset will most likely lack the variety found under more realistic conditions. This work investigates this trade-off between image quality and dataset representativeness, that is, whether a CNN trained with images captured by a scanner in the laboratory can reliably recognise psyllids in smartphone images captured under more realistic conditions. A total of 1276 images were used in the experiments, half acquired with a flatbed scanner and half acquired with two different brands of smartphone. Experiments were carried out using SqueezeNet CNNs and a 10-fold cross-validation strategy. Accuracies ranged from less than 70% when only scanned images were used for training to around 90% when only smartphone images were employed, indicating that training data captured under realistic conditions are essential to guarantee the robustness of the trained network. Scanned images were useful when the set of realistic training images was not large enough to cover all the variability found in the experiments, but otherwise had no noticeable effect on the results.

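To make the experimental pipeline described above more concrete (a SqueezeNet fine-tuned via transfer learning and evaluated with 10-fold cross-validation), the sketch below shows one possible setup. The abstract does not specify an implementation, so the choice of PyTorch/torchvision and scikit-learn, the 224×224 input size, the normalisation constants and the `labels` list are all assumptions made here for illustration, not the authors' actual code.

```python
# Illustrative sketch only: framework, preprocessing and variable names are assumptions.
import torch.nn as nn
from torchvision import models, transforms
from sklearn.model_selection import StratifiedKFold


def build_squeezenet(num_classes: int = 2) -> nn.Module:
    """SqueezeNet pre-trained on ImageNet, adapted to a psyllid / non-psyllid task."""
    model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
    # Transfer learning: keep the convolutional features and replace only the
    # final 1x1 classification convolution so the network outputs `num_classes` scores.
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model


# ImageNet-style preprocessing (an assumption, not stated in the abstract).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def ten_fold_splits(labels):
    """Yield (train_idx, test_idx) pairs for a stratified 10-fold cross-validation."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    yield from skf.split(labels, labels)
```

Under this setup, each of the 10 folds would fine-tune a fresh copy of the adapted SqueezeNet on the training indices and report accuracy on the held-out indices; the scanner-only, smartphone-only and mixed training scenarios compared in the paper would simply correspond to different subsets of images fed to `ten_fold_splits`.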