Semantic Segmentation of Mixed Crops using Deep Convolutional Neural Network

Estimation of in-field biomass and crop composition is important for both farmers and researchers. Using close-up, high-resolution images of the crops, crop species can be distinguished by image processing. In the current study, a deep convolutional neural network for semantic segmentation (pixel-wise classification) of cluttered classes in RGB images was explored in the case of catch crops and volunteer barley cereal. The dataset consisted of RGB images from a plot trial using oil radish as a catch crop in barley. The images were captured using a high-end consumer camera mounted on a tractor. The images were manually annotated into 7 classes: oil radish, barley, weed, stump, soil, equipment and unknown. Data augmentation was used to artificially enlarge the dataset by transposing and flipping the images. A modified version of the VGG-16 deep neural network was used. First, the last fully-connected layers were converted to convolutional layers, and the output depth was adjusted to match the number of classes. Second, a deconvolutional layer with a stride of 32 was added between the last converted layer and the softmax classification layer to ensure that the output has the same spatial size as the input. Preliminary results using this network show a pixel accuracy of 79% and a frequency-weighted intersection over union of 66%. These preliminary results indicate great potential of deep convolutional networks for segmentation of plant species in cluttered RGB images.
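The architectural modification described above (an FCN-32s-style conversion of VGG-16) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the backbone below only mimics the VGG-16 downsampling pattern (five 2x2 poolings, for a total stride of 32), and the converted fully-connected layers are given a reduced width of 1024 channels (4096 in the real VGG-16) to keep the sketch light. Class count and class names follow the abstract.

```python
import torch
import torch.nn as nn

# The 7 annotation classes from the abstract:
# oil radish, barley, weed, stump, soil, equipment, unknown.
NUM_CLASSES = 7

class FCN32s(nn.Module):
    """Sketch of the described network: a VGG-16-like convolutional
    backbone, fully-connected layers recast as convolutions with the
    depth set to the number of classes, and a stride-32 deconvolution
    (transposed convolution) restoring the input resolution."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Stand-in for the VGG-16 backbone: five conv+pool stages,
        # each pooling halving the resolution (total downsampling x32).
        layers, in_ch = [], 3
        for out_ch in (64, 128, 256, 512, 512):
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        # fc6/fc7 converted to convolutions (width reduced here),
        # followed by a 1x1 scoring conv with num_classes outputs.
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 1024, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, 1024, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, num_classes, kernel_size=1),
        )
        # Deconvolutional layer with stride 32: upsamples the coarse
        # score map back to the input size before softmax.
        self.upscore = nn.ConvTranspose2d(
            num_classes, num_classes, kernel_size=64, stride=32, padding=16
        )

    def forward(self, x):
        # Output spatial size equals input size for inputs divisible by 32.
        return self.upscore(self.classifier(self.features(x)))
```

With an input whose height and width are multiples of 32, the transposed convolution's output size, (h - 1) * 32 - 2 * 16 + 64 = 32h, exactly restores the input resolution, so a per-pixel softmax can be applied directly to the 7-channel output.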