Tracking of row structure in three crops using image analysis

Abstract This paper is concerned with visual sensing for an autonomous crop protection robot, in particular the problem of extracting guidance information from crop row structure. A robust method for finding crop rows in images was presented in the author's previous work and is briefly re-introduced here. It is based on the Hough transform but, unlike previous methods, integrates information over a number of crop rows, making the technique very tolerant of problems such as missing plants and weeds. Experiments are reported in which the method is tested on three crops: cauliflowers, sugar beet, and widely spaced double rows of wheat. When compared with a human assessment of the row positions in images, typical errors were 18 mm in lateral offset and 1° in angle.
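The core idea of voting over several parallel rows at once can be illustrated with a small sketch. The code below is not the paper's implementation: it is a minimal, hypothetical Hough-style search in which each detected plant position votes for a shared (angle, lateral offset) hypothesis, with candidate row lines placed at a known inter-row spacing. The grid resolutions, the 5 mm tolerance, and the triangular scoring are illustrative assumptions only.

```python
import numpy as np

def fit_rows(points, row_spacing, n_rows,
             angles=np.deg2rad(np.arange(-10, 10.5, 0.5)),
             offsets=np.arange(-50.0, 51.0, 2.0),
             tol=5.0):
    """Hough-style vote over (angle, lateral offset) for parallel crop rows.

    points: (N, 2) array of (x, y) plant positions in ground units (mm).
    Each point supports the hypothesis that places the nearest of the
    n_rows parallel lines close to it, so evidence from every row is
    pooled into a single accumulator -- the multi-row integration idea.
    """
    acc = np.zeros((len(angles), len(offsets)))
    # Row centre lines, indexed symmetrically about the hypothesis offset.
    row_idx = np.arange(n_rows) - (n_rows - 1) / 2.0
    for i, th in enumerate(angles):
        # Lateral coordinate of each point for a row direction at angle th.
        lat = points[:, 0] * np.cos(th) - points[:, 1] * np.sin(th)
        for j, c in enumerate(offsets):
            lines = c + row_idx * row_spacing
            # Distance from each point to its nearest hypothesised row.
            d = np.min(np.abs(lat[:, None] - lines[None, :]), axis=1)
            # Triangular kernel: full weight at d = 0, zero beyond tol.
            acc[i, j] = np.sum(np.clip(tol - d, 0.0, None))
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return np.rad2deg(angles[i]), offsets[j]

# Synthetic test: three rows 500 mm apart, offset 10 mm, zero heading error.
pts = np.array([[10.0 + dx, y]
                for dx in (-500.0, 0.0, 500.0)
                for y in (0.0, 300.0, 600.0)])
angle_deg, offset_mm = fit_rows(pts, row_spacing=500.0, n_rows=3)
```

Because all rows vote into one accumulator, a few missing plants or spurious weed detections in any single row only weaken, rather than displace, the peak.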