Multiple-entity based classification of airborne laser scanning data in urban areas

Abstract There are two main challenges in classifying airborne laser scanning (ALS) data. The first is to find suitable attributes to distinguish the classes of interest; the second is to define proper entities over which to calculate those attributes. In most cases, effort is devoted to finding suitable attributes, and less attention is paid to defining the entity. Our hypothesis is that, with the same attributes and classifier, accuracy improves if multiple entities are used for classification. To verify this hypothesis, we propose a multiple-entity based classification method that distinguishes seven classes: ground, water, vegetation, roof, wall, roof element, and undefined object. We also compare the performance of the multiple-entity based method with single-entity based methods. In most previous work, features have been extracted from a single entity in ALS data, either from individual points or from grouped points. In our method, features are extracted from three different entities: points, planar segments, and segments derived by mean shift. These features are fed into a four-step classification strategy. First, the ALS data are filtered into ground and non-ground points. Features generalised from planar segments are then used to classify points as water, ground, roof, vegetation, or undefined object. This is followed by point-wise identification of walls and roof elements using the contextual information of a building. During this contextual reasoning, the portion of vegetation extending above the roofs is classified as roof element; these points are subsequently re-segmented by the mean shift method and reclassified. Five supervised classifiers are applied to the features extracted from planar segments and mean shift segments. The experiments demonstrate that the multiple-entity strategy achieves slightly higher overall accuracy, and much higher accuracy for vegetation, than the single-entity strategies (using only point features or only planar segment features). Although the multiple-entity method obtains nearly the same overall accuracy as the planar-segment method, the accuracy for vegetation improves by 3.3% with the rule-based classifier. Compared with purely point-wise classification, the multiple-entity method obtains much higher overall accuracy and higher accuracy for vegetation with all five classifiers. We also compare the performance of the five classifiers. The rule-based method provides the highest overall accuracy, at 97.0%, with over 99.0% accuracy for the ground and roof classes and at least 90.0% accuracy for the water, vegetation, wall and undefined object classes. Notably, the accuracy of the roof element class is only 70% with the rule-based method, and even lower with the other classifiers. As shown in the confusion matrix, most roof elements are assigned to the roof class; these assignments are not fatal errors because both a roof and a roof element are part of a building. In addition, a new feature indicating the average point spacing within a planar segment is generalised to distinguish vegetation from other classes, and its performance is compared with that of the percentage of points with multiple pulse counts in planar segments. Using only the average point spacing feature, the detection rate of vegetation with the rule-based classifier is 85.5%, which is 6% lower than that obtained with the pulse count information.
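
As an illustration of the point-spacing feature described above, the sketch below estimates an average point spacing per planar segment as the mean nearest-neighbour distance among its points and applies a simple threshold as a vegetation cue. The exact definition of the feature, the function names, and the threshold value are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: per-segment average point spacing as a vegetation cue.
# Assumption: "average point spacing" is taken here as the mean distance from
# each point to its nearest neighbour within the same segment; the threshold
# below is a placeholder, not the value used in the paper.
import numpy as np
from scipy.spatial import cKDTree


def average_point_spacing(points: np.ndarray) -> float:
    """Mean nearest-neighbour distance within one segment (N x 3 array)."""
    if len(points) < 2:
        return float("inf")
    tree = cKDTree(points)
    # k=2: the first neighbour of each point is the point itself (distance 0),
    # so the second column holds the distance to the true nearest neighbour.
    dists, _ = tree.query(points, k=2)
    return float(dists[:, 1].mean())


def looks_like_vegetation(points: np.ndarray, spacing_threshold: float = 0.5) -> bool:
    """Rule-based cue: vegetation segments tend to show larger point spacing
    than smooth roof or ground segments (threshold is illustrative)."""
    return average_point_spacing(points) > spacing_threshold
```

In the paper, this spacing feature is compared against the percentage of multi-pulse returns per planar segment; the spacing-only rule detects vegetation at 85.5%, about 6% below the pulse-count variant.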
