A Comprehensive Performance Evaluation of 3-D Transformation Estimation Techniques in Point Cloud Registration

3-D local feature extraction and matching is a key step in point cloud registration. However, this process commonly produces false correspondences caused by noise, occlusion, incomplete surfaces, and so on. To estimate the correct transformation from these corrupted correspondences, numerous transformation estimation techniques have been proposed. However, no comprehensive study has compared their accuracy, robustness, and efficiency under different nuisances. This article evaluates thirteen popular transformation estimation methods on both descriptor-based and synthetic correspondences. On descriptor-based correspondences, these methods are evaluated under comprehensive settings (e.g., in combination with the iterative closest point (ICP) algorithm and with different local features) on five popular datasets acquired with different devices (e.g., Minolta Vivid scanner, Microsoft Kinect, and Space Time Stereo). On synthetic correspondences, the robustness of these methods to varying percentages of correct correspondences (PCCs) is evaluated. Their efficiency is also assessed. The results present several valuable findings that supplement existing evaluations of transformation estimation techniques. A summary of the merits, demerits, and application guidance of the tested methods is finally presented to guide real-world applications and the design of new transformation estimation techniques.
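For context on the class of methods being compared, the sketch below illustrates one representative (but hypothetical, not one of the thirteen evaluated implementations) way to estimate a rigid transformation from putative 3-D correspondences: an SVD-based least-squares solver wrapped in a basic RANSAC loop to tolerate false matches. Function names, the inlier threshold, and the synthetic low-PCC demo are illustrative assumptions, not details taken from the evaluated methods.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via SVD (Kabsch/Umeyama)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def ransac_rigid_transform(src, dst, iters=1000, inlier_thresh=0.05, seed=0):
    """RANSAC over minimal 3-point samples; refines the transform on the consensus set."""
    rng = np.random.default_rng(seed)
    best_R, best_t = np.eye(3), np.zeros(3)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = estimate_rigid_transform(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_R, best_t, best_inliers = R, t, inliers
    if best_inliers.sum() >= 3:                          # re-estimate on all inliers
        best_R, best_t = estimate_rigid_transform(src[best_inliers], dst[best_inliers])
    return best_R, best_t, best_inliers

if __name__ == "__main__":
    # Synthetic correspondences with a low PCC: 40% correct matches, 60% outliers.
    rng = np.random.default_rng(1)
    src = rng.uniform(-1, 1, (200, 3))
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.3, -0.2, 0.5])
    dst = src @ R_true.T + t_true
    outliers = rng.choice(200, size=120, replace=False)
    dst[outliers] = rng.uniform(-1, 1, (120, 3))         # corrupt 60% of the matches
    R, t, inliers = ransac_rigid_transform(src, dst, inlier_thresh=0.02)
    print("rotation error:", np.linalg.norm(R - R_true))
    print("translation error:", np.linalg.norm(t - t_true))
```

In practice, the transform recovered this way is often used only as a coarse alignment and then refined with ICP, which is one of the evaluation settings mentioned above.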