Model Estimation and Selection towards Unconstrained Real-Time Tracking and Mapping.
We present an approach to, and prototype implementation of, initialization-free real-time tracking and mapping that supports any type of camera motion in 3D environments, that is, parallax-inducing as well as rotation-only motions. Our approach effectively behaves like a keyframe-based Simultaneous Localization and Mapping system or a panorama tracking and mapping system, depending on the camera movement. It seamlessly switches between the two modes and is thus able to track and map through arbitrary sequences of parallax-inducing and rotation-only camera movements. The system integrates both model-based and model-free tracking, automatically choosing between the two depending on the situation, and subsequently uses the Geometric Robust Information Criterion (GRIC) to decide whether the current camera motion is best represented as a parallax-inducing motion or a rotation-only motion. It continues to collect and map data after tracking failure, thus creating separate tracks which are later merged if they are found to overlap. This is in contrast to most existing tracking and mapping systems, which suspend tracking and mapping, and thus discard valuable data, until relocalization with respect to the initial map is successful. We tested our prototype implementation on a variety of video sequences, successfully tracking through different camera motions and fully automatically building combinations of panoramas and 3D structure.
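To illustrate the GRIC-based decision between a rotation-only and a parallax-inducing motion, the following is a minimal sketch in Python with OpenCV and NumPy, assuming Torr's standard GRIC formulation applied to a homography (rotation-only) versus a fundamental matrix (parallax-inducing) fitted to two-view feature matches. The function names, error measures, thresholds, and noise parameter here are illustrative assumptions and are not taken from the paper's implementation.

```python
import numpy as np
import cv2


def gric(residuals_sq, sigma, n, d, k, r=4):
    """Torr's Geometric Robust Information Criterion (assumed standard form).

    residuals_sq: squared residual per correspondence
    sigma:        assumed measurement noise standard deviation (pixels)
    d:            dimension of the model manifold (2 for H, 3 for F)
    k:            number of model parameters (8 for H, 7 for F)
    r:            dimension of each observation (4 for a 2D-2D match)
    """
    lam1, lam2, lam3 = np.log(r), np.log(r * n), 2.0
    rho = np.minimum(residuals_sq / sigma ** 2, lam3 * (r - d))
    return rho.sum() + lam1 * d * n + lam2 * k


def select_motion_model(pts0, pts1, sigma=1.0):
    """Return 'rotation-only' if a homography explains the matches better
    (lower GRIC) than a fundamental matrix, else 'parallax-inducing'.

    pts0, pts1: float32 arrays of shape (n, 2) with matched image points.
    """
    n = len(pts0)
    H, _ = cv2.findHomography(pts0, pts1, cv2.RANSAC, 3.0)
    F, _ = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, 3.0, 0.99)

    ones = np.ones((n, 1))
    x0 = np.hstack([pts0, ones])
    x1 = np.hstack([pts1, ones])

    # Homography: one-sided transfer error in the second image.
    Hx0 = (H @ x0.T).T
    Hx0 = Hx0[:, :2] / Hx0[:, 2:3]
    e_h = np.sum((Hx0 - pts1) ** 2, axis=1)

    # Fundamental matrix: first-order (Sampson) error.
    Fx0 = (F @ x0.T).T
    Ftx1 = (F.T @ x1.T).T
    num = np.sum(x1 * Fx0, axis=1) ** 2
    den = Fx0[:, 0] ** 2 + Fx0[:, 1] ** 2 + Ftx1[:, 0] ** 2 + Ftx1[:, 1] ** 2
    e_f = num / den

    gric_h = gric(e_h, sigma, n, d=2, k=8)
    gric_f = gric(e_f, sigma, n, d=3, k=7)
    return "rotation-only" if gric_h < gric_f else "parallax-inducing"
```

The intuition behind the comparison: a homography fits image motion caused by pure rotation (or a planar scene), while a fundamental matrix additionally models parallax; the GRIC penalty terms discourage choosing the higher-dimensional model when the extra flexibility is not supported by the residuals.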