This paper presents a system for introducing augmented reality (AR) enhancements into an image-based cubic panorama sequence. Panoramic cameras, such as the Point Grey Research Ladybug, allow rapid capture and generation of panoramic sequences for users to navigate and view. Our AR system allows authors to add virtual content to these panoramic sequences. First, a user manually selects a planar region over which to add the content. The system then automatically finds the matching planar region in every other panorama, allowing the virtual content to propagate through the sequence. No preconditioning of the imaged scene through the addition of physical markers is required; instead, 3-D position information is obtained by matching interest-point features across the panoramic sequence. The result is an application of augmented-reality algorithms to the unique case of pre-captured panoramic sequences.
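The abstract does not give implementation details, but the propagation step it describes (matching interest-point features to locate the selected planar region in other panoramas) can be illustrated with a short sketch. The Python example below is a hypothetical illustration under assumptions not stated in the paper: it uses OpenCV's SIFT detector (`cv2.SIFT_create`), a ratio test, and RANSAC homography estimation to map the selected plane from one panorama face to another; the function name, image file names, and region corners are placeholders.

```python
# Minimal sketch (not the authors' implementation) of propagating a
# user-selected planar region from one cubic-panorama face to another
# by matching interest-point features. Assumes OpenCV >= 4.4 with SIFT.

import cv2
import numpy as np

def propagate_region(src_img, dst_img, region_corners):
    """Estimate a homography from matched SIFT keypoints and map the
    user-selected planar region from src_img into dst_img."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(src_img, None)
    kp2, des2 = sift.detectAndCompute(dst_img, None)
    if des1 is None or des2 is None:
        return None  # no interest points detected in one of the faces

    # Nearest-neighbour matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return None  # too few correspondences for a homography

    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects matches that do not lie on the dominant plane.
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Warp the region's corner points into the destination panorama face.
    corners = np.float32(region_corners).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)

if __name__ == "__main__":
    # Placeholder file names for two faces of consecutive cubic panoramas.
    face_a = cv2.imread("pano_000_front.png", cv2.IMREAD_GRAYSCALE)
    face_b = cv2.imread("pano_001_front.png", cv2.IMREAD_GRAYSCALE)
    # Placeholder corners of the user-selected planar region in the first face.
    quad = [(100, 120), (400, 110), (410, 380), (105, 390)]
    print(propagate_region(face_a, face_b, quad))
```

The homography-based transfer shown here is only one plausible way to realize the planar propagation the abstract describes; the actual system may derive the 3-D plane differently from the matched features.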