AR Tips: Augmented First-Person View Task Instruction Videos

This research investigates applying Augmented Reality (AR) visualisation of spatial cues to first-person view task instruction videos. Instructional videos are becoming popular, used not only in formal education and training but also in everyday life, as more people seek how-to videos when they need help with a task. However, video clips are 2D visualisations of the task space, which can make it hard for the viewer to follow the instructions and match the objects in the video to those in the real-world task space. We propose augmenting task instruction videos with 3D visualisation of spatial cues to overcome this problem, focusing on creating and viewing first-person view instruction videos. As a proof of concept, we designed and implemented a prototype system, called AR Tips, which allows users to capture and watch first-person view instructional videos on a wearable AR device, augmented with 3D visual cues shown in situ in the task environment. Initial feedback from potential end users indicates that the prototype system is very easy to use and could be applied to a variety of scenarios.
