Halo Content: Context-aware Viewspace Management for Non-invasive Augmented Reality

In mobile augmented reality, text and content placed in a user's immediate field of view through a head-worn display can interfere with day-to-day activities. In particular, messages, notifications, or navigation instructions overlaid in the central field of view can become a barrier to effective face-to-face meetings and everyday conversation. Many text and view management methods attempt to improve text viewability, but fail to provide a non-invasive, personal experience for the user. In this paper, we introduce Halo Content, a method that proactively manages the movement of multiple elements such as e-mails, texts, and notifications to ensure they do not interfere with interpersonal interactions. Through a unique combination of face detection, integrated layouts, and automated content movement, virtual elements are actively moved so that they do not occlude conversation partners' faces or gestures. Unlike other methods, which often require tracking or prior knowledge of the scene, our approach can deal with multiple conversation partners in unknown, dynamic situations. In a preliminary experiment with 14 participants, we show that the Halo Content algorithm results in a 54.8% reduction in the number of times content interfered with conversations compared to standard layouts.
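The abstract describes the pipeline only at a high level. The following is a minimal sketch of the core idea, assuming OpenCV's stock Haar-cascade face detector and a simple horizontal-nudge repositioning rule; both are illustrative stand-ins, not the authors' implementation, and names such as exclusion_zones, reposition, and MARGIN are hypothetical.

```python
# Sketch: keep a virtual content box out of padded face regions.
# Assumptions (not from the paper): Haar-cascade detection, a fixed
# padding margin, and sliding content horizontally until it is clear.
import cv2

MARGIN = 0.25  # hypothetical padding around each face, as a fraction of face size

def exclusion_zones(frame_gray, cascade):
    """Detect faces and return padded (x, y, w, h) boxes content must avoid."""
    zones = []
    for (x, y, w, h) in cascade.detectMultiScale(frame_gray, 1.1, 5):
        pad_w, pad_h = int(w * MARGIN), int(h * MARGIN)
        zones.append((x - pad_w, y - pad_h, w + 2 * pad_w, h + 2 * pad_h))
    return zones

def overlaps(a, b):
    """Axis-aligned rectangle intersection test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def reposition(content, zones, frame_w):
    """Nudge a content box rightward until it clears every exclusion zone."""
    x, y, w, h = content
    step = 10  # pixels per nudge; a real system would animate this smoothly
    while any(overlaps((x, y, w, h), z) for z in zones) and x + w < frame_w:
        x += step
    return (x, y, w, h)

if __name__ == "__main__":
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    content = (50, 50, 200, 80)  # a hypothetical notification box
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        zones = exclusion_zones(gray, cascade)
        x, y, w, h = reposition(content, zones, frame.shape[1])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("halo-sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

This sketch only avoids face bounding boxes and resolves conflicts along one axis; the paper's method additionally avoids gestures, coordinates multiple content elements in an integrated layout, and handles several conversation partners at once.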
