[POSTER] Vergence-Based AR X-ray Vision

The ideal AR x-ray vision should enable users to clearly observe and grasp not only occludees but also occluders. We propose a novel selective visualization method that renders both the occludee and occluder layers with dynamic opacity depending on the user's gaze depth. Using gaze depth as the trigger for layer selection has an essential advantage over gestures or spoken commands: it avoids collisions between the user's intentional commands and unintentional actions. In an experiment with a visual paired-comparison task, our method achieved a 20% higher success rate and significantly reduced the average task completion time by 30% compared with a non-selective method that rendered the occluder at a constant 50% transparency.
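The poster does not specify how gaze depth is estimated or how it is mapped to layer opacity. The sketch below shows one plausible reading: fixation depth is recovered from the binocular vergence angle, and each layer's opacity falls off smoothly with the distance between the gaze depth and that layer's depth. All function names and parameters (gaze_depth_from_vergence, layer_opacity, sigma_m, min_alpha, the Gaussian falloff itself) are illustrative assumptions, not details from the paper.

```python
import math

def gaze_depth_from_vergence(ipd_m: float, vergence_angle_rad: float) -> float:
    """Estimate fixation depth from the binocular vergence angle,
    assuming symmetric convergence: depth = (IPD / 2) / tan(angle / 2)."""
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def layer_opacity(gaze_depth_m: float, layer_depth_m: float,
                  sigma_m: float = 0.15, min_alpha: float = 0.1) -> float:
    """Map the distance between gaze depth and a layer's depth to an
    opacity in [min_alpha, 1.0]. A Gaussian falloff is one plausible
    choice; sigma_m and min_alpha are illustrative, not from the paper."""
    d = gaze_depth_m - layer_depth_m
    weight = math.exp(-(d * d) / (2.0 * sigma_m * sigma_m))
    return min_alpha + (1.0 - min_alpha) * weight

def render_opacities(gaze_depth_m: float,
                     occluder_depth_m: float,
                     occludee_depth_m: float) -> tuple[float, float]:
    """Return (occluder_alpha, occludee_alpha): the layer the user
    converges on is rendered near-opaque, while the other fades out."""
    return (layer_opacity(gaze_depth_m, occluder_depth_m),
            layer_opacity(gaze_depth_m, occludee_depth_m))

# Example: an occluding wall at 1.0 m hides an object at 2.0 m.
# Converging on the wall keeps it opaque; converging at 2.0 m
# fades the wall and reveals the occludee.
print(render_opacities(1.0, 1.0, 2.0))  # occluder dominant
print(render_opacities(2.0, 1.0, 2.0))  # occludee dominant
```

Because selection is driven purely by where the eyes converge, no explicit command is needed, which is the collision-avoidance property the abstract highlights.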
