Disambiguation Techniques for Freehand Object Manipulations in Virtual Reality
Tovi Grossman | Ravin Balakrishnan | Di Laura Chen
[1] Miguel A. Nacenta, et al. Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives, 2019, CHI.
[2] Maneesh Agrawala, et al. SceneSuggest: Context-driven 3D Scene Design, 2017, arXiv.
[3] Rubaiat Habib Kazi, et al. MagicalHands: Mid-Air Hand Gestures for Animating in VR, 2019, UIST.
[4] Takeo Igarashi, et al. A suggestive interface for 3D drawing, 2001, SIGGRAPH Courses.
[5] Sebastian Günther, et al. Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays, 2019, CHI.
[6] Abhishek Ranjan, et al. A suggestive interface for image guided 3D sketching, 2004, CHI.
[7] David Kim, et al. HoloDesk: direct 3d interactions with a situated see-through display, 2012, CHI.
[8] Meredith Ringel Morris, et al. ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures, 2009, ITS '09.
[9] Zhihan Lv, et al. Multimodal Hand and Foot Gesture Interaction for Handheld Devices, 2014, TOMM.
[10] Olivier Bau, et al. OctoPocus: a dynamic guide for learning gesture-based command sets, 2008, UIST '08.
[11] Steven K. Feiner, et al. Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality, 2003, ICMI '03.
[12] Dinesh K. Pai, et al. FootSee: an interactive animation system, 2003, SCA '03.
[13] Gregory D. Abowd, et al. Interaction techniques for ambiguity resolution in recognition-based interfaces, 2007, SIGGRAPH '07.
[14] Robert W. Lindeman, et al. Exploring natural eye-gaze-based interaction for immersive virtual reality, 2017, IEEE Symposium on 3D User Interfaces (3DUI).
[15] Hemant Bhaskar Surale, et al. Experimental Analysis of Barehand Mid-air Mode-Switching Techniques in Virtual Reality, 2019, CHI.
[16] Päivi Majaranta, et al. Gaze Interaction and Applications of Eye Tracking - Advances in Assistive Technologies, 2011.
[17] Mark Billinghurst, et al. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality, 2018, CHI.
[18] Meredith Ringel Morris, et al. User-defined gestures for surface computing, 2009, CHI.
[19] Takeo Igarashi, et al. Global beautification of layouts with interactive ambiguity resolution, 2014, UIST.
[20] Mark Billinghurst, et al. Grasp-Shell vs gesture-speech: A comparison of direct and indirect natural interaction techniques in augmented reality, 2014, IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
[21] Paolo Dario, et al. A Survey of Glove-Based Systems and Their Applications, 2008, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
[22] Sylvain Paris, et al. 6D hands: markerless hand-tracking for computer aided design, 2011, UIST.
[23] Heedong Ko, et al. "Move the couch where?": developing an augmented reality multimodal interface, 2006, IEEE/ACM International Symposium on Mixed and Augmented Reality.
[24] Olivier Bau, et al. Arpège: learning multitouch chord gestures vocabularies, 2013, ITS.
[25] Arindam Dey, et al. The Effects of Sharing Awareness Cues in Collaborative Mixed Reality, 2019, Front. Robot. AI.
[26] Adam Fourney, et al. These Aren't the Commands You're Looking For: Addressing False Feedforward in Feature-Rich Software, 2015, UIST.
[27] Yoshifumi Kitamura, et al. GyroWand: IMU-based Raycasting for Augmented Reality Head-Mounted Displays, 2015, SUI.
[29] Patrick Olivier, et al. Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor, 2012, UIST.
[30] Sandra G. Hart, et al. Nasa-Task Load Index (NASA-TLX); 20 Years Later, 2006.
[31] Tovi Grossman, et al. Medusa: a proximity-aware multi-touch tabletop, 2011, UIST.
[32] Hans-Werner Gellersen, et al. Gaze + pinch interaction in virtual reality, 2017, SUI.
[33] Andy Cockburn, et al. User-defined gestures for augmented reality, 2013, INTERACT.
[34] Scott E. Hudson, et al. Monte Carlo methods for managing interactive state, action and feedback under uncertainty, 2011, UIST '11.
[35] Marcos Serrano, et al. Exploring the use of hand-to-face input for interacting with head-worn displays, 2014, CHI.
[36] Hans-Werner Gellersen, et al. Three-Point Interaction: Combining Bi-manual Direct Touch with Gaze, 2016, AVI.
[37] Scott E. Hudson, et al. A framework for robust and flexible handling of inputs with uncertainty, 2010, UIST.
[38] Oleg Spakov, et al. Enhanced gaze interaction using simple head gestures, 2012, UbiComp.
[39] Ramakrishnan Mukundan, et al. 3D gesture interaction for handheld augmented reality, 2014, SIGGRAPH ASIA Mobile Graphics and Interactive Applications.
[40] Jürgen Steimle, et al. More than touch: understanding how people use skin as an input surface for mobile computing, 2014, CHI.
[41] Steven K. Feiner, et al. SenseShapes: using statistical geometry for object selection in a multimodal augmented reality, 2003, IEEE and ACM International Symposium on Mixed and Augmented Reality.
[42] Roope Raisamo, et al. Appropriateness of foot interaction for non-accurate spatial tasks, 2004, CHI EA '04.
[43] James Arvo, et al. Fluid sketches: continuous recognition and morphing of simple hand-drawn shapes, 2000, UIST '00.
[44] Jennifer Mankoff, et al. Providing integrated toolkit-level support for ambiguity in recognition-based interfaces, 2000, CHI Extended Abstracts.
[45] Elizabeth D. Mynatt, et al. Side views: persistent, on-demand previews for open-ended tasks, 2002, UIST '02.
[46] Desney S. Tan, et al. Skinput: appropriating the body as an input surface, 2010, CHI.
[47] Hyunjeong Kim, et al. Towards more natural digital content manipulation via user freehand gestural interaction in a living room, 2013, UbiComp.
[48] Yuta Sugiura, et al. CheekInput: turning your cheek into an input surface by embedded optical sensors on a head-mounted display, 2017, VRST.
[49] Hans-Werner Gellersen, et al. Feet movement in desktop 3D interaction, 2014, IEEE Symposium on 3D User Interfaces (3DUI).
[50] Xiaojuan Ma, et al. VirtualGrasp: Leveraging Experience of Interacting with Physical Objects to Facilitate Digital Object Retrieval, 2018, CHI.
[51] Kevin L. Novins, et al. Polygon recognition in sketch-based interfaces with immediate and continuous feedback, 2003, GRAPHITE '03.
[52] Maria Isabel Saludares, et al. Interaction techniques using head gaze for virtual reality, 2016, IEEE Region 10 Symposium (TENSYMP).
[53] Robert Xiao, et al. Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions, 2015, ICMI.
[54] Ian Williams, et al. Analysis of Medium Wrap Freehand Virtual Object Grasping in Exocentric Mixed Reality, 2016, IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
[55] Richard A. Bolt, et al. "Put-that-there": Voice and gesture at the graphics interface, 1980, SIGGRAPH '80.
[56] Zhihan Lv, et al. Wearable Smartphone: Wearable Hybrid Framework for Hand and Foot Gesture Interaction on Smartphone, 2013, IEEE International Conference on Computer Vision Workshops.
[57] Andrew Wilson, et al. MirageTable: freehand interaction on a projected augmented reality tabletop, 2012, CHI.