Using relative head and hand-target features to predict intention in 3D moving-target selection
Jonathan W. Kelly | James H. Oliver | Samir Garbaya | Frédéric Mérienne | Juan Sebastián Casallas
[1] Carl Gutwin, et al. The Effects of Feedback on Targeting with Multiple Moving Targets, 2004, Graphics Interface.
[2] Ravin Balakrishnan, et al. Fitts' law and expanding targets: Experimental studies and designs for user interfaces, 2005, ACM Trans. Comput.-Hum. Interact. (TOCHI).
[3] Alberto Maria Segre, et al. Programs for Machine Learning, 1994.
[4] Wen-Huang Cheng, et al. AttachedShock: facilitating moving targets acquisition on augmented reality devices using goal-crossing actions, 2012, ACM Multimedia.
[5] Dominique Bechmann, et al. SPEED: prédiction de cibles [target prediction], 2011, IHM.
[6] Jie Zhu, et al. Head orientation and gaze direction in meetings, 2002, CHI Extended Abstracts.
[7] Tovi Grossman, et al. Comet and target ghost: techniques for selecting moving targets, 2011, CHI.
[8] James D. Foley, et al. The human factors of computer graphics interaction techniques, 1984, IEEE Computer Graphics and Applications.
[9] Rainer Stiefelhagen, et al. Pointing gesture recognition based on 3D-tracking of face, hands and head orientation, 2003, ICMI '03.
[10] P. Fitts. The information capacity of the human motor system in controlling the amplitude of movement, 1954, Journal of Experimental Psychology.
[11] Raimund Dachselt, et al. Use your head: tangible windows for 3D information spaces in a tabletop environment, 2012, ITS.
[12] Emmanuel Pietriga, et al. High-precision pointing on large wall displays using small handheld devices, 2013, CHI.
[13] P. Fitts. The information capacity of the human motor system in controlling the amplitude of movement (reprint of the 1954 article), 1992, Journal of Experimental Psychology: General.
[14] Jonathan W. Kelly, et al. Towards a Model for Predicting Intention in 3D Moving-Target Selection Tasks, 2013, HCI.
[15] R. Hyman. Stimulus information as a determinant of reaction time, 1953, Journal of Experimental Psychology.
[16] Donald A. Norman. The Design of Everyday Things, 1988.
[17] Diogo Cabral, et al. Real-time annotation of video objects on tablet computers, 2012, MUM.
[18] Michael Ortega-Binderberger, et al. Rake cursor: improving pointing performance with concurrent input channels, 2009, CHI.
[19] Ivan Poupyrev, et al. 3D User Interfaces: Theory and Practice, 2004.
[20] Michael Victor Ilich. Moving Target Selection in Interactive Video, 2009.
[21] Frits H. Post, et al. IntenSelect: using dynamic object rating for assisting 3D object selection, 2005, EGVE'05.
[22] R. Johansson, et al. Eye–Hand Coordination in Object Manipulation, 2001, The Journal of Neuroscience.
[23] Yves Guiard, et al. Fitts' law 50 years later: applications and contributions from human-computer interaction, 2004, Int. J. Hum. Comput. Stud.
[24] Vincent Pierlot, et al. I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user's eyes, 2012, International Conference on 3D Imaging (IC3D).
[25] Mohammed Waleed Kadous. Temporal classification: extending the classification paradigm to multivariate time series, 2002.
[26] J. Ross Quinlan. C4.5: Programs for Machine Learning, 1992.
[27] B. J. McFadyen, et al. Visuomotor control when reaching toward and grasping moving targets, 1996, Acta Psychologica.
[28] Edward Lank, et al. Endpoint prediction using motion kinematics, 2007, CHI.
[29] Ian H. Witten, et al. The WEKA data mining software: an update, 2009, SIGKDD Explorations.
[30] M. Just, et al. The role of eye-fixation research in cognitive psychology, 1976.
[31] Yves Guiard, et al. Preface: Fitts' law 50 years later: Applications and contributions from human-computer interaction, 2004.
[32] Lei Liu, et al. Insights from Dividing 3D Goal-Directed Movements into Meaningful Phases, 2009, IEEE Computer Graphics and Applications.
[33] J. Krakauer, et al. Error correction, sensory prediction, and adaptation in motor control, 2010, Annual Review of Neuroscience.
[34] H. Jorke, et al. Advanced Stereo Projection Using Interference Filters, 2008, 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video.
[35] Anthony Steed, et al. An assessment of eye-gaze potential within immersive virtual environments, 2007, TOMCCAP.
[36] Yoshifumi Kitamura, et al. Two-Part Models Capture the Impact of Gain on Pointing Performance, 2012, ACM Trans. Comput.-Hum. Interact. (TOCHI).
[37] Judy M. Vance, et al. VR JuggLua: A framework for VR applications combining Lua, OpenSceneGraph, and VR Juggler, 2012, 5th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS).
[38] Carl Gutwin, et al. Assessing target acquisition and tracking performance for complex moving targets in the presence of latency and jitter, 2012, Graphics Interface.
[39] Laurence Nigay, et al. Using the user's point of view for interaction on mobile devices, 2011, IHM.
[40] H. Wilson, et al. Perception of head orientation, 2000, Vision Research.
[41] Thomas G. Dietterich. What is machine learning?, 2020, Archives of Disease in Childhood.
[42] Michael Ortega, et al. Hook: Heuristics for selecting 3D moving objects in dense target environments, 2013, IEEE Symposium on 3D User Interfaces (3DUI).
[43] David Noy. Predicting user intentions in graphical user interfaces using implicit disambiguation, 2001, CHI Extended Abstracts.
[44] Sidney S. Fels, et al. Moving Target Selection in 2D Graphical User Interfaces, 2011, INTERACT.
[45] Shumin Zhai, et al. Manual and gaze input cascaded (MAGIC) pointing, 1999, CHI '99.