HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings

Eye fixations are the basic building block of any eye tracking research. These fixations are derived from finer-grained data gathered by the eye tracker device: the raw gaze data. Many algorithms can transform raw gaze data into eye fixations, but they require one or more thresholds to be set, and knowledge of the most appropriate values for these thresholds is necessary for the algorithms to generate the desired output. This paper examines how different settings of the two thresholds required by dispersion-threshold identification (I-DT) algorithms, the dispersion threshold and the duration threshold, affect the generated eye fixations. Since this work is in its infancy, the goal of this paper is to generate and visualize the result of each setting and leave it to readers to decide which setting best fits their future eye tracking research.
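The paper itself does not include code, but the I-DT procedure it refers to is well documented (Salvucci and Goldberg, 2000): a window of gaze samples is grown as long as its spatial dispersion stays below the dispersion threshold, and the window is accepted as a fixation only if it also spans at least the duration threshold. Below is a minimal Python sketch of that procedure, assuming gaze samples arrive as (timestamp, x, y) tuples; the function names and default threshold values are illustrative, not taken from the paper.

```python
from typing import List, Tuple

Sample = Tuple[float, float, float]  # (timestamp, x, y)

def dispersion(points: List[Sample]) -> float:
    """Dispersion of a window: (max_x - min_x) + (max_y - min_y)."""
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples: List[Sample],
                  max_dispersion: float = 1.0,   # illustrative default
                  min_duration: float = 0.1):    # illustrative default
    """Classify raw gaze samples into fixations with I-DT.

    max_dispersion is in the same units as x/y (e.g. degrees of visual
    angle); min_duration is in the same units as t (e.g. seconds).
    Returns a list of (start_t, end_t, centroid_x, centroid_y) tuples.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Initialize a window just long enough to cover min_duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break  # not enough remaining samples for a fixation
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # skip past the accepted fixation window
        else:
            i += 1  # slide the window start forward by one sample
    return fixations
```

Running such a function over the same raw gaze data with a grid of (max_dispersion, min_duration) settings and visualizing the resulting fixations reproduces the kind of comparison the paper describes: lower thresholds fragment the data into many short fixations, while higher thresholds merge them into fewer, longer ones.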
