Location-based online identification of objects in the centre of visual attention using eye tracking
Modern mobile eye trackers calculate the point-of-regard relative to the current image obtained by a scene camera. They show where the wearer of the eye tracker is looking in this 2D image, but they fail to link the gaze to the object of interest in the environment. To understand the context of the wearer's current actions, human annotators therefore have to label the recorded fixations manually. This is very time consuming and also prevents interactive online use in HCI. A popular scenario for mobile eye tracking is the supermarket. Gidlöf et al. (2013) used this scenario to study visual behaviour during a decision process. De Beugher et al. (2012) developed an offline approach to automate the identification of fixated objects. For an online recommender system based on mobile eye tracking (Pfeiffer et al., 2013) that supports the user in a supermarket, however, the object of interest must be identified immediately. Our work addresses this issue by using location information to speed up the identification of the fixated object and, at the same time, to make the detection results more robust.
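The abstract does not include an implementation, but the core idea (prune the set of candidate objects by the wearer's position in the store before matching the gaze region) can be sketched as below. This is a minimal illustration in Python under stated assumptions, not the authors' method: the names (Product, nearby_products, identify_fixated_object), the store-coordinate model, and the toy template-matching score are all hypothetical.

```python
# Sketch: location-based pruning of product candidates before identifying
# the object at the point-of-regard. All names and the matching routine
# are illustrative assumptions, not the paper's implementation.
import math
from dataclasses import dataclass

import numpy as np


@dataclass
class Product:
    name: str
    location: tuple       # (x, y) shelf position in store coordinates (metres)
    template: np.ndarray  # reference image patch of the product


def nearby_products(products, position, radius=2.0):
    """Keep only products shelved within `radius` metres of the wearer."""
    return [p for p in products if math.dist(p.location, position) <= radius]


def match_score(roi, template):
    """Toy similarity: negative mean absolute pixel difference.
    Assumes roi and template have the same shape."""
    return -float(np.mean(np.abs(roi.astype(float) - template.astype(float))))


def identify_fixated_object(frame, gaze_xy, products, position, half=50):
    """Identify the product at the point-of-regard `gaze_xy` in the
    scene-camera `frame`, matching only location-filtered candidates."""
    x, y = gaze_xy
    # Crop a region of interest around the point-of-regard (clamped to frame).
    roi = frame[max(y - half, 0):y + half, max(x - half, 0):x + half]
    # Location pruning: a handful of nearby candidates instead of the
    # whole product database makes matching faster and less ambiguous.
    candidates = nearby_products(products, position)
    return max(candidates, key=lambda p: match_score(roi, p.template))
```

In a real system the toy matcher would be replaced by an object detector such as the one of De Beugher et al. (2012); the point of the sketch is only that location filtering shrinks the candidate set before any per-object detection runs, which is what makes the identification both faster and more robust.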
[1] Stijn De Beugher et al. Automatic analysis of eye-tracking data using object detection algorithms. In Proc. UbiComp '12, 2012.
[2] Kerstin Gidlöf et al. Using eye-tracking to trace a cognitive process: Gaze behavior during decision making in a natural environment. 2013.
[3] Jella Pfeiffer, Martin Meißner, et al. Mobile Recommendation Agents Making Online Use of Visual Attention Information at the Point of Sale. 2013.