Automated Visual Input Analysis for a Telediagnostic Robot System

The European project ReMeDi aims to develop a system that allows doctors to remotely perform physical and ultrasonography examinations by teleoperating a multifunctional robotic device at the patient's side. Beyond teleconferencing, haptic interfaces, force feedback, and multisensory data representing the remote environment provide proactive support for the doctor. Computer vision methods serve the critical need of perceiving the patient and estimating their pose, so that the position of the end effector can be related to the body. The environment is also observed to provide information about the robot's workspace. Furthermore, we approach the problem of analyzing the patient's facial expression with computer vision techniques to assess their emotional state and pain level during the examination. The interactive nature of the application requires the RGB and depth data (acquired by a Kinect mounted on the robot platform and a second camera aimed at the patient's face) to be processed in real time. Our research therefore focuses on developing and evaluating efficient and robust machine learning algorithms that meet these requirements.
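To make the depth-based pose estimation step more concrete, the following is a minimal, self-contained sketch, not the ReMeDi implementation: it assumes Shotton-style per-pixel depth-difference features fed into a random forest body-part classifier, and it trains on synthetic data in place of labeled Kinect depth frames. All names and label classes in the snippet are illustrative.

```python
# Minimal sketch (illustrative only): per-pixel body-part classification on a
# depth image using depth-difference features and a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def depth_features(depth, pixels, offsets):
    """Pairwise depth-difference features for the given pixels.

    For each pixel, every feature compares the depth at two offsets scaled by
    1 / depth(pixel), which makes the feature roughly invariant to the
    distance of the body from the camera.
    """
    h, w = depth.shape
    feats = np.empty((len(pixels), len(offsets)))
    for i, (y, x) in enumerate(pixels):
        d = depth[y, x]
        for j, (u, v) in enumerate(offsets):
            y1 = np.clip(int(y + u[0] / d), 0, h - 1)
            x1 = np.clip(int(x + u[1] / d), 0, w - 1)
            y2 = np.clip(int(y + v[0] / d), 0, h - 1)
            x2 = np.clip(int(x + v[1] / d), 0, w - 1)
            feats[i, j] = depth[y1, x1] - depth[y2, x2]
    return feats

# Synthetic stand-in for a Kinect depth frame (metres) and per-pixel part labels
# (e.g. 0 = torso, 1 = arm, 2 = background); real data would come from labeled frames.
depth = rng.uniform(0.5, 4.0, size=(120, 160))
labels = rng.integers(0, 3, size=(120, 160))

# Random feature offsets and a random subset of training pixels.
offsets = [(rng.uniform(-30, 30, 2), rng.uniform(-30, 30, 2)) for _ in range(32)]
pixels = [tuple(p) for p in rng.integers(0, [120, 160], size=(2000, 2))]

X = depth_features(depth, pixels, offsets)
y = np.array([labels[p] for p in pixels])

clf = RandomForestClassifier(n_estimators=20, max_depth=12).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real pipeline the forest would be trained offline on large amounts of labeled depth data, and at run time the per-pixel part predictions would be aggregated into joint position proposals at frame rate, which is what makes this family of methods attractive for the real-time constraints described above.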
