Automated surgical skill assessment in RMIS training

Purpose: Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of expert surgeons' time and is prone to subjectivity. In this paper, we explore the use of different holistic features for automated skill assessment using only robot kinematic data, and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we propose a method for generating 'task highlights', which give surgeons more directed feedback on which segments had the most effect on the final skill score.

Methods: We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four types of holistic features derived from robot kinematic data: sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are used both for skill classification and for exact skill score prediction. Along with using these features individually, we also evaluate performance with our proposed weighted combination technique. The task highlights are produced using DCT features.

Results: Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Our proposed feature fusion strategy also significantly improves skill score prediction, achieving an average Spearman correlation coefficient of up to 0.61. Moreover, we provide an analysis of how the proposed task highlights relate to different surgical gestures within a task.

Conclusions: Holistic features capturing global information from robot kinematic data can successfully be used to evaluate surgeon skill in basic surgical tasks on the da Vinci robot. The presented framework can potentially enable real-time score feedback in RMIS training and help surgical trainees have more focused training.
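To make the pipeline concrete, below is a minimal sketch of the core ideas described in the abstract: extracting a holistic DCT descriptor from a kinematic trial, fusing per-feature-type score predictions with a weighted average, and evaluating with Spearman correlation. This is an illustration under stated assumptions, not the authors' exact implementation; the function names, the number of retained coefficients, and the fusion weights are all hypothetical, and the rank computation ignores ties for simplicity.

```python
import numpy as np

def dct_ii(x):
    """Plain DCT-II along axis 0 (time), capturing global frequency content."""
    n = x.shape[0]
    k = np.arange(n)[:, None]                     # frequency index (column)
    t = np.arange(n)[None, :] + 0.5               # sample index (row)
    basis = np.cos(np.pi / n * t * k)             # (n, n) cosine basis
    return basis @ x                              # (n, D) coefficients

def dct_features(kin, n_coeffs=10):
    """Holistic DCT descriptor: keep the lowest-frequency coefficients of
    each kinematic variable and flatten into one fixed-length vector.

    kin: (T, D) array -- T time steps, D kinematic variables
    (e.g. tool-tip positions/velocities)."""
    return dct_ii(kin)[:n_coeffs].ravel()         # shape (D * n_coeffs,)

def spearman(a, b):
    """Spearman correlation as Pearson correlation of ordinal ranks
    (no tie handling -- fine for continuous scores)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def fuse(pred_by_feature, weights):
    """Weighted average of score predictions from different feature types
    (e.g. SMT, DFT, DCT, ApEn); weights here are illustrative, not the
    paper's learned weighting."""
    num = sum(weights[k] * pred_by_feature[k] for k in pred_by_feature)
    return num / sum(weights.values())
```

In practice each feature type would feed its own regressor (e.g. support vector regression, as in reference work on skill score prediction), and the fused predictions would be compared against expert-assigned scores via the Spearman coefficient.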
