Knowledge Extraction from Task Narratives

One of the major difficulties in activity recognition is the lack of a model of the world in which activities and events are to be recognised. When the domain is fixed and repetitive, this information can be encoded manually as an ontology or a set of constraints. Often, however, new situations arise in which only part of the knowledge is common-sense and many domain-specific relations must be inferred. Humans can do this from short natural language descriptions of the scene or of the particular task to be performed. In this paper we apply a tool that extracts situation models and rules from natural language descriptions to a series of exercises in a surgical domain, where we want to distinguish the sequences of events that are not possible, those that are possible but incorrect according to the exercise, and those that correspond to the exercise or plan expressed by the natural language description. Preliminary results show that a large amount of valuable knowledge can be extracted automatically. This knowledge could be used to express domain knowledge and exercise descriptions in languages such as the event calculus, helping to bridge these high-level descriptions with the low-level events recognised from videos.
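To make the three-way distinction concrete, the following minimal Python sketch shows how precondition rules extracted from a task narrative could separate impossible sequences from possible-but-incorrect ones and from the prescribed plan. This is an illustrative assumption, not the tool evaluated in the paper: the event names, the PRECONDITIONS table, and the classify function are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's tool): classify a sequence
# of recognised events against (a) precondition rules extracted from a
# task narrative and (b) the plan the narrative describes.

# Precondition rules: an event is possible only once all of its
# preconditions have occurred earlier in the sequence.
PRECONDITIONS = {
    "cut_tissue": {"grasp_tissue"},   # must grasp before cutting
    "tie_knot": {"pass_needle"},      # must pass the needle before tying
}

# The event order the exercise description prescribes.
PLAN = ["grasp_tissue", "pass_needle", "tie_knot", "cut_tissue"]


def classify(sequence: list[str]) -> str:
    """Return 'impossible', 'possible_but_incorrect', or 'matches_plan'."""
    seen: set[str] = set()
    for event in sequence:
        missing = PRECONDITIONS.get(event, set()) - seen
        if missing:
            return "impossible"       # violates a hard world constraint
        seen.add(event)
    # Physically possible; check whether it is the prescribed exercise.
    return "matches_plan" if sequence == PLAN else "possible_but_incorrect"


if __name__ == "__main__":
    print(classify(["cut_tissue"]))                  # impossible
    print(classify(["grasp_tissue", "cut_tissue"]))  # possible_but_incorrect
    print(classify(PLAN))                            # matches_plan
```

In a fuller treatment these rules would be stated in a formalism such as the event calculus rather than hard-coded tables, so that the same representation can be matched against low-level events recognised from video.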
