Demonstration-based GUI Task Automation Through Interactive Training

Non-programming users should be able to train virtual robots to perform computer-based tasks for them. One might expect training in an all-digital, noise-free environment to be easy. In practice, one-shot learning of a task is hard because so many mouse clicks and key presses are ambiguous, and recognizing the individual actions in a task is not enough to reproduce the task or to generalize it. For example, the intended reproduction of a copy-paste task could mean changing what is copied, where it is pasted, or both. We propose a user-in-the-loop framework that gives virtual robots computer vision, letting a person easily "bottle" a GUI task so that it can be applied repeatedly in the future. While pure programming by demonstration is still unrealistic, our quantitative and qualitative experiments show that non-programming users are both willing and able to answer the follow-up queries our system poses. Our models of events and appearance are surprisingly simple, but combine effectively to cope with varying amounts of supervision. The best available baseline, Sikuli Slide, struggled with the majority of the tests in our user study. Our prototype successfully helped users accomplish simple linear tasks, complicated tasks (monitoring, looping, and mixed), and tasks that span multiple executables. Even when both systems could ultimately perform a task, users trained and refined ours in less time.
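To make the demonstrate-then-disambiguate idea concrete, below is a minimal Python sketch, not the paper's actual implementation. Every name in it (Action, ask_user, resolve) is a hypothetical illustration: a demonstration is recorded as a sequence of low-level actions, and whenever a single demonstration leaves an action's intended target ambiguous, the system poses a follow-up query to the user instead of guessing.

    # Hypothetical sketch of a user-in-the-loop disambiguation step; names
    # and structure are illustrative, not the paper's actual system.
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        kind: str          # e.g. "click", "type", "paste"
        target: str        # identifier for the screenshot patch around the event
        candidates: list = field(default_factory=list)  # plausible generalizations

    def ask_user(question: str, options: list) -> str:
        """Follow-up query: non-programmers pick from concrete options."""
        print(question)
        for i, opt in enumerate(options):
            print(f"  [{i}] {opt}")
        return options[int(input("choice> "))]

    def resolve(actions: list) -> list:
        """Replace each ambiguous recorded action with the user's intended one."""
        script = []
        for act in actions:
            if len(act.candidates) > 1:  # one demonstration was not enough
                act.target = ask_user(
                    f"For the '{act.kind}' step, which target did you mean?",
                    act.candidates,
                )
            script.append(act)
        return script

    if __name__ == "__main__":
        demo = [
            Action("click", "button_ok"),  # unambiguous: replay as recorded
            Action("paste", "cell_B2",
                   candidates=["cell_B2 (always)", "next empty cell"]),
        ]
        print([a.target for a in resolve(demo)])

In this sketch the click is replayed verbatim, while the paste step, whose generalization cannot be inferred from one example, triggers a query; this mirrors the copy-paste ambiguity discussed above.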

[1] Tessa Lau et al. Why PBD systems fail: Lessons learned for usable AI. 2008.

[2] Rob Miller et al. Sikuli: Using GUI screenshots for search and automation. UIST 2009.

[3] Scott P. Robertson et al. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1991.

[4] Mike Y. Chen et al. EverTutor: Automatically creating interactive guided tutorials on smartphones by user demonstration. CHI 2014.

[5] Morgan Dixon et al. Content and hierarchy in pixel-based methods for reverse engineering interface structure. CHI 2011.

[6] Tessa A. Lau et al. Sheepdog: Learning procedures for technical support. IUI 2004.

[7] Gordon W. Paynter et al. Automating iterative tasks with programming by demonstration. 2000.

[8] Henry Lieberman et al. Watch What I Do: Programming by Demonstration. 1993.

[9] Tom Yeh et al. Associating the visual representation of user interfaces with their internal structures and metadata. UIST 2011.

[10] Eser Kandogan et al. Koala: Capture, share, automate, personalize business processes on the web. CHI 2007.

[11] Li Wang et al. Discriminative human action segmentation and recognition using semi-Markov model. CVPR 2008.

[12] Tovi Grossman et al. Chronicle: Capture, exploration, and playback of document workflow histories. UIST 2010.

[13] Morgan Dixon et al. Prefab layers and Prefab annotations: Extensible pixel-based interpretation of graphical interfaces. UIST 2014.

[14] Tovi Grossman et al. Waken: Reverse engineering usage information and interface structure from software videos. UIST 2012.

[15] Scott E. Hudson et al. Automatically identifying targets users interact with during real world tasks. IUI 2010.

[16] Mira Dontcheva et al. Pause-and-play: Automatically linking screencast video tutorials with applications. UIST 2011.

[17] Rob Miller et al. GUI testing using computer vision. CHI 2010.

[18] Takeo Igarashi et al. Generating photo manipulation tutorials by demonstration. ACM Transactions on Graphics, 2009.

[19] Sumit Gulwani et al. Programming by Examples and its applications in data wrangling. Dependable Software Systems Engineering, 2016.

[20] Li Wang et al. Human action segmentation and recognition using discriminative semi-Markov models. International Journal of Computer Vision, 2011.

[21] Fernando De la Torre et al. Joint segmentation and classification of human actions in video. CVPR 2011.

[22] Larry S. Davis et al. Creating contextual help for GUIs using screenshots. UIST 2011.

[23] Eben M. Haber et al. CoScripter: Automating & sharing how-to knowledge in the enterprise. CHI 2008.