Inference of development activities from interaction with uninstrumented applications

Studying developers’ behavior in software development tasks is crucial for designing effective techniques and tools to support their daily work. In modern software development, developers frequently use different applications, including IDEs, web browsers, document applications (such as Word, Excel, and PDF readers), and other tools to complete their tasks. This creates significant challenges in collecting and analyzing developers’ behavior data. Researchers usually instrument software tools to log developers’ behavior for further study. This is feasible for studies of development activities in specific tools, but instrumenting all the software tools commonly used in real work settings is difficult and requires significant human effort. Furthermore, the collected behavior data consist of low-level, fine-grained event sequences, which must be abstracted into high-level development activities for further analysis; this abstraction is often performed manually or with simple heuristics. In this paper, we propose an approach that addresses these two challenges in collecting and analyzing developers’ behavior data. First, we use our ActivitySpace framework to improve the generalizability of data collection: ActivitySpace uses operating-system-level instrumentation to track developer interactions with a wide range of applications in real work settings. Second, we use a machine learning approach to reduce the human effort needed to abstract low-level behavior data. Specifically, given the sequential nature of the interaction data, we propose a Conditional Random Field (CRF) based approach to segment and label developers’ low-level actions into a set of basic yet meaningful development activities. To validate the generalizability of the proposed data collection approach, we deployed the ActivitySpace framework at an industry partner’s company and collected one week of real working data from ten professional developers across three actual software projects. Experiments with the collected data confirm that, given initial human-labeled training data, a CRF model can be trained to infer development activities from low-level actions with reasonable accuracy within and across developers and software projects. This suggests that the machine learning approach is promising for reducing the human effort required for behavior data analysis.
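
To make the sequence-labeling step concrete, the sketch below shows how a CRF can segment and label a stream of low-level interaction events into development activities. It is a minimal illustration, not the paper's implementation: it assumes the sklearn-crfsuite library, and the event schema (app name, input kind, window-title keyword), feature set, and activity labels are hypothetical placeholders.

```python
# Minimal sketch: CRF-based labeling of low-level interaction events.
# Assumes sklearn-crfsuite; the event schema and labels are illustrative.
import sklearn_crfsuite

def event_features(seq, i):
    """Build a feature dict for the i-th low-level event in a session."""
    ev = seq[i]
    feats = {
        "app": ev["app"],            # e.g. "eclipse", "chrome"
        "input": ev["input"],        # e.g. "keyboard", "mouse"
        "title_kw": ev["title_kw"],  # keyword mined from the window title
    }
    if i > 0:                        # context from the previous event
        feats["prev_app"] = seq[i - 1]["app"]
    else:
        feats["BOS"] = True          # beginning of sequence
    if i < len(seq) - 1:             # context from the next event
        feats["next_app"] = seq[i + 1]["app"]
    else:
        feats["EOS"] = True          # end of sequence
    return feats

def featurize(sessions):
    return [[event_features(s, i) for i in range(len(s))] for s in sessions]

# Toy training data: one session of low-level events, each hand-labeled
# with a high-level activity (labels are not the paper's exact taxonomy).
train_sessions = [[
    {"app": "eclipse", "input": "keyboard", "title_kw": "java"},
    {"app": "eclipse", "input": "mouse",    "title_kw": "debug"},
    {"app": "chrome",  "input": "keyboard", "title_kw": "stackoverflow"},
]]
train_labels = [["coding", "debugging", "web_search"]]

crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",
    c1=0.1, c2=0.1,                  # L1/L2 regularization strengths
    max_iterations=100,
    all_possible_transitions=True,
)
crf.fit(featurize(train_sessions), train_labels)

# Inference: label an unseen event sequence with development activities.
test_sessions = [[
    {"app": "chrome",  "input": "mouse",    "title_kw": "api-doc"},
    {"app": "eclipse", "input": "keyboard", "title_kw": "java"},
]]
print(crf.predict(featurize(test_sessions)))
```

The key design point the sketch captures is that the CRF conditions each label on features of neighboring events as well as the event itself, which is what lets it jointly segment and label the action stream rather than classifying each event in isolation.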
