Information Retrieval (IR) plays a pivotal role in diverse Software Engineering (SE) tasks such as bug localization, bug triaging, code retrieval, and requirements analysis. The choice of similarity measure is the core component of an IR technique, and the performance of any IR method depends critically on selecting a measure appropriate for the given application domain. Since different SE tasks operate on different document types (bug reports, software descriptions, source code, etc.) that often contain non-standard, domain-specific vocabulary, it is essential to understand which similarity measures work best for which SE documents.
This paper presents two case studies on the effect of different similarity measures on various SE documents w.r.t. two tasks: (i) project recommendation: finding similar GitHub projects, and (ii) bug localization: retrieving the buggy source file(s) corresponding to a bug report. These tasks involve a diverse combination of textual artifacts (e.g., descriptions, READMEs) and code artifacts (e.g., source code, APIs, import packages). We observe that the performance of IR models varies across artifact types. In general, context-aware models achieve better performance on textual artifacts, whereas simple keyword-based bag-of-words models perform better on code artifacts. The probabilistic ranking model BM25, in turn, performs best on mixtures of text and code artifacts.
We further investigate how such an informed choice of similarity measure impacts the performance of SE tools. In particular, we analyze two previously proposed tools for the project recommendation and bug localization tasks, both of which leverage diverse software artifacts, and observe that an informed choice of similarity measure indeed improves the performance of these existing SE tools.
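As a rough illustration of the keyword-based measures discussed above, the sketch below implements TF-IDF cosine similarity (a bag-of-words measure) and Okapi BM25 over a toy corpus. The tokenizer, the example documents, and the parameter values (k1=1.2, b=0.75, smoothed IDF) are illustrative assumptions, not the paper's actual experimental setup.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase whitespace tokenization; real SE pipelines typically also
    # split camelCase identifiers, stem, and remove stop words.
    return text.lower().split()

def tfidf_cosine(query, doc, corpus):
    """Cosine similarity between TF-IDF vectors of query and doc.

    Uses the smoothed IDF variant log((1+N)/(1+df)) + 1 so weights
    stay positive even for terms present in every document.
    """
    N = len(corpus)
    df = Counter()
    for d in corpus:
        df.update(set(tokenize(d)))

    def vec(text):
        tf = Counter(tokenize(text))
        return {t: tf[t] * (math.log((1 + N) / (1 + df[t])) + 1) for t in tf}

    q, d = vec(query), vec(doc)
    dot = sum(q[t] * d.get(t, 0.0) for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def bm25(query, doc, corpus, k1=1.2, b=0.75):
    """Okapi BM25 score of doc for query with standard parameters."""
    N = len(corpus)
    df = Counter()
    for d in corpus:
        df.update(set(tokenize(d)))
    avgdl = sum(len(tokenize(d)) for d in corpus) / N

    tf = Counter(tokenize(doc))
    dl = len(tokenize(doc))
    score = 0.0
    for t in tokenize(query):
        if t not in tf:
            continue
        idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
        score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
    return score

# Toy corpus of SE-flavored documents (hypothetical examples).
corpus = [
    "null pointer exception when parsing empty config file",
    "add readme section describing build instructions",
    "fix crash parsing config with missing keys",
]
query = "crash parsing config"
ranked = sorted(corpus, key=lambda d: bm25(query, d, corpus), reverse=True)
print(ranked[0])  # → fix crash parsing config with missing keys
```

Both measures rank the document sharing the most (and rarest) query terms highest; they differ in how they weight term frequency and document length, which is precisely where performance diverges across textual versus code artifacts.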