Static analysis tools find silly mistakes, confusing code, bad practices, and property violations, but developers and organizations do not necessarily care about every warning; whether a warning matters depends on how it affects code behavior and on other factors. In past work, we have tried to identify important warnings by asking users to rate each one as severe, low impact, or not a bug. In this paper, we observe that a user's judgment is more nuanced: it depends on whether the warning is feasible, whether it changes code behavior, whether it occurs in deployed code, and other factors. To model this better, we ask users to review warnings using a checklist that captures these factors and enables more detailed reviews. We find that reviews are consistent across users and across checklist questions, although some users disagree about whether certain bug classes should be fixed or filtered out.
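For illustration only, here is a minimal sketch of how checklist-based warning reviews of the kind described above might be recorded and compared for reviewer agreement. The field names (feasible, changes_behavior, in_deployed_code, should_fix) and the agreement function are assumptions chosen to mirror the factors mentioned in the abstract, not the instrument actually used in the study.

    # Hypothetical sketch: record per-reviewer checklist answers for each warning
    # and check whether reviewers agree on the overall fix/filter decision.
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass(frozen=True)
    class ChecklistReview:
        warning_id: str
        reviewer: str
        feasible: bool            # can the reported path actually execute?
        changes_behavior: bool    # would the flagged code misbehave at runtime?
        in_deployed_code: bool    # does the warning appear in shipped code?
        should_fix: bool          # reviewer's overall fix-or-filter decision

    def agreement_per_warning(reviews):
        """Group reviews by warning and report whether reviewers agree on fix/filter."""
        by_warning = defaultdict(list)
        for r in reviews:
            by_warning[r.warning_id].append(r.should_fix)
        return {wid: len(set(votes)) == 1 for wid, votes in by_warning.items()}

    reviews = [
        ChecklistReview("W1", "alice", True, True, True, True),
        ChecklistReview("W1", "bob",   True, True, True, True),
        ChecklistReview("W2", "alice", True, False, False, False),
        ChecklistReview("W2", "bob",   True, False, False, True),
    ]
    print(agreement_per_warning(reviews))  # {'W1': True, 'W2': False}

A structured record like this makes it possible to measure consistency per checklist question as well as on the final decision, which is the kind of analysis the abstract reports.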