A case study of post-deployment user feedback triage

Many software requirements are identified only after a product is deployed, once users have had a chance to try the software and provide feedback. Unfortunately, addressing such feedback is not always straightforward, even when a team is fully invested in user-centered design. To investigate what constrains a team's evolution decisions, we performed a 6-month field study of a team applying iterative user-centered design methods to the design, deployment, and evolution of a web application for a university community. Across interviews with the team, analyses of their bug reports, and further interviews with both users and non-adopters of the application, we found that most of the constraints on addressing user feedback emerged from conflicts between users' heterogeneous use of information and inflexible assumptions in the team's software architecture derived from earlier user research. These findings highlight the need for new approaches to expressing and validating assumptions from user research as software evolves.
