Software evaluation via users' feedback at runtime

Users' evaluation of software at runtime is a powerful tool that enables us to capture and communicate richer, up-to-date knowledge of how users view software throughout its life cycle. Users understand software as a means to meet their requirements; giving them a voice in the continuous runtime evaluation of software should therefore fit this level of abstraction. That is, users' evaluation feedback would mainly communicate their judgment on the role of the system in meeting their requirements. Such feedback could be used to make autonomous or semi-autonomous runtime adaptation decisions, or to support developers in making evolution and maintenance decisions. Within this picture, our research focuses on the development of a framework for modeling and eliciting users' evaluation feedback at runtime. This includes devising mechanisms to structure such feedback in a way that makes it easy for users to express and for developers to interpret. We motivate our work, articulate the problem and the set of research questions our research addresses, and describe the method we follow to answer them. We also discuss our initial results on the topic.
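As a concrete illustration of what "structuring evaluation feedback" at the requirements level of abstraction might look like, the sketch below models each feedback item as a user judgment tied to a specific requirement rather than to raw UI events. This is a hypothetical minimal design, not the framework proposed in the paper; the record fields and the aggregation function are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: one feedback record ties a user's judgment
# to a specific requirement, so developers and adaptation logic can
# interpret it at the requirements level of abstraction.
@dataclass
class EvaluationFeedback:
    requirement_id: str  # requirement being judged, e.g. "R12: notify on low stock"
    fulfilled: bool      # user's verdict: does the system meet this requirement?
    rating: int          # coarse 1-5 satisfaction score
    comment: str = ""    # optional free-text rationale
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def aggregate(feedback: list[EvaluationFeedback]) -> dict[str, float]:
    """Mean rating per requirement: a simple collective signal that an
    adaptation engine or a maintainer could act on."""
    ratings_by_req: dict[str, list[int]] = {}
    for fb in feedback:
        ratings_by_req.setdefault(fb.requirement_id, []).append(fb.rating)
    return {req: sum(rs) / len(rs) for req, rs in ratings_by_req.items()}

# Example: two users judge the same (hypothetical) requirement "R12".
reports = [
    EvaluationFeedback("R12", fulfilled=True, rating=4),
    EvaluationFeedback("R12", fulfilled=False, rating=2, comment="too slow"),
]
print(aggregate(reports))  # {'R12': 3.0}
```

Keying feedback to a requirement identifier is what keeps the data interpretable: developers can trace dissatisfaction back to a requirement, and runtime adaptation can switch between alternative means of satisfying it.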
